Chapter 4: ZFS Datasets
With ordinary filesystems you create partitions to separate different types of data, apply different optimizations to them, and limit how much of your space the partition can consume. Each partition receives a specific amount of space from the disk. We’ve all been there. We make our best guesses at how much disk space each partition on this system will need next month, next year, and five years from now. Fast forward to the future, and the amount of space you decided to give each partition is more than likely wrong. A partition without enough space for all its data sends you adding disks or moving data, complicating system management. When a partition has too much space, you kick yourself and use it as a dumping ground for stuff you’d rather have elsewhere. More than one of Lucas’ UFS2 systems has /usr/ports as a symlink to somewhere in /home. Jude usually ends up with some part of /var living in /usr/local/var.
ZFS solves this problem by pooling free space, giving your partitions flexibility impossible with more common filesystems. Each ZFS dataset you create consumes only the space required to store the files within it. Each dataset has access to all of the free space in the pool, eliminating your worries about the size of your partitions. You can limit the size of a dataset with a quota or guarantee it a minimum amount of space with a reservation, as discussed in Chapter 6.
Regular filesystems use the separate partitions to establish different policies and optimizations for the different types of data. /var contains often-changing files like logs and databases. The root filesystem needs consistency and safety over performance. Over in /home, anything goes. Once you establish a policy for a traditional filesystem, though, it’s really hard to change. The tunefs(8) utility for UFS requires the filesystem be unmounted to make changes. Some characteristics, such as the number of inodes, just cannot be changed after the filesystem has been created.
The core problem of traditional filesystems distills to inflexibility. ZFS datasets are almost infinitely flexible.
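That flexibility extends to space management. Quotas and reservations are ordinary properties, set and changed while the dataset is live. As a quick taste (Chapter 6 covers the details), here’s a sketch using a hypothetical mypool/home dataset:
# zfs set quota=10G mypool/home
# zfs set reservation=2G mypool/home
The quota caps mypool/home and its descendants at 10 GB, while the reservation guarantees it 2 GB of the pool’s space.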
Datasets
A dataset is a named chunk of data. This data might resemble a traditional filesystem, with files, directories, and permissions and all that fun stuff. It could be a raw block device, or a copy of other data, or anything you can cram onto a disk.
ZFS uses datasets much like a traditional filesystem might use partitions. Need a policy for /usr and a separate policy for /home? Make each a dataset. Need a block device for an iSCSI target? That’s a dataset. Want a copy of a dataset? That’s another dataset.
Datasets have a hierarchical relationship. A single storage pool is the parent of each top-level dataset. Each dataset can have child datasets. Datasets inherit many characteristics from their parent, as we’ll see throughout this chapter.
You’ll perform all dataset operations with the zfs(8) command. This command has all sorts of sub-commands.
Dataset Types
ZFS currently has five types of datasets: filesystems, volumes, snapshots, clones, and bookmarks.
A filesystem dataset resembles a traditional filesystem. It stores files and directories. A ZFS filesystem has a mount point and supports traditional filesystem characteristics like read-only, restricting setuid binaries, and more. Filesystem datasets also hold other information, including permissions, timestamps for file creation and modification, NFSv4 Access Control Flags, chflags(2), and the like.
A ZFS volume, or zvol, is a block device. In an ordinary filesystem, you might create a file-backed filesystem for iSCSI or a special-purpose UFS partition. On ZFS, these block devices bypass all the overhead of files and directories and reside directly on the underlying pool. Zvols get a device node, skipping the FreeBSD memory devices used to mount disk images.
A snapshot is a read-only copy of a dataset from a specific point in time. Snapshots let you retain previous versions of your filesystem and the files therein for later use. Snapshots use an amount of space based on the difference between the current filesystem and what’s in the snapshot.
A clone is a new dataset based on a snapshot of an existing dataset, allowing you to fork a filesystem. You get an extra copy of everything in the dataset. You might clone the dataset containing your production web site, giving you a copy of the site that you can hack on without touching the production site. A clone only consumes space to store the differences from the original snapshot it was created from. Chapter 7 covers snapshots, clones, and bookmarks.
Why Do I Want Datasets?
You obviously need datasets. Putting files on the disk requires a filesystem dataset. And you probably want a dataset for each traditional Unix partition, like /usr and /var. But with ZFS, you want a lot of datasets. Lots and lots and lots of datasets. This would be cruel madness with a traditional filesystem, with its hard-coded limits on the number of partitions and the inflexibility of those partitions. But using many datasets increases the control you have over your data.
Each ZFS dataset has a series of properties that control its operation, allowing the administrator to control how the dataset performs and how carefully it protects its data. You can tune each dataset exactly as you can with a traditional filesystem. Dataset properties work much like pool properties.
The sysadmin can delegate control over individual datasets to another user, allowing the user to manage it without root privileges. If your organization has a whole bunch of project teams, you can give each project manager their own chunk of space and say, “Here, arrange it however you want.” Anything that reduces our workload is a good thing.
Many ZFS features, such as replication and snapshots, operate on a per-dataset basis. Separating your data into logical groups makes it easier to use these ZFS features to support your organization.
Take the example of a web server with dozens of sites, each maintained by different teams. Some teams are responsible for multiple sites, while others have only one. Some people belong to multiple teams. If you follow the traditional filesystem model, you might create a /webserver dataset, put everything in it, and control access with group permissions and sudo(8). You’ve lived like this for decades, and it works, so why change?
But create a dataset for each team, and give each site its own dataset within that parent dataset, and possibilities multiply.
A team needs a copy of a web site for testing? Clone it. With traditional filesystems, you’d have to copy the whole site directory, doubling the amount of disk needed for the site and taking much, much longer. A clone uses only the amount of space for the differences between the sites and appears instantaneously.
The team is about to deploy a new version of a site, but wants a backup of the old site? Create a snapshot. This new site probably uses a whole bunch of the same files as the old one, so you’ll reduce disk space usage. Plus, when the deployment goes horribly wrong, you can restore the old version by rolling back to the snapshot.
A particular web site needs filesystem-level performance tweaks, or compression, or some locally created property? Set it for that site.
You might create a dataset for each team, and then let the teams create their own child datasets for their own sites. You can organize your datasets to fit your people, rather than organizing your people to fit your technology.
When you must change a filesystem setting (property) on all of the sites, make the change to the parent dataset and let the children inherit it.
The same benefits apply to user home directories.
You can also move datasets between machines. Your web sites overflow the web server? Send half the datasets, along with their custom settings and all their clones and snapshots, to the new server.
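The mechanics of sending datasets are a topic of their own, but the shape of such a move is simple: snapshot the dataset, then stream it to the other machine. A rough sketch, assuming a hypothetical www/site1 dataset and a host named newserver:
# zfs snapshot -r www/site1@migrate
# zfs send -R www/site1@migrate | ssh newserver zfs receive -d newpool
The -R flag builds a replication stream that carries the dataset’s properties, snapshots, and descendants along with the data.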
There is one disadvantage to using many filesystem datasets. When you move a file within a filesystem, the file is renamed. Moving files between separate filesystems requires copying the file to a new location and deleting it from the old, rather than just renaming it. Inter-dataset file copies take more time and require more free space. But that’s trivial against all the benefits ZFS gives you with multiple datasets. This problem exists on other filesystems as well, but hosts using most other filesystems have only a few partitions, making it less obvious.
Viewing Datasets
The zfs list command shows all of the datasets, and some basic information about them.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 420M 17.9G 96K none
mypool/ROOT 418M 17.9G 96K none
mypool/ROOT/default 418M 17.9G 418M /
...
The first field shows the dataset’s name.
Under USED and REFER you find information about how much disk space the dataset uses. One downside to ZFS’ incredible flexibility and efficiency is that its interpretation of disk space usage seems somewhat surreal if you don’t understand it. Chapter 6 discusses disk space and strategies to use it.
The AVAIL column shows how much space remains free in the pool or dataset.
Finally, MOUNTPOINT shows where the dataset should be mounted. That doesn’t mean that the dataset is mounted, merely that if it were to be mounted, this is where it would go. (Use zfs mount to see all mounted ZFS filesystems.)
If you give a dataset as an argument, zfs list shows only that specific dataset.
# zfs list mypool/lamb
NAME USED AVAIL REFER MOUNTPOINT
mypool/lamb 192K 17.9G 96K /lamb
Restrict the type of dataset shown with the -t flag and the type. You can show filesystems, volumes, or snapshots. Here we display snapshots, and only snapshots.
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
zroot/var/log/db@backup 0 - 10.0G -
Now that you can see filesystems, let’s make some.
Creating, Moving, and Destroying Datasets
Use the zfs create command to create any dataset. We’ll look at snapshots, clones, and bookmarks in Chapter 7, but let’s discuss filesystems and volumes now.
Creating Filesystems
Filesystems are the most common type of dataset on most systems. Everyone needs a place to store and organize files. Create a filesystem dataset by specifying the pool and the filesystem name.
# zfs create mypool/lamb
This creates a new dataset, lamb, on the ZFS pool called mypool. If the pool has a default mount point, the new dataset is mounted by default (see “Mounting ZFS Filesystems” later this chapter).
# mount | grep lamb
mypool/lamb on /lamb (zfs, local, noatime, nfsv4acls)
The mount settings in parentheses are ZFS properties, inherited from the parent dataset. To create a child filesystem, give the full path to the parent filesystem.
# zfs create mypool/lamb/baby
The dataset inherits many of its characteristics, including its mount point, from the parent, as we’ll see in “Parent/Child Relationships” later in this chapter.
Creating Volumes
Use the -V flag and a volume size to tell zfs create that you want to create a volume. Give the full path to the volume dataset.
# zfs create -V 4G mypool/avolume
Zvols show up in a dataset list like any other dataset. You can tell zfs list to show only zvols by adding the -t volume option.
# zfs list mypool/avolume
NAME USED AVAIL REFER MOUNTPOINT
mypool/avolume 4.13G 17.9G 64K -
Zvols automatically reserve an amount of space equal to the size of the volume plus the ZFS metadata. This 4 GB zvol uses 4.13 GB of space.
As block devices, zvols do not have a mount point. They do get a device node under /dev/zvol, so you can access them as you would any other block device.
# ls -al /dev/zvol/mypool/avolume
crw-r----- 1 root operator 0x4d Mar 27 20:22 /dev/zvol/mypool/avolume
You can run newfs(8) on this device node, copy a disk image to it, and generally use it like any other block device.
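For example, a minimal sketch that puts a UFS filesystem on the zvol created above and mounts it (the /media/avolume mount point is arbitrary):
# newfs /dev/zvol/mypool/avolume
# mkdir -p /media/avolume
# mount /dev/zvol/mypool/avolume /media/avolume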
Renaming Datasets
You can rename a dataset with, oddly enough, the zfs rename command. Give the dataset’s current name as the first argument and the new location as the second.
# zfs rename db/production db/old
# zfs rename db/testing db/production
Use the -f flag to forcibly rename a dataset. You cannot unmount a filesystem with processes running in it, but the -f flag gleefully forces the unmount. Any process using the dataset loses access to whatever it was using, and reacts however it will.1
Moving Datasets
You can move a dataset from part of the ZFS tree to another, making the dataset a child of its new parent. This may cause many of the dataset’s properties to change, since children inherit properties from their parent. Any properties set specifically on the dataset will not change.
Here we move a database out from under the zroot/var/db dataset, to a new parent where you have set some properties to improve fault tolerance.
# zfs rename zroot/var/db/mysql zroot/important/mysql
Note that since mount points are inherited, this will likely change the dataset’s mount point. Adding the -u flag to the rename command will cause ZFS not to immediately change the mount point, giving you time to reset the property to the intended value. Remember that if the machine is restarted, or the dataset is manually remounted, it will use its new mount point.
You can rename a snapshot, but you cannot move snapshots out of their parent dataset. Snapshots are covered in detail in Chapter 7.
Destroying Datasets
Sick of that dataset? Drag it out behind the barn and put it out of your misery with zfs destroy.
# zfs destroy db/old
If you add the -r flag, you recursively destroy all children (datasets, snapshots, etc.) of the dataset. To destroy any cloned datasets while you’re at it, use -R. Be very careful recursively destroying datasets, as you can frequently be surprised by what, exactly, is a child of a dataset.
You might use the -v and -n flags to see exactly what will happen when you destroy a dataset. The -v flag prints verbose information about what gets destroyed, while -n tells zfs(8) to perform a dry run. Between the two, they show what this command would actually destroy before you pull the trigger.
ZFS Properties
ZFS datasets have a number of settings, called properties, that control how the dataset works. While you can set a few of these only when you create the dataset, most of them are tunable while the dataset is live. ZFS also offers a number of read-only properties that provide information such as the amount of space consumed by the dataset, the compression or deduplication ratios, and the creation time of the dataset.
Each dataset inherits its properties from its parent, unless the property is specifically set on that dataset.
Viewing Properties
The zfs(8) tool can retrieve a specific property, or all properties for a dataset. Use the zfs get command, the desired property, and if you wish, the dataset name.
# zfs get compression mypool/lamb
NAME PROPERTY VALUE SOURCE
mypool/lamb compression lz4 inherited from mypool
Under NAME we see the dataset you asked about, and PROPERTY shows the property you requested. The VALUE is what the property is set to.
The SOURCE is a little more complicated. A source of default means that this property is set to ZFS’ default. A local source means that someone deliberately set this property on this dataset. A temporary property was set when the dataset was mounted, and this property reverts to its usual value when the dataset is unmounted. An inherited property comes from a parent dataset, as discussed in “Parent/Child Relationships” later in this chapter.
Some properties have no source because the source is either irrelevant or inherently obvious. The creation property, which records the date and time the dataset was created, has no source. The value came from the system clock.
If you don’t specify a dataset name, zfs get shows the value of this property for all datasets. The special property keyword all retrieves all of a dataset’s properties.
# zfs get all mypool/lamb
NAME PROPERTY VALUE SOURCE
mypool/lamb type filesystem -
mypool/lamb creation Fri Mar 27 20:05 2015 -
mypool/lamb used 192K -
...
If you use all and don’t give a dataset name, you get all the properties for all datasets. This is a lot of information. Show multiple properties by separating the property names with commas.
# zfs get quota,reservation zroot/home
NAME PROPERTY VALUE SOURCE
zroot/home quota none local
zroot/home reservation none default
You can also view properties with zfs list and the -o modifier. This is most suited for when you want to view several properties from multiple datasets. Use the special property name to show the dataset’s name.
# zfs list -o name,quota,reservation
NAME QUOTA RESERV
db none none
zroot none none
zroot/ROOT none none
zroot/ROOT/default none none
...
zroot/var/log 100G 20G
...
You can also add a dataset name to see these properties in this format for that dataset.
Changing Properties
Change properties with the zfs set command. Give the property name, the new setting, and the dataset name. Here we change the compression property to off.
# zfs set compression=off mypool/lamb/baby
Confirm your change with zfs get.
# zfs get compression mypool/lamb/baby
NAME PROPERTY VALUE SOURCE
mypool/lamb/baby compression off local
Most properties apply only to data written after the property is changed. The compression property tells ZFS to compress data before writing it to disk. We talk about compression in Chapter 6. Disabling compression doesn’t uncompress any data written before the change was made. Similarly, enabling compression doesn’t magically compress data already on the disk. To get the full benefit of enabling compression, you must rewrite every file. You’re better off creating a new dataset, copying the data over with zfs send, and destroying the original dataset.
Read-Only Properties
ZFS uses read-only properties to offer basic information about the dataset. Disk space usage is expressed as properties. You can’t change how much data you’re using by changing the property that says “your disk is half-full.” (Chapter 6 covers ZFS disk space usage.) The creation property records when this dataset was created. You can change many read-only properties by adding or removing data to the disk, but you can’t write these properties directly.
Filesystem Properties
One key tool for managing the performance and behavior of traditional filesystems is mount options. You can mount traditional filesystems read-only, or use the noexec flag to disable running programs from them. ZFS uses properties to achieve the same effects. Here are the properties used to accomplish these familiar goals.
atime
A file’s atime indicates when the file was last accessed. ZFS’ atime property controls whether the dataset tracks access times. The default value, on, updates the file’s atime metadata every time the file is accessed. Using atime means writing to the disk every time it’s read.
Turning this property off avoids writing to the disk when you read a file, and can result in significant performance gains. It might confuse mailers and other similar utilities that depend on being able to determine when a file was last read.
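For example, to stop recording access times on a hypothetical frequently read dataset:
# zfs set atime=off mypool/scratch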
Leaving atime on increases snapshot size. The first time a file is accessed, its atime is updated. The snapshot retains the original access time, while the live filesystem contains the newly updated access time. This is the default.
exec
The exec property determines if anyone can run binaries and commands on this filesystem. The default is on, which permits execution. Some environments don’t permit users to execute programs from their personal or temporary directories. Set the exec property to off to disable execution of programs on the filesystem.
The exec property doesn’t prohibit people from running interpreted scripts, however. If a user can run /bin/sh, they can run /bin/sh /home/mydir/script.sh. The shell is what’s actually executing—it only takes instructions from the script.
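For example, assuming a hypothetical scratch/tmp dataset mounted at /tmp:
# zfs set exec=off scratch/tmp
Binaries under /tmp now refuse to run directly, but sh /tmp/script.sh still works, because the interpreter itself lives on a filesystem that permits execution.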
readonly
If you don’t want anything writing to this dataset, set the readonly property to on. The default, off, lets users modify the dataset within administrative permissions.
setuid
Many people consider setuid programs risky.2 While some setuid programs must be setuid, such as passwd(1) and login(1), there’s rarely a need to have setuid programs on filesystems like /home and /tmp. Many sysadmins disallow setuid programs except on specific filesystems.
ZFS’ setuid property toggles setuid support. If set to on, the filesystem supports setuid. If set to off, the setuid flag is ignored.
User-Defined Properties
ZFS properties are great, and you can’t get enough of them, right? Well, start adding your own. The ability to store your own metadata along with your datasets lets you develop whole new realms of automation. The fact that children automatically inherit these properties makes life even easier.
To make sure your custom properties remain yours, and don’t conflict with other people’s custom properties, create a namespace. Most people prefix their custom properties with an organizational identifier and a colon. For example, FreeBSD-specific properties have the format “org.freebsd:propertyname,” such as org.freebsd:swap. If the illumos project creates its own property named swap, they’d call it org.illumos:swap. The two values won’t collide.
For example, suppose Jude wants to control which datasets get backed up via a dataset property. He creates the namespace com.allanjude.3 Within that namespace, he creates the property backup_ignore.
# zfs set com.allanjude:backup_ignore=on mypool/lamb
Jude’s backup script checks the value of this property. If it’s set to on, the backup process skips this dataset.
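The check itself might look something like this in sh(1), a sketch assuming the property and dataset above:

if [ "$(zfs get -H -o value com.allanjude:backup_ignore mypool/lamb)" = "on" ]; then
    echo "skipping mypool/lamb"
else
    zfs snapshot mypool/lamb@backup
fi

The -H and -o value flags strip the header and extra columns, so the script sees only the property’s value.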
Parent/Child Relationships
Datasets inherit properties from their parent datasets. When you set a property on a dataset, that property applies to that dataset and all of its children. For convenience, you can run zfs(8) commands on a dataset and all of its children by adding the -r flag. Here, we query the compression property on a dataset and all of its children.
# zfs get -r compression mypool/lamb
NAME PROPERTY VALUE SOURCE
mypool/lamb compression lz4 inherited from mypool
mypool/lamb/baby compression off local
Look at the source values. The first dataset, mypool/lamb, inherited this property from the parent pool. In the second dataset, this property has a different value. The source is local, meaning that the property was set specifically on this dataset.
We can restore the original setting with the zfs inherit command.
# zfs inherit compression mypool/lamb/baby
# zfs get -r compression mypool/lamb
NAME PROPERTY VALUE SOURCE
mypool/lamb compression lz4 inherited from mypool
mypool/lamb/baby compression lz4 inherited from mypool
The child now inherits the compression properties from the parent, which inherits from the grandparent.
When you change a parent’s properties, the new properties automatically propagate down to the child.
# zfs set compression=gzip-9 mypool/lamb
# zfs get -r compression mypool/lamb
NAME PROPERTY VALUE SOURCE
mypool/lamb compression gzip-9 local
mypool/lamb/baby compression gzip-9 inherited from mypool/lamb
I told the parent dataset to use gzip-9 compression. That percolated down to the child.
Inheritance and Renaming
When you move or rename a dataset so that it has a new parent, the parent’s properties automatically propagate down to the child. Locally set properties remain unchanged, but inherited ones switch to those from the new parent.
Here we create a new parent dataset and check its compression property.
# zfs create mypool/second
# zfs get compress mypool/second
NAME PROPERTY VALUE SOURCE
mypool/second compression lz4 inherited from mypool
Our baby dataset uses gzip-9 compression. It’s inherited this property from mypool/lamb. Now let’s move baby to be a child of second, and see what happens to the compression property.
# zfs rename mypool/lamb/baby mypool/second/baby
# zfs get -r compression mypool/second
NAME PROPERTY VALUE SOURCE
mypool/second compression lz4 inherited from mypool
mypool/second/baby compression lz4 inherited from mypool
The child dataset now belongs to a different parent, and inherits its properties from the new parent. The child keeps any local properties.
Data on the baby dataset is a bit of a tangle, however. Data written before compression was turned on is uncompressed. Data written while the dataset used gzip-9 compression is compressed with gzip-9. Any data written now will be compressed with lz4. ZFS sorts all this out for you automatically, but thinking about it does make one's head hurt.
Removing Properties
While you can set a property back to its default value, it’s not obvious how to change the source back to inherit or default, or how to remove custom properties once they’re set.
To remove a custom property, inherit it.
# zfs inherit com.allanjude:backup_ignore mypool/lamb
This works even if you set the property on the root dataset.
To reset a property to its default value on a dataset and all its children, or totally remove custom properties, use the zfs inherit command on the pool’s root dataset.
# zfs inherit -r compression mypool
It’s counterintuitive, but it knocks the custom setting off of the root dataset.
Mounting ZFS Filesystems
With traditional filesystems you listed each partition, its type, and where it should be mounted in /etc/fstab. You even listed temporary mounts such as floppies and CD-ROM drives, just for convenience. ZFS allows you to create such a large number of filesystems that this quickly grows impractical.
Each ZFS filesystem has a mountpoint property that defines where it should be mounted. The default mountpoint is built from the pool’s mountpoint. If a pool doesn’t have a mount point, you must assign a mount point to any datasets you want to mount.
# zfs get mountpoint zroot/usr/home
NAME PROPERTY VALUE SOURCE
zroot/usr/home mountpoint /usr/home inherited from zroot/usr
The filesystem normally gets mounted at /usr/home. You could override this when manually mounting the filesystem.
The zroot pool used for a default FreeBSD install doesn’t have a mount point set. If you create new datasets directly under zroot, they won’t have a mount point. Datasets created on zroot under, say, /usr, inherit a mount point from their parent dataset.
Any pool other than the pool with the root filesystem normally has a mount point named after the pool. If you create a pool named db, it gets mounted at /db. All children inherit their mount point from that pool unless you change them.
When you change the mountpoint property for a filesystem, the filesystem and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared filesystems are unshared and shared in the new location.
Just like ordinary filesystems, ZFS filesystems aren’t necessarily mounted. The canmount property controls a filesystem’s mount behavior. If canmount is set to on, running zfs mount -a mounts the filesystem, just like mount -a. When you enable ZFS in /etc/rc.conf, FreeBSD runs zfs mount -a at startup.
When the canmount property is set to noauto, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by zfs unmount -a.
Things can get interesting when you set canmount to off. You might have two non-mountable datasets with the same mount point. A dataset can exist solely for the purpose of being the parent to future datasets, but not actually store files, as we’ll see below.
Child datasets do not inherit the canmount property.
Changing the canmount property does not automatically unmount or mount the filesystem. If you disable mounting on a mounted filesystem, you’ll need to manually unmount the filesystem or reboot.
Datasets without Mount Points
ZFS datasets are hierarchical. You might need to create a dataset that will never contain any files only so it can be the common parent of a number of other datasets. Consider a default install of FreeBSD 10.1 or newer.
# zfs mount
zroot/ROOT/default /
zroot/tmp /tmp
zroot/usr/home /usr/home
zroot/usr/ports /usr/ports
zroot/usr/src /usr/src
...
We have all sorts of datasets under /usr, but there’s no /usr dataset mounted. What’s going on?
A zfs list shows that a dataset exists, and it has a mount point of /usr. But let’s check the mountpoint and canmount properties of zroot/usr and all its children.
# zfs list -o name,canmount,mountpoint -r zroot/usr
NAME CANMOUNT MOUNTPOINT
zroot/usr off /usr
zroot/usr/home on /usr/home
zroot/usr/ports on /usr/ports
zroot/usr/src on /usr/src
With canmount set to off, the zroot/usr dataset is never mounted. Any files written in /usr, such as the commands in /usr/bin and the packages in /usr/local, go into the root filesystem. Lower-level mount points such as /usr/src have their own datasets, which are mounted.
The dataset exists only to be a parent to the child datasets. You’ll see something similar with the /var partitions.
Multiple Datasets with the Same Mount Point
Setting canmount to off allows datasets to be used solely as a mechanism to inherit properties. One reason to set canmount to off is to have two datasets with the same mount point, so that the children of both datasets appear in the same directory, but might have different inherited characteristics.
FreeBSD’s installer does not have a mountpoint on the default pool, zroot. When you create a new dataset, you must assign a mount point to it.
If you don’t want to assign a mount point to every dataset you create right under the pool, you might assign a mountpoint of / to the zroot pool and leave canmount set to off. This way, when you create a new dataset, it has a mountpoint to inherit. This is a very simple example of using multiple datasets with the same mount point.
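A minimal sketch of that arrangement:
# zfs set mountpoint=/ zroot
# zfs set canmount=off zroot
# zfs create zroot/data
The zroot dataset itself stays unmounted, while the new zroot/data dataset inherits a mount point of /data.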
Imagine you want an /opt directory with two sets of subdirectories. Some of these directories contain programs, and should never be written to after installation. Other directories contain data. You must lock down the ability to run programs at the filesystem level.
# zfs create db/programs
# zfs create db/data
Now give both of these datasets the mountpoint of /opt and tell them that they cannot be mounted.
# zfs set canmount=off db/programs
# zfs set mountpoint=/opt db/programs
Install your programs to the dataset, and then make it read-only.
# zfs set readonly=on db/programs
You can’t run programs from the db/data dataset, so turn off exec and setuid. We need to write data to these directories, however.
# zfs set canmount=off db/data
# zfs set mountpoint=/opt db/data
# zfs set setuid=off db/data
# zfs set exec=off db/data
Now create some child datasets. The children of the db/programs dataset inherit that dataset’s properties, while the children of the db/data dataset inherit the other set of properties.
# zfs create db/programs/bin
# zfs create db/programs/sbin
# zfs create db/data/test
# zfs create db/data/production
We now have four datasets mounted inside /opt, two for binaries and two for data. As far as users know, these are normal directories. No matter what the file permissions say, though, nobody can write to two of these directories. Regardless of what trickery people pull, the system won’t recognize executables and setuid files in the other two. When you need another dataset for data or programs, create it as a child of the dataset with the desired settings. Changes to the parent datasets propagate immediately to all the children.
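You can sanity-check the whole arrangement at a glance, as zfs list accepts any property as a column:
# zfs list -o name,exec,setuid,readonly -r db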
Pools without Mount Points
While a pool is normally mounted at a directory named after the pool, that isn’t necessarily so.
# zfs set mountpoint=none mypool
This pool no longer gets mounted. Neither does any dataset on the pool unless you specify a mount point. This is how the FreeBSD installer creates the pool for the OS.
# zfs set mountpoint=/someplace mypool/lamb
The directory will be created if necessary and the filesystem mounted.
Manually Mounting and Unmounting Filesystems
To manually mount a filesystem, use zfs mount and the dataset name. This is most commonly used for filesystems with canmount set to noauto.
# zfs mount mypool/usr/src
To unmount a filesystem and all of its children, use zfs unmount.
# zfs unmount mypool/second
If you want to temporarily mount a dataset at a different location, use the -o flag to specify a new mount point. This mount point only lasts until you unmount the dataset.
# zfs mount -o mountpoint=/mnt mypool/lamb
You can only mount a dataset if it has a mountpoint defined. Defining a temporary mount point when the dataset has no mount point gives you an error.
ZFS and /etc/fstab
You can choose to manage some or all of your ZFS filesystem mount points with /etc/fstab if you prefer. Set the dataset’s mountpoint property to legacy, which unmounts the filesystem.
# zfs set mountpoint=legacy mypool/second
Now you can mount this dataset with the mount(8) command:
# mount -t zfs mypool/second /tmp/second
You can also add ZFS datasets to the system’s /etc/fstab. Use the full dataset name as the device node. Set the type to zfs. You can use the standard filesystem options of noatime, noexec, readonly or ro, and nosuid. (You could also explicitly give the default behaviors of atime, exec, rw, and suid, but these are ZFS’ defaults.) The mount order is normal, but the fsck field is ignored. Here’s an /etc/fstab entry that mounts the dataset scratch/junk nosuid at /tmp.
scratch/junk /tmp zfs nosuid 2 0
We recommend using ZFS properties to manage your mounts, however. Properties can do almost everything /etc/fstab does, and more.
Tweaking ZFS Volumes
Zvols are pretty straightforward—here’s a chunk of space as a block device; use it. You can adjust how a volume uses space and what kind of device node it offers.
Space Reservations
The volsize property of a zvol specifies the volume’s logical size. By default, creating a volume reserves an amount of space for the dataset equal to the volume size. (If you look ahead to Chapter 6, it establishes a refreservation of equal size.) Changing volsize changes the reservation. The volsize can only be set to a multiple of the volblocksize property, and cannot be zero.
Without the reservation, the volume could run out of space, resulting in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use, particularly when shrinking the size. Adjusting the volume size can confuse applications using the block device.
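Resizing a volume is an ordinary property change, best done while nothing is using the device. For example, to grow the example volume from earlier to 8 GB:
# zfs set volsize=8G mypool/avolume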
Zvols also support sparse volumes, also known as thin provisioning. A sparse volume is a volume where the reservation is less than the volume size. Essentially, using a sparse volume permits allocating more space than the dataset has available. With sparse provisioning you could, say, create ten 1 TB sparse volumes on your 5 TB dataset. So long as your volumes are never heavily used, nobody will notice that you’re overcommitted.
Sparse volumes are not recommended. Writes to a sparse volume can fail with an “out of space” error even if the volume itself looks only partially full.
Specify a sparse volume at creation time by specifying the -s option to the zfs create -V command. Changes to volsize are not reflected in the reservation. You can also reduce the reservation after the volume has been created.
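For example, to create a hypothetical 1 TB sparse volume:
# zfs create -s -V 1T mypool/sparsevol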
Zvol Mode
FreeBSD normally exposes zvols to the operating system as geom(4) providers, giving them maximum flexibility. You can change this with the volmode property.
Setting a volume’s volmode to dev exposes volumes only as a character device in /dev. Such volumes can be accessed only as raw disk device files. They cannot be partitioned or mounted, and they cannot participate in RAIDs or other GEOM features. They are faster. In some cases where you don’t trust the device using the volume, dev mode can be safer.
Setting volmode to none means that the volume is not exposed outside ZFS. These volumes can be snapshotted, cloned, and replicated, however. These volumes can be suitable for backup purposes.
Setting volmode to default means that volume exposure is controlled by the sysctl vfs.zfs.vol.mode. You can set the default zvol mode system-wide. A value of 1 means the default is geom, 2 means dev, and 3 means none.
While you can change the property on a live volume, it has no effect. This property is processed only during volume creation and pool import. You can recreate the zvol device by renaming the volume with zfs rename.
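For example, a sketch that switches the example volume to dev mode and renames it away and back so the change takes effect:
# zfs set volmode=dev mypool/avolume
# zfs rename mypool/avolume mypool/avolume-tmp
# zfs rename mypool/avolume-tmp mypool/avolume
The intermediate name is arbitrary; the renames just force ZFS to recreate the device node under the new mode.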
Dataset Integrity
Most of ZFS’ protections work at the VDEV layer. That’s where blocks and disks go bad, after all. Some hardware limits pool redundancy, however. Very few laptops have enough hard drives to use mirroring, let alone RAID-Z. You can do some things at the dataset layer to offer some redundancy, however, by using checksums, metadata redundancy, and copies. Most users should never touch the first two, and users with redundant virtual devices probably want to leave all three alone.
Checksums
ZFS computes and stores checksums for every block that it writes. This ensures that when a block is read back, ZFS can verify that it is the same as when it was written, and has not been silently corrupted in one way or another. The checksum property controls which checksum algorithm the dataset uses. Valid settings are on, fletcher2, fletcher4, sha256, off, and noparity.
The default value, on, uses the algorithm selected by the OpenZFS developers. In 2015 that algorithm is fletcher4, but it might change in future releases.
The standard algorithm, fletcher4, is the default checksum algorithm. It’s good enough for most use and is very fast. If you want to use fletcher4 forever and ever, you could set this property to fletcher4. We recommend keeping the default of on, however, and letting ZFS upgrade your pool’s checksum algorithm when it’s time.
The value off disables integrity checking on user data.
The value noparity not only disables integrity but also disables maintaining parity for user data. This setting is used internally by a dump device residing on a RAID-Z pool and should not be used by any other dataset. Disabling checksums is not recommended.
Older versions of ZFS used the fletcher2 algorithm. While it’s supported for older pools, it’s certainly not encouraged. The sha256 algorithm is slower than fletcher4, but less likely to result in a collision. In most cases, a collision is not harmful.
The sha256 algorithm is frequently recommended when doing deduplication.
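Switching the algorithm is an ordinary property change, and like most properties it affects only blocks written afterward:
# zfs set checksum=sha256 mypool/lamb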
Copies
ZFS stores two or three copies of important metadata, and can give the same treatment to your important user data. The copies property tells ZFS how many copies of user data to keep. ZFS attempts to put those copies on different disks, or failing that, as far apart on the physical disk as possible, to help guard against hardware failure. When you increase the copies property, ZFS also increases the number of copies of the metadata for that dataset, to a maximum of three.
If your pool runs on two mirrored disks, and you set copies to 3, you’ll have six copies of your data. One of them should survive your ill-advised use of dd(1) on the raw provider device or that plunge off the roof.
Increasing or decreasing copies only affects data written after the setting change. Changing copies from 1 to 2 doesn’t suddenly create duplicate copies of all your data, as we see here. Create a 10 MB file of random data:
# dd if=/dev/random of=/lamb/random1 bs=1m count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.144787 secs (72421935 bytes/sec)
# zfs set copies=2 mypool/lamb
Now every block is stored twice. If one of the copies becomes corrupt, ZFS can still read your file. It knows which of the blocks is corrupt because its checksums won’t match. But look at the space use on the pool (the REFER space in the pool listing).
# zfs list mypool/lamb
NAME USED AVAIL REFER MOUNTPOINT
mypool/lamb 10.2M 13.7G 10.1M /lamb
Only the 10 MB we wrote were used. No extra copy was made of this file, as you wrote it before changing the copies property. With copies set to 2, however, if we either write another file or overwrite the original file, we’ll see different disk usage.
# dd if=/dev/random of=/lamb/random2 bs=1m count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.141795 secs (73950181 bytes/sec)
Look at disk usage now.
# zfs list mypool/lamb
NAME USED AVAIL REFER MOUNTPOINT
mypool/lamb 30.2M 13.7G 30.1M /lamb
The total space usage is 30 MB, 10 for the first file of random data, and 20 for 2 copies of the second 10 MB file. When we look at the files with ls(1), they only show the actual size:
# ls -l /lamb/random*
-rw-r--r-- 1 root wheel 10485760 Apr 6 15:27 /lamb/random1
-rw-r--r-- 1 root wheel 10485760 Apr 6 15:29 /lamb/random2
If you really want to muck with your dataset’s resilience, look at metadata redundancy.
Metadata Redundancy
Each dataset stores an extra copy of its internal metadata, so that if a single block is corrupted, the amount of user data lost is limited. This extra copy is in addition to any redundancy provided at the VDEV level (e.g., by mirroring or RAID-Z). It’s also in addition to any extra copies specified by the copies property (above), up to a total of three copies.
The redundant_metadata property lets you decide how redundant you want your dataset metadata to be. Most users should never change this property.
When redundant_metadata is set to all (the default), ZFS stores an extra copy of all metadata. If a single on-disk block is corrupt, at worst a single block of user data can be lost.
When you set redundant_metadata to most, ZFS stores an extra copy of only most types of metadata. This can improve performance of random writes, because less metadata must be written. When only most metadata is redundant, at worst about 100 blocks of user data can be lost if a single on-disk block is corrupt. The exact behavior of which metadata blocks are stored redundantly may change in future releases.
If you set redundant_metadata to most and copies to 3, and the dataset lives on a mirrored pool, then ZFS stores six copies of most metadata, and four copies of data and some metadata.
This property was designed for specific use cases that frequently update metadata, such as databases. If the data is already protected by sufficiently strong fault tolerance, reducing the number of copies of the metadata that must be written each time the database changes can improve performance. Change this value only if you know what you are doing.
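For example, for a hypothetical dataset dedicated to a busy database on a pool with solid VDEV redundancy:
# zfs set redundant_metadata=most mypool/db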
Now that you have a grip on datasets, let’s talk about pool maintenance.
1 Probably badly.
2 Properly written setuid programs are not risky. That’s why real setuid programs are risky.
3 When you name ZFS properties after yourself, you are immortalized by your work. Whether this is good or bad depends on your work.