.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
.\" Copyright (c) 2014 Integros [integros.com]
.\" Copyright 2019 Richard Laager. All rights reserved.
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd June 30, 2019
.Dt ZFSCONCEPTS 7
.Os
.
.Sh NAME
.Nm zfsconcepts
.Nd overview of ZFS concepts
.
.Sh DESCRIPTION
.Ss ZFS File System Hierarchy
A ZFS storage pool is a logical collection of devices that provide space for
datasets.
A storage pool is also the root of the ZFS file system hierarchy.
.Pp
The root of the pool can be accessed as a file system: it can be mounted and
unmounted, snapshotted, and have properties set on it.
The physical storage characteristics, however, are managed by the
.Xr zpool 8
command.
.Pp
See
.Xr zpool 8
for more information on creating and administering pools.
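.Pp
For example, a pool and a file system within it might be created as follows,
using purely illustrative pool and device names:
.Bd -literal -compact
# zpool create tank mirror /dev/sda /dev/sdb
# zfs create tank/home
# zfs list -r tank
.Ed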
.Ss Snapshots
A snapshot is a read-only copy of a file system or volume.
Snapshots can be created extremely quickly, and initially consume no additional
space within the pool.
As data within the active dataset changes, the snapshot consumes more disk
space by continuing to reference the old data, preventing that space from
being freed.
.Pp
Snapshots can have arbitrary names.
Snapshots of volumes can be cloned or rolled back; visibility is determined
by the
.Sy snapdev
property of the parent volume.
.Pp
File system snapshots can be accessed under the
.Pa .zfs/snapshot
directory in the root of the file system.
Snapshots are automatically mounted on demand and may be unmounted at regular
intervals.
The visibility of the
.Pa .zfs
directory can be controlled by the
.Sy snapdir
property.
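.Pp
As a minimal illustration, assuming a hypothetical file system
.Em pool/home/user
mounted at
.Pa /export/stuff/user ,
a snapshot can be created and its contents then browsed read-only:
.Bd -literal -compact
# zfs snapshot pool/home/user@monday
# ls /export/stuff/user/.zfs/snapshot/monday
.Ed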
.Ss Bookmarks
A bookmark is like a snapshot, a read-only copy of a file system or volume.
Bookmarks can be created extremely quickly, compared to snapshots, and they
consume no additional space within the pool.
Bookmarks can also have arbitrary names, much like snapshots.
.Pp
Unlike snapshots, bookmarks cannot be accessed through the filesystem in any
way.
From a storage standpoint, a bookmark just provides a way to reference,
as a distinct object, the point in time at which a snapshot was created.
Bookmarks are initially tied to a snapshot, not the filesystem or volume,
and they will survive if the snapshot itself is destroyed.
Since they are very lightweight, there's little incentive to destroy them.
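.Pp
As a sketch of typical use, with hypothetical dataset and snapshot names, a
bookmark can be created from a snapshot and later serve as the source of an
incremental
.Nm zfs Cm send
even after that snapshot has been destroyed:
.Bd -literal -compact
# zfs bookmark pool/home/user@monday pool/home/user#monday
# zfs destroy pool/home/user@monday
# zfs send -i pool/home/user#monday pool/home/user@tuesday
.Ed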
.Ss Clones
A clone is a writable volume or file system whose initial contents are the same
as another dataset.
As with snapshots, creating a clone is nearly instantaneous, and initially
consumes no additional space.
.Pp
Clones can only be created from a snapshot.
When a snapshot is cloned, it creates an implicit dependency between the parent
and child.
Even though the clone is created somewhere else in the dataset hierarchy, the
original snapshot cannot be destroyed as long as a clone exists.
The
.Sy origin
property exposes this dependency, and the
.Cm destroy
command lists any such dependencies, if they exist.
.Pp
The clone parent-child dependency relationship can be reversed by using the
.Cm promote
subcommand.
This causes the
.Qq origin
file system to become a clone of the specified file system, which makes it
possible to destroy the file system that the clone was created from.
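.Pp
For example, with hypothetical names, a clone can be created from a snapshot
and later promoted so that the original file system can be destroyed:
.Bd -literal -compact
# zfs clone pool/home/user@monday pool/home/user-dev
# zfs promote pool/home/user-dev
# zfs destroy pool/home/user
.Ed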
.Ss "Mount Points"
|
|
|
|
Creating a ZFS file system is a simple operation, so the number of file systems
|
|
|
|
per system is likely to be numerous.
|
|
|
|
To cope with this, ZFS automatically manages mounting and unmounting file
|
|
|
|
systems without the need to edit the
|
|
|
|
.Pa /etc/fstab
|
|
|
|
file.
|
|
|
|
All automatically managed file systems are mounted by ZFS at boot time.
|
|
|
|
.Pp
|
|
|
|
By default, file systems are mounted under
|
|
|
|
.Pa /path ,
|
|
|
|
where
|
|
|
|
.Ar path
|
|
|
|
is the name of the file system in the ZFS namespace.
|
|
|
|
Directories are created and destroyed as needed.
|
|
|
|
.Pp
|
|
|
|
A file system can also have a mount point set in the
|
|
|
|
.Sy mountpoint
|
|
|
|
property.
|
|
|
|
This directory is created as needed, and ZFS automatically mounts the file
|
|
|
|
system when the
|
|
|
|
.Nm zfs Cm mount Fl a
|
|
|
|
command is invoked
|
|
|
|
.Po without editing
|
|
|
|
.Pa /etc/fstab
|
|
|
|
.Pc .
|
|
|
|
The
|
|
|
|
.Sy mountpoint
|
|
|
|
property can be inherited, so if
|
|
|
|
.Em pool/home
|
|
|
|
has a mount point of
|
|
|
|
.Pa /export/stuff ,
|
|
|
|
then
|
|
|
|
.Em pool/home/user
|
|
|
|
automatically inherits a mount point of
|
|
|
|
.Pa /export/stuff/user .
|
|
|
|
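.Pp
For example, setting the property on a hypothetical parent file system
remounts it and its descendants under the new hierarchy:
.Bd -literal -compact
# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home
.Ed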
.Pp
A file system
.Sy mountpoint
property of
.Sy none
prevents the file system from being mounted.
.Pp
If needed, ZFS file systems can also be managed with traditional tools
.Po
.Nm mount ,
.Nm umount ,
.Pa /etc/fstab
.Pc .
If a file system's mount point is set to
.Sy legacy ,
ZFS makes no attempt to manage the file system, and the administrator is
responsible for mounting and unmounting the file system.
Because pools must be imported before a legacy mount can succeed,
administrators should ensure that legacy mounts are only attempted after the
zpool import process finishes at boot time.
For example, on machines using systemd, the mount option
.Pp
.Nm x-systemd.requires=zfs-import.target
.Pp
will ensure that zfs-import completes before systemd attempts to mount the
filesystem.
See
.Xr systemd.mount 5
for details.
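.Pp
As an illustrative sketch with hypothetical names, a legacy-mounted file
system might be configured as follows:
.Bd -literal -compact
# zfs set mountpoint=legacy pool/data

# /etc/fstab entry:
pool/data  /data  zfs  defaults,x-systemd.requires=zfs-import.target  0 0
.Ed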
.Ss Deduplication
Deduplication is the process of removing redundant data at the block level,
reducing the total amount of data stored.
If a file system has the
.Sy dedup
property enabled, duplicate data blocks are removed synchronously.
The result is that only unique data is stored and common components are shared
among files.
.Pp
Deduplicating data is a very resource-intensive operation.
It is generally recommended that you have at least 1.25 GiB of RAM
per 1 TiB of storage when you enable deduplication.
Calculating the exact requirement depends heavily
on the type of data stored in the pool.
.Pp
Enabling deduplication on an improperly designed system can result in
performance issues (slow I/O and administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory as well
as generate additional disk I/O.
.Pp
Before creating a pool with deduplication enabled, ensure that you have planned
your hardware requirements appropriately and implemented appropriate recovery
practices, such as regular backups.
Consider using the
.Sy compression
property as a less resource-intensive alternative.
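.Pp
As a sketch, with hypothetical pool and dataset names, the expected
deduplication ratio can be simulated before committing to the property, and
.Sy dedup
then enabled only on the datasets where it pays off:
.Bd -literal -compact
# zdb -S tank
# zfs set dedup=on tank/dedup-data
# zfs set compression=on tank/other-data
.Ed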