Modernise/fix/rewrite unlinted manpages

zpool-destroy.8: flatten, fix description
zfs-wait.8: flatten, fix description, use list for events
zpool-reguid.8: flatten, fix description
zpool-history.8: flatten, fix description
zpool-export.8: flatten, fix description, remove -f "unmount" reference
  AFAICT no such command exists even in Illumos (as of today, anyway),
  and we definitely don't call it
zpool-labelclear.8: flatten, fix description
zpool-features.5: modernise
spl-module-parameters.5: modernise
zfs-mount-generator.8: rewrite
zfs-module-parameters.5: modernise

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12169
наб 2021-06-07 21:41:54 +02:00 committed by GitHub
parent afb96fa6ee
commit 2d815d955e
11 changed files with 3144 additions and 5530 deletions


@@ -1,285 +1,196 @@
'\" te
.\"
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" Copyright 2013 Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\"
.TH SPL-MODULE-PARAMETERS 5 "Aug 24, 2020" OpenZFS
.SH NAME
spl\-module\-parameters \- SPL module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the SPL module.
.SS "Module parameters"
.sp
.LP
.sp
.ne 2
.na
\fBspl_kmem_cache_kmem_threads\fR (uint)
.ad
.RS 12n
The number of threads created for the spl_kmem_cache task queue. This task
queue is responsible for allocating new slabs for use by the kmem caches.
.Dd August 24, 2020
.Dt SPL-MODULE-PARAMETERS 5
.Os
.
.Sh NAME
.Nm spl-module-parameters
.Nd parameters of the SPL kernel module
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Sy spl_kmem_cache_kmem_threads Ns = Ns Sy 4 Pq uint
The number of threads created for the spl_kmem_cache task queue.
This task queue is responsible for allocating new slabs
for use by the kmem caches.
For the majority of systems and workloads only a small number of threads are
required.
.sp
Default value: \fB4\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_reclaim\fR (uint)
.ad
.RS 12n
.
.It Sy spl_kmem_cache_reclaim Ns = Ns Sy 0 Pq uint
When this is set it prevents Linux from being able to rapidly reclaim all the
memory held by the kmem caches. This may be useful in circumstances where
it's preferable that Linux reclaim memory from some other subsystem first.
memory held by the kmem caches.
This may be useful in circumstances where it's preferable that Linux
reclaim memory from some other subsystem first.
Setting this will increase the likelihood of out-of-memory events on a memory
constrained system.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_obj_per_slab\fR (uint)
.ad
.RS 12n
The preferred number of objects per slab in the cache. In general, a larger
value will increase the caches memory footprint while decreasing the time
required to perform an allocation. Conversely, a smaller value will minimize
the footprint and improve cache reclaim time but individual allocations may
take longer.
.sp
Default value: \fB8\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_max_size\fR (uint)
.ad
.RS 12n
The maximum size of a kmem cache slab in MiB. This effectively limits
the maximum cache object size to \fBspl_kmem_cache_max_size\fR /
\fBspl_kmem_cache_obj_per_slab\fR. Caches may not be created with
.
.It Sy spl_kmem_cache_obj_per_slab Ns = Ns Sy 8 Pq uint
The preferred number of objects per slab in the cache.
In general, a larger value will increase the cache's memory footprint
while decreasing the time required to perform an allocation.
Conversely, a smaller value will minimize the footprint
and improve cache reclaim time but individual allocations may take longer.
.
.It Sy spl_kmem_cache_max_size Ns = Ns Sy 32 Po 64-bit Pc or Sy 4 Po 32-bit Pc Pq uint
The maximum size of a kmem cache slab in MiB.
This effectively limits the maximum cache object size to
.Sy spl_kmem_cache_max_size Ns / Ns Sy spl_kmem_cache_obj_per_slab .
.Pp
Caches may not be created with
objects larger than this limit.
.sp
Default value: \fB32 (64-bit) or 4 (32-bit)\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_slab_limit\fR (uint)
.ad
.RS 12n
.
.It Sy spl_kmem_cache_slab_limit Ns = Ns Sy 16384 Pq uint
For small objects the Linux slab allocator should be used to make the most
efficient use of the memory. However, large objects are not supported by
the Linux slab and therefore the SPL implementation is preferred. This
value is used to determine the cutoff between a small and large object.
.sp
Objects of \fBspl_kmem_cache_slab_limit\fR or smaller will be allocated
using the Linux slab allocator, large objects use the SPL allocator. A
cutoff of 16K was determined to be optimal for architectures using 4K pages.
.sp
Default value: \fB16,384\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_alloc_warn\fR (uint)
.ad
.RS 12n
As a general rule kmem_alloc() allocations should be small, preferably
just a few pages since they must by physically contiguous. Therefore, a
rate limited warning will be printed to the console for any kmem_alloc()
efficient use of the memory.
However, large objects are not supported by
the Linux slab and therefore the SPL implementation is preferred.
This value is used to determine the cutoff between a small and large object.
.Pp
Objects of size
.Sy spl_kmem_cache_slab_limit
or smaller will be allocated using the Linux slab allocator,
large objects use the SPL allocator.
A cutoff of 16K was determined to be optimal for architectures using 4K pages.
.
.It Sy spl_kmem_alloc_warn Ns = Ns Sy 32768 Pq uint
As a general rule
.Fn kmem_alloc
allocations should be small,
preferably just a few pages, since they must be physically contiguous.
Therefore, a rate limited warning will be printed to the console for any
.Fn kmem_alloc
which exceeds a reasonable threshold.
.sp
.Pp
The default warning threshold is set to eight pages but capped at 32K to
accommodate systems using large pages. This value was selected to be small
enough to ensure the largest allocations are quickly noticed and fixed.
accommodate systems using large pages.
This value was selected to be small enough to ensure
the largest allocations are quickly noticed and fixed.
But large enough to avoid logging any warnings when an allocation size is
larger than optimal but not a serious concern. Since this value is tunable,
developers are encouraged to set it lower when testing so any new largish
allocations are quickly caught. These warnings may be disabled by setting
the threshold to zero.
.sp
Default value: \fB32,768\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_alloc_max\fR (uint)
.ad
.RS 12n
Large kmem_alloc() allocations will fail if they exceed KMALLOC_MAX_SIZE.
larger than optimal but not a serious concern.
Since this value is tunable, developers are encouraged to set it lower
when testing so any new largish allocations are quickly caught.
These warnings may be disabled by setting the threshold to zero.
.
.It Sy spl_kmem_alloc_max Ns = Ns Sy KMALLOC_MAX_SIZE Ns / Ns Sy 4 Pq uint
Large
.Fn kmem_alloc
allocations will fail if they exceed
.Sy KMALLOC_MAX_SIZE .
Allocations which are marginally smaller than this limit may succeed but
should still be avoided due to the expense of locating a contiguous range
of free pages. Therefore, a maximum kmem size with reasonable safely
margin of 4x is set. Kmem_alloc() allocations larger than this maximum
will quickly fail. Vmem_alloc() allocations less than or equal to this
value will use kmalloc(), but shift to vmalloc() when exceeding this value.
.sp
Default value: \fBKMALLOC_MAX_SIZE/4\fR
.RE
.sp
.ne 2
.na
\fBspl_kmem_cache_magazine_size\fR (uint)
.ad
.RS 12n
of free pages.
Therefore, a maximum kmem size with a reasonable safety margin of 4x is set.
.Fn kmem_alloc
allocations larger than this maximum will quickly fail.
.Fn vmem_alloc
allocations less than or equal to this value will use
.Fn kmalloc ,
but shift to
.Fn vmalloc
when exceeding this value.
.
.It Sy spl_kmem_cache_magazine_size Ns = Ns Sy 0 Pq uint
Cache magazines are an optimization designed to minimize the cost of
allocating memory. They do this by keeping a per-cpu cache of recently
freed objects, which can then be reallocated without taking a lock. This
can improve performance on highly contended caches. However, because
objects in magazines will prevent otherwise empty slabs from being
immediately released this may not be ideal for low memory machines.
.sp
For this reason \fBspl_kmem_cache_magazine_size\fR can be used to set a
maximum magazine size. When this value is set to 0 the magazine size will
be automatically determined based on the object size. Otherwise magazines
will be limited to 2-256 objects per magazine (i.e per cpu). Magazines
may never be entirely disabled in this implementation.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_hostid\fR (ulong)
.ad
.RS 12n
allocating memory.
They do this by keeping a per-cpu cache of recently
freed objects, which can then be reallocated without taking a lock.
This can improve performance on highly contended caches.
However, because objects in magazines will prevent otherwise empty slabs
from being immediately released this may not be ideal for low memory machines.
.Pp
For this reason,
.Sy spl_kmem_cache_magazine_size
can be used to set a maximum magazine size.
When this value is set to 0 the magazine size will
be automatically determined based on the object size.
Otherwise magazines will be limited to 2-256 objects per magazine (i.e. per CPU).
Magazines may never be entirely disabled in this implementation.
.
.It Sy spl_hostid Ns = Ns Sy 0 Pq ulong
The system hostid; when set, this can be used to uniquely identify a system.
By default this value is set to zero which indicates the hostid is disabled.
It can be explicitly enabled by placing a unique non-zero value in
\fB/etc/hostid/\fR.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_hostid_path\fR (charp)
.ad
.RS 12n
The expected path to locate the system hostid when specified. This value
may be overridden for non-standard configurations.
.sp
Default value: \fB/etc/hostid\fR
.RE
.sp
.ne 2
.na
\fBspl_panic_halt\fR (uint)
.ad
.RS 12n
Cause a kernel panic on assertion failures. When not enabled, the thread is
halted to facilitate further debugging.
.sp
.Pa /etc/hostid .
.
.It Sy spl_hostid_path Ns = Ns Pa /etc/hostid Pq charp
The expected path to locate the system hostid when specified.
This value may be overridden for non-standard configurations.
.
.It Sy spl_panic_halt Ns = Ns Sy 0 Pq uint
Cause a kernel panic on assertion failures.
When not enabled, the thread is halted to facilitate further debugging.
.Pp
Set to a non-zero value to enable.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_kick\fR (uint)
.ad
.RS 12n
Kick stuck taskq to spawn threads. When writing a non-zero value to it, it will
scan all the taskqs. If any of them have a pending task more than 5 seconds old,
it will kick it to spawn more threads. This can be used if you find a rare
.
.It Sy spl_taskq_kick Ns = Ns Sy 0 Pq uint
Kick stuck taskq to spawn threads.
When writing a non-zero value to it, it will scan all the taskqs.
If any of them have a pending task more than 5 seconds old,
it will kick it to spawn more threads.
This can be used if you find that a rare
deadlock occurs because one or more taskqs didn't spawn a thread when they should.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_bind\fR (int)
.ad
.RS 12n
Bind taskq threads to specific CPUs. When enabled all taskq threads will
be distributed evenly over the available CPUs. By default, this behavior
is disabled to allow the Linux scheduler the maximum flexibility to determine
where a thread should run.
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_dynamic\fR (int)
.ad
.RS 12n
Allow dynamic taskqs. When enabled taskqs which set the TASKQ_DYNAMIC flag
will by default create only a single thread. New threads will be created on
demand up to a maximum allowed number to facilitate the completion of
outstanding tasks. Threads which are no longer needed will be promptly
destroyed. By default this behavior is enabled but it can be disabled to
.
.It Sy spl_taskq_thread_bind Ns = Ns Sy 0 Pq int
Bind taskq threads to specific CPUs.
When enabled all taskq threads will be distributed evenly
across the available CPUs.
By default, this behavior is disabled to allow the Linux scheduler
the maximum flexibility to determine where a thread should run.
.
.It Sy spl_taskq_thread_dynamic Ns = Ns Sy 1 Pq int
Allow dynamic taskqs.
When enabled taskqs which set the
.Sy TASKQ_DYNAMIC
flag will by default create only a single thread.
New threads will be created on demand up to a maximum allowed number
to facilitate the completion of outstanding tasks.
Threads which are no longer needed will be promptly destroyed.
By default this behavior is enabled but it can be disabled to
aid performance analysis or troubleshooting.
.sp
Default value: \fB1\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_priority\fR (int)
.ad
.RS 12n
.
.It Sy spl_taskq_thread_priority Ns = Ns Sy 1 Pq int
Allow newly created taskq threads to set a non-default scheduler priority.
When enabled the priority specified when a taskq is created will be applied
to all threads created by that taskq. When disabled all threads will use
the default Linux kernel thread priority. By default, this behavior is
enabled.
.sp
Default value: \fB1\fR
.RE
.sp
.ne 2
.na
\fBspl_taskq_thread_sequential\fR (int)
.ad
.RS 12n
When enabled, the priority specified when a taskq is created will be applied
to all threads created by that taskq.
When disabled all threads will use the default Linux kernel thread priority.
By default, this behavior is enabled.
.
.It Sy spl_taskq_thread_sequential Ns = Ns Sy 4 Pq int
The number of items a taskq worker thread must handle without interruption
before requesting a new worker thread be spawned. This is used to control
before requesting a new worker thread be spawned.
This is used to control
how quickly taskqs ramp up the number of threads processing the queue.
Because Linux thread creation and destruction are relatively inexpensive, a
small default value has been selected. This means that normally threads will
be created aggressively which is desirable. Increasing this value will
small default value has been selected.
This means that normally threads will be created aggressively, which is desirable.
Increasing this value will
result in a slower thread creation rate which may be preferable for some
configurations.
.sp
Default value: \fB4\fR
.RE
.sp
.ne 2
.na
\fBspl_max_show_tasks\fR (uint)
.ad
.RS 12n
.
.It Sy spl_max_show_tasks Ns = Ns Sy 512 Pq uint
The maximum number of tasks per pending list in each taskq shown in
/proc/spl/{taskq,taskq-all}. Write 0 to turn off the limit. The proc file will
walk the lists with lock held, reading it could cause a lock up if the list
grow too large without limiting the output. "(truncated)" will be shown if the
list is larger than the limit.
.sp
Default value: \fB512\fR
.RE
.Pa /proc/spl/taskq{,-all} .
Write
.Sy 0
to turn off the limit.
The proc file will walk the lists with a lock held;
reading it could cause a lock-up if the lists grow too large
without limiting the output.
"(truncated)" will be shown if the list is larger than the limit.
.
.El
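
Since these are ordinary Linux kernel module parameters, they can be inspected and tuned through sysfs, or set persistently at module load. A minimal sketch, assuming the standard /sys/module/spl/parameters/ location; the values shown are illustrative, not recommendations:

    # Read the current value of a tunable.
    cat /sys/module/spl/parameters/spl_kmem_cache_obj_per_slab

    # Change a tunable at runtime (root required; not every
    # parameter remains writable after the module is loaded).
    echo 16 > /sys/module/spl/parameters/spl_kmem_cache_obj_per_slab

    # Kick stuck taskqs, as described under spl_taskq_kick above.
    echo 1 > /sys/module/spl/parameters/spl_taskq_kick

    # Make a setting persistent across module loads.
    echo 'options spl spl_hostid=0xdeadbeef' > /etc/modprobe.d/spl.conf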

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -21,232 +21,172 @@
.\" LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
.\" OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
.\" WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
.TH ZFS-MOUNT-GENERATOR 8 "Apr 19, 2021" OpenZFS
.SH "NAME"
zfs\-mount\-generator \- generates systemd mount units for ZFS
.SH SYNOPSIS
.B @systemdgeneratordir@/zfs\-mount\-generator
.sp
.SH DESCRIPTION
zfs\-mount\-generator implements the \fBGenerators Specification\fP
of
.BR systemd (1),
and is called during early boot to generate
.BR systemd.mount (5)
units for automatically mounted datasets. Mount ordering and dependencies
are created for all tracked pools (see below).
.SS ENCRYPTION KEYS
If the dataset is an encryption root, a service that loads the associated key (either from file or through a
.BR systemd\-ask\-password (1)
prompt) will be created. This service
. BR RequiresMountsFor
the path of the key (if file-based) and also copies the mount unit's
.BR After ,
.BR Before
.\"
.Dd May 31, 2021
.Dt ZFS-MOUNT-GENERATOR 8
.Os
.
.Sh NAME
.Nm zfs-mount-generator
.Nd generate systemd mount units for ZFS filesystems
.Sh SYNOPSIS
.Pa @systemdgeneratordir@/zfs-mount-generator
.
.Sh DESCRIPTION
.Nm
is a
.Xr systemd.generator 7
that generates native
.Xr systemd.mount 5
units for configured ZFS datasets.
.
.Ss Properties
.Bl -tag -compact -width "org.openzfs.systemd:required-by=unit[ unit]…"
.It Sy mountpoint Ns =
.No Skipped if Sy legacy No or Sy none .
.
.It Sy canmount Ns =
.No Skipped if Sy off .
.No Skipped if only Sy noauto
datasets exist for a given mountpoint and there's more than one.
.No Datasets with Sy yes No take precedence over ones with Sy noauto No for the same mountpoint.
.No Sets logical Em noauto No flag if Sy noauto .
Encryption roots always generate
.Sy zfs-load-key@ Ns Ar root Ns Sy .service ,
even if
.Sy off .
.
.It Sy atime Ns = , Sy relatime Ns = , Sy devices Ns = , Sy exec Ns = , Sy readonly Ns = , Sy setuid Ns = , Sy nbmand Ns =
Used to generate mount options equivalent to
.Nm zfs Cm mount .
.
.It Sy encroot Ns = , Sy keylocation Ns =
If the dataset is an encryption root, its mount unit will bind to
.Sy zfs-load-key@ Ns Ar root Ns Sy .service ,
with additional dependencies as follows:
.Bl -tag -compact -offset Ds -width "keylocation=https://URL (et al.)"
.It Sy keylocation Ns = Ns Sy prompt
None, uses
.Xr systemd-ask-password 1
.It Sy keylocation Ns = Ns Sy https:// Ns Ar URL Pq et al.\&
.Sy Wants Ns = , Sy After Ns = : Pa network-online.target
.It Sy keylocation Ns = Ns Sy file:// Ns < Ns Ar path Ns >
.Sy RequiresMountsFor Ns = Ns Ar path
.El
.
The service also uses the same
.Sy Wants Ns = ,
.Sy After Ns = ,
.Sy Requires Ns = , No and
.Sy RequiresMountsFor Ns =
as the mount unit.
.
.It Sy org.openzfs.systemd:requires Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns …
.No Sets Sy Requires Ns = for the mount- and key-loading unit.
.
.It Sy org.openzfs.systemd:requires-mounts-for Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns …
.No Sets Sy RequiresMountsFor Ns = for the mount- and key-loading unit.
.
.It Sy org.openzfs.systemd:before Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets Sy Before Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:after Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets Sy After Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:wanted-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets logical Em noauto No flag (see below).
.No If not Sy none , No sets Sy WantedBy Ns = for the mount unit.
.It Sy org.openzfs.systemd:required-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns …
.No Sets logical Em noauto No flag (see below).
.No If not Sy none , No sets Sy RequiredBy Ns = for the mount unit.
.
.It Sy org.openzfs.systemd:nofail Ns = Ns (unset) Ns | Ns Sy on Ns | Ns Sy off
Waxes or wanes the strength of the default reverse dependencies of the mount unit; see below.
.
.It Sy org.openzfs.systemd:ignore Ns = Ns Sy on Ns | Ns Sy off
.No Skip if Sy on .
.No Defaults to Sy off .
.El
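
As an illustration of the properties above, a sketch of tailoring the generated units for some datasets (the dataset names and units are hypothetical):

    # Skip unit generation for a scratch dataset entirely.
    zfs set org.openzfs.systemd:ignore=on rpool/scratch

    # Order the mount unit before a service, and have that service want it.
    zfs set org.openzfs.systemd:before=my-app.service rpool/srv
    zfs set org.openzfs.systemd:wanted-by=my-app.service rpool/srv

    # Require another mountpoint to be mounted first.
    zfs set org.openzfs.systemd:requires-mounts-for=/var/lib/keys rpool/secure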
.
.Ss Unit Ordering And Dependencies
Additionally, unless the pool the dataset resides on
is imported at generation time, both units gain
.Sy Wants Ns = Ns Pa zfs-import.target
and
.BR Requires .
All mount units of encrypted datasets add the key\-load service for their encryption root to their
.BR Wants
and
.BR After .
The service will not be
.BR Want ed
or
.BR Require d
by
.BR local-fs.target
directly, and so will only be started manually or as a dependency of a started mount unit.
.SS UNIT ORDERING AND DEPENDENCIES
mount unit's
.BR Before
\->
key\-load service (if any)
\->
mount unit
\->
mount unit's
.BR After
It is worth noting that when a mount unit is activated, it activates all available mount units for parent paths to its mountpoint, i.e. activating the mount unit for /tmp/foo/1/2/3 automatically activates all available mount units for /tmp, /tmp/foo, /tmp/foo/1, and /tmp/foo/1/2. This is true for any combination of mount units from any sources, not just ZFS.
.SS CACHE FILE
.Sy After Ns = Ns Pa zfs-import.target .
.Pp
Additionally, unless the logical
.Em noauto
flag is set, the mount unit gains a reverse-dependency for
.Pa local-fs.target
of strength
.Bl -tag -compact -offset Ds -width "(unset)"
.It (unset)
.Sy WantedBy Ns = No + Sy Before Ns =
.It Sy on
.Sy WantedBy Ns =
.It Sy off
.Sy RequiredBy Ns = No + Sy Before Ns =
.El
.
.Ss Cache File
Because ZFS pools may not be available very early in the boot process,
information on ZFS mountpoints must be stored separately. The output of the command
.PP
.RS 4
zfs list -H -o name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore
.RE
.PP
for datasets that should be mounted by systemd, should be kept
separate from the pool at
.RI @sysconfdir@/zfs/zfs-list.cache/ POOLNAME .
.PP
The cache file, if writeable, will be kept synchronized with the pool
state by the
.I history_event-zfs-list-cacher.sh
ZEDLET.
.PP
.sp
.SS PROPERTIES
The behavior of the generator script can be influenced by the following dataset properties:
.sp
.TP 4
.BR canmount = on | off | noauto
If a dataset has
.BR mountpoint
set and
.BR canmount
is not
.BR off ,
a mount unit will be generated.
Additionally, if
.BR canmount
is
.BR on ,
.BR local-fs.target
will gain a dependency on the mount unit.
This behavior is equal to the
.BR auto
and
.BR noauto
legacy mount options, see
.BR systemd.mount (5).
Encryption roots always generate a key-load service, even for
.BR canmount=off .
.TP 4
.BR org.openzfs.systemd:requires\-mounts\-for = \fIpath\fR...
Space\-separated list of mountpoints to require to be mounted for this mount unit
.TP 4
.BR org.openzfs.systemd:before = \fIunit\fR...
The mount unit and associated key\-load service will be ordered before this space\-separated list of units.
.TP 4
.BR org.openzfs.systemd:after = \fIunit\fR...
The mount unit and associated key\-load service will be ordered after this space\-separated list of units.
.TP 4
.BR org.openzfs.systemd:wanted\-by = \fIunit\fR...
Space-separated list of units that will gain a
.BR Wants
dependency on this mount unit.
Setting this property implies
.BR noauto .
.TP 4
.BR org.openzfs.systemd:required\-by = \fIunit\fR...
Space-separated list of units that will gain a
.BR Requires
dependency on this mount unit.
Setting this property implies
.BR noauto .
.TP 4
.BR org.openzfs.systemd:nofail = unset | on | off
Toggles between a
.BR Wants
and
.BR Requires
type of dependency between the mount unit and
.BR local-fs.target ,
if
.BR noauto
isn't set or implied.
.BR on :
Mount will be
.BR WantedBy
local-fs.target
.BR off :
Mount will be
.BR Before
and
.BR RequiredBy
local-fs.target
.BR unset :
Mount will be
.BR Before
and
.BR WantedBy
local-fs.target
.TP 4
.BR org.openzfs.systemd:ignore = on | off
If set to
.BR on ,
do not generate a mount unit for this dataset.
See also
.BR systemd.mount (5)
.PP
.SH ENVIRONMENT
information on ZFS mountpoints must be stored separately.
The output of
.Dl Nm zfs Cm list Fl Ho Ar name , Ns Aq every property above in order
for datasets that should be mounted by systemd should be kept at
.Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname ,
and, if writeable, will be kept synchronized for the entire pool by the
.Pa history_event-zfs-list-cacher.sh
ZEDLET, if enabled
.Pq see Xr zed 8 .
.
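If the ZEDLET cannot be used, the cache file can also be (re)generated by hand. A sketch for a hypothetical pool named rpool, listing the properties enumerated above:

    zfs list -Hr \
        -o name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation,org.openzfs.systemd:requires,org.openzfs.systemd:requires-mounts-for,org.openzfs.systemd:before,org.openzfs.systemd:after,org.openzfs.systemd:wanted-by,org.openzfs.systemd:required-by,org.openzfs.systemd:nofail,org.openzfs.systemd:ignore \
        rpool > @sysconfdir@/zfs/zfs-list.cache/rpool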
.Sh ENVIRONMENT
The
.BR $ZFS_DEBUG
environment variable, which can either be 0 (default),
1 (print summary accounting information at the end),
or at least 2 (print accounting information for each subprocess as it finishes).
If not present, /proc/cmdline is additionally checked for
.BR debug ,
in which case the debug level is set to 2.
.SH EXAMPLE
.Sy ZFS_DEBUG
environment variable can either be
.Sy 0
(default),
.Sy 1
(print summary accounting information at the end), or at least
.Sy 2
(print accounting information for each subprocess as it finishes).
.
If not present,
.Pa /proc/cmdline
is additionally checked for
.Qq debug ,
in which case the debug level is set to
.Sy 2 .
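
For example, a hand-run of the generator with per-subprocess accounting (a sketch; the generator path is substituted at build time, and the output directory is arbitrary):

    mkdir /tmp/zfs-mount-generator
    ZFS_DEBUG=2 @systemdgeneratordir@/zfs-mount-generator /tmp/zfs-mount-generator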
.
.Sh EXAMPLES
To begin, enable tracking for the pool:
.PP
.RS 4
touch
.RI @sysconfdir@/zfs/zfs-list.cache/ POOLNAME
.RE
.PP
Then, enable the tracking ZEDLET:
.PP
.RS 4
ln -s "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" "@sysconfdir@/zfs/zed.d"
systemctl enable zfs-zed.service
systemctl restart zfs-zed.service
.RE
.PP
Force the running of the ZEDLET by setting a monitored property, e.g.
.BR canmount ,
for at least one dataset in the pool:
.PP
.RS 4
zfs set canmount=on
.I DATASET
.RE
.PP
This forces an update to the stale cache file.
To test the generator output, run
.PP
.RS 4
@systemdgeneratordir@/zfs-mount-generator /tmp/zfs-mount-generator
.RE
.PP
This will generate units and dependencies in
.I /tmp/zfs-mount-generator
for you to inspect them. The second and third argument are ignored.
If you're satisfied with the generated units, instruct systemd to re-run all generators:
.PP
.RS 4
systemctl daemon-reload
.RE
.PP
.sp
.SH SEE ALSO
.BR zfs (5)
.BR zfs-events (5)
.BR zed (8)
.BR zpool (5)
.BR systemd (1)
.BR systemd.target (5)
.BR systemd.special (7)
.BR systemd.mount (7)
.Dl # Nm touch Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname
Then enable the tracking ZEDLET:
.Dl # Nm ln Fl s Pa @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh @sysconfdir@/zfs/zed.d
.Dl # Nm systemctl Cm enable Pa zfs-zed.service
.Dl # Nm systemctl Cm restart Pa zfs-zed.service
.Pp
If no history event is in the queue,
inject one to ensure the ZEDLET runs to refresh the cache file
by setting a monitored property somewhere on the pool:
.Dl # Nm zfs Cm set Sy relatime Ns = Ns Sy off Ar poolname/dset
.Dl # Nm zfs Cm inherit Sy relatime Ar poolname/dset
.Pp
To test the generator output:
.Dl $ Nm mkdir Pa /tmp/zfs-mount-generator
.Dl $ Nm @systemdgeneratordir@/zfs-mount-generator Pa /tmp/zfs-mount-generator
.
If the generated units are satisfactory, instruct
.Nm systemd
to re-run all generators:
.Dl # Nm systemctl daemon-reload
.
.Sh SEE ALSO
.Xr systemd.mount 5 ,
.Xr systemd.target 5 ,
.Xr zfs 5 ,
.Xr zfs-events 5 ,
.Xr systemd.generator 7 ,
.Xr systemd.special 7 ,
.Xr zed 8


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,25 +26,20 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 31, 2021
.Dt ZFS-WAIT 8
.Os
.
.Sh NAME
.Nm zfs-wait
.Nd Wait for background activity to stop in a ZFS filesystem
.Nd wait for activity in a ZFS filesystem to stop
.Sh SYNOPSIS
.Nm zfs
.Cm wait
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns ...
.Ar fs
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
.Ar filesystem
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zfs
.Cm wait
.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns ...
.Ar fs
.Xc
Waits until all background activity of the given types has ceased in the given
filesystem.
The activity could cease because it has completed or because the filesystem has
@@ -58,13 +52,14 @@ immediately.
These are the possible values for
.Ar activity ,
along with what each one waits for:
.Bd -literal
deleteq The filesystem's internal delete queue to empty
.Ed
.Bl -tag -compact -offset Ds -width "deleteq"
.It Sy deleteq
The filesystem's internal delete queue to empty
.El
.Pp
Note that the internal delete queue does not finish draining until
all large files have had time to be fully destroyed and all open file
handles to unlinked files are closed.
.El
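
A usage sketch (the dataset name is hypothetical):

    # Block until the internal delete queue of rpool/home has drained.
    zfs wait -t deleteq rpool/home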
.
.Sh SEE ALSO
.Xr lsof 8


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,29 +26,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 31, 2021
.Dt ZPOOL-DESTROY 8
.Os
.
.Sh NAME
.Nm zpool-destroy
.Nd Destroys the given ZFS storage pool, freeing up any devices for other use
.Nd destroy ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm destroy
.Op Fl f
.Ar pool
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
Forcefully unmount all active datasets.
.El
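
For example (the pool name is hypothetical):

    # Destroy the pool, forcibly unmounting any busy datasets first.
    zpool destroy -f tank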


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -30,24 +29,17 @@
.Dd February 16, 2020
.Dt ZPOOL-EXPORT 8
.Os
.
.Sh NAME
.Nm zpool-export
.Nd Exports the given ZFS storage pools from the system
.Nd export ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Fl a Ns | Ns Ar pool Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
@@ -69,15 +61,12 @@ the disks.
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
This option is not supported on Linux.
Forcefully unmount all datasets, and allow export of pools with active shared spares.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.El
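
For instance (the pool name is hypothetical):

    # Export a single pool, or every imported pool.
    zpool export tank
    zpool export -a

    # Force the export; as noted above, this may risk corruption
    # if a shared spare is actively in use.
    zpool export -f tank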
.
.Sh SEE ALSO
.Xr zpool-import 8


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -30,22 +29,17 @@
.Dd August 9, 2019
.Dt ZPOOL-HISTORY 8
.Os
.
.Sh NAME
.Nm zpool-history
.Nd Displays the command history of the specified ZFS storage pool(s)
.Nd inspect command history of ZFS storage pools
.Sh SYNOPSIS
.Nm zpool
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Oo Ar pool Oc Ns …
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
@@ -56,7 +50,7 @@ Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.El
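
For example (the pool name is hypothetical):

    # Show long-format history, including internally logged events.
    zpool history -il tank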
.
.Sh SEE ALSO
.Xr zpool-checkpoint 8 ,
.Xr zpool-events 8 ,


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,25 +26,20 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 31, 2021
.Dt ZPOOL-LABELCLEAR 8
.Os
.
.Sh NAME
.Nm zpool-labelclear
.Nd Removes ZFS label information from the specified physical device
.Nd remove ZFS label information from device
.Sh SYNOPSIS
.Nm zpool
.Cm labelclear
.Op Fl f
.Ar device
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm labelclear
.Op Fl f
.Ar device
.Xc
Removes ZFS label information from the specified
.Ar device .
If the
@@ -58,7 +52,7 @@ must not be part of an active pool configuration.
.It Fl f
Treat exported or foreign devices as inactive.
.El
.El
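
For example (the device path is hypothetical):

    # Clear the label from a device that belonged to an exported pool.
    zpool labelclear -f /dev/sdb1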
.
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,


@@ -18,7 +18,6 @@
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
@@ -27,27 +26,23 @@
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dd May 31, 2021
.Dt ZPOOL-REGUID 8
.Os
.
.Sh NAME
.Nm zpool-reguid
.Nd Generate a new unique identifier for a ZFS storage pool
.Nd generate new unique identifier for ZFS storage pool
.Sh SYNOPSIS
.Nm zpool
.Cm reguid
.Ar pool
.
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm zpool
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
.El
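
For example (the pool name is hypothetical):

    # Assign a fresh unique identifier to the pool.
    zpool reguid tank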
.
.Sh SEE ALSO
.Xr zpool-export 8 ,
.Xr zpool-import 8


@@ -26,7 +26,7 @@ fi
IFS="
"
files="$(find "$@" -type f -name '*[1-9]*' ! -name '*module-param*' ! -name 'zpool-features*' ! -name 'zfs-mount-generator*')" || exit 1
files="$(find "$@" -type f -name '*[1-9]*')" || exit 1
add_excl="$(awk '
/^.\\" lint-ok:/ {