mirror_zfs/man/man8/zfs-mount-generator.8.in
Antonio Russo f88d069cbb systemd encryption key support
Modify zfs-mount-generator to produce a dependency on new
zfs-import-key-*.service units, dynamically created at boot to call
zfs load-key for the encryption root, before attempting to mount any
encrypted datasets.

These units are created by zfs-mount-generator and use
RequiresMountsFor on the keyfile, if one is present, or call
systemd-ask-password if a passphrase is requested.

This patch includes suggestions from @Fabian-Gruenbichler, @ryanjaeb and
@rlaager, as well as an adaptation of @rlaager's script to retry on
incorrect password entry.
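
For illustration, such a key-load unit might take roughly the following
shape (the dataset and keyfile path are hypothetical, not the
generator's literal output):

    [Unit]
    Description=Load ZFS key for tank/home
    DefaultDependencies=no
    RequiresMountsFor=/etc/zfs/keys/tank-home.key

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zfs load-key tank/home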

Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Antonio Russo <antonio.e.russo@gmail.com>
Closes #8750
Closes #8848
2019-07-15 16:31:47 -07:00

.TH "ZFS\-MOUNT\-GENERATOR" "8" "ZFS" "zfs-mount-generator" "\""
.SH "NAME"
zfs\-mount\-generator \- generates systemd mount units for ZFS
.SH SYNOPSIS
.B /lib/systemd/system-generators/zfs\-mount\-generator
.sp
.SH DESCRIPTION
zfs\-mount\-generator implements the \fBGenerators Specification\fP
of
.BR systemd (1)
(see also
.BR systemd.generator (7)),
and is called during early boot to generate
.BR systemd.mount (5)
units for automatically mounted datasets. Mount ordering and dependencies
are created for all tracked pools (see below). If a dataset has
.BR canmount=on
and
.BR mountpoint
set, the
.BR auto
mount option will be set, and
.BR local-fs.target
will be given a dependency on the mount.
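.PP
As an illustration only (a sketch, not the generator's literal output),
the unit generated for a hypothetical dataset
.I tank/home
mounted at
.IR /home ,
and pulled in by
.B local-fs.target
through a
.I .wants/
symlink, might resemble:
.PP
.RS 4
.nf
[Mount]
What=tank/home
Where=/home
Type=zfs
Options=defaults,zfsutil
.fi
.RE
.PP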
Because ZFS pools may not be available very early in the boot process,
information on ZFS mountpoints must be stored separately. The output
of the command
.PP
.RS 4
zfs list -H -o name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation
.RE
.PP
for datasets that should be mounted by systemd must be kept
separate from the pool, at
.PP
.RS 4
.RI @sysconfdir@/zfs/zfs-list.cache/ POOLNAME
.RE
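.PP
For example, the cache file for a hypothetical pool named
.I tank
could be populated by hand as follows (a sketch; adjust the pool name):
.PP
.RS 4
.nf
zfs list -r -H \e
  -o name,mountpoint,canmount,atime,relatime,devices,exec,readonly,setuid,nbmand,encroot,keylocation \e
  tank > @sysconfdir@/zfs/zfs-list.cache/tank
.fi
.RE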
.PP
The cache file, if writable, will be kept synchronized with the pool
state by the ZEDLET
.PP
.RS 4
history_event-zfs-list-cacher.sh .
.RE
.PP
.sp
.SH EXAMPLE
To begin, enable tracking for the pool:
.PP
.RS 4
touch
.RI @sysconfdir@/zfs/zfs-list.cache/ POOLNAME
.RE
.PP
Then, enable the tracking ZEDLET:
.PP
.RS 4
ln -s "@zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh" "@sysconfdir@/zfs/zed.d"
systemctl enable zed.service
systemctl restart zed.service
.RE
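.PP
To confirm that
.BR zed (8)
restarted successfully:
.PP
.RS 4
systemctl status zed.service
.RE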
.PP
Force the ZEDLET to run by setting canmount=on for at least one dataset in the pool:
.PP
.RS 4
zfs set canmount=on
.I DATASET
.RE
.PP
This forces an update to the stale cache file.
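.PP
Once the cache file is populated, its contents can be inspected
directly, and systemd can be asked to re-run its generators, which
write their units to
.IR /run/systemd/generator :
.PP
.RS 4
.nf
cat @sysconfdir@/zfs/zfs-list.cache/\fIPOOLNAME\fP
systemctl daemon-reload
ls /run/systemd/generator
.fi
.RE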
.sp
.SH SEE ALSO
.BR zfs (8)
.BR zfs-events (5)
.BR zed (8)
.BR zpool (8)
.BR systemd (1)
.BR systemd.target (5)
.BR systemd.special (7)
.BR systemd.mount (5)