mirror of
https://git.proxmox.com/git/mirror_zfs.git
synced 2024-11-17 10:01:01 +03:00
69142125d7
Add `zpool` flags to control the slot power to drives. This assumes your
SAS or NVMe enclosure supports slot power control via sysfs.

The new `--power` flag is added to `zpool offline|online|clear`:

    zpool offline --power <pool> <device>    Turn off device slot power
    zpool online --power <pool> <device>     Turn on device slot power
    zpool clear --power <pool> [device]      Turn on device slot power

If the ZPOOL_AUTO_POWER_ON_SLOT env var is set, then the '--power' option
is automatically implied for `zpool online` and `zpool clear` and does not
need to be passed. `zpool status` also gets a --power option to print the
slot power status.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mart Frauenlob <AllKind@fastest.cc>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #15662
614 lines
18 KiB
Groff
.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 16, 2022
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 7
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Converts a non-redundant disk into a mirror, or increases
the redundancy level of an existing mirror
.Cm ( attach Ns ), or performs the inverse operation (
.Cm detach Ns ).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases
its redundancy).
.El
.
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
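.Pp
For example, to sync only the pool
.Ar tank :
.Dl # Nm zpool Cm sync Ar tank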
.It Xr zpool-upgrade 8
Manage the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopen all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Make disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
.\" Examples 6, 14 are shared with zpool-add.8.
.\" Examples 7, 16 are shared with zpool-list.8.
.\" Examples 8 are shared with zpool-destroy.8.
.\" Examples 9 are shared with zpool-export.8.
.\" Examples 10 are shared with zpool-import.8.
.\" Examples 11 are shared with zpool-upgrade.8.
.\" Examples 15 are shared with zpool-remove.8.
.\" Examples 17 are shared with zpool-status.8.
.\" Examples 14, 17 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored
The following command converts an existing single device
.Ar sda
into a mirror by attaching a second device to it,
.Ar sdb .
.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
.
.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 7 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 8 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 9 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 10 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank    ONLINE
          mirror ONLINE
            sda  ONLINE
            sdb  ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 12 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 16 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool is comprised of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.
.Ss Example 17 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output.
.It Sy ZPOOL_AUTO_POWER_ON_SLOT
Automatically attempt to turn on a drive's enclosure slot power when
running the
.Nm zpool Cm online
or
.Nm zpool Cm clear
commands.
This has the same effect as passing the
.Fl -power
option to those commands.
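.Pp
For example, with this variable set in the environment, the following brings
.Pa sda No online and powers on its slot, just as
.Nm zpool Cm online Fl -power No would:
.Dl # Sy ZPOOL_AUTO_POWER_ON_SLOT Ns = Ns Sy 1 Nm zpool Cm online Ar tank Pa sda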
.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
The maximum time in milliseconds to wait for a slot power sysfs value
to return the correct value after writing it.
For example, after writing "on" to the sysfs enclosure slot power_control file,
it can take some time for the enclosure to power up the slot and for "on" to
be returned when the power_control value is read back.
Defaults to 30 seconds (30000 ms) if not set.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
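.Pp
For example, to import using only persistent device names:
.Dl # Sy ZPOOL_IMPORT_PATH Ns = Ns Pa /dev/disk/by-id Nm zpool Cm import Ar tank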
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.Pp
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
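.Pp
For example, assuming custom scripts were installed under a hypothetical
directory such as
.Pa /opt/zpool.d :
.Dl # Sy ZPOOL_SCRIPTS_PATH Ns = Ns Pa /opt/zpool.d Nm zpool Cm status Fl c Pa size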
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8