.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Oracle and/or its affiliates. All rights reserved.
.\" Copyright (c) 2009 Michael Gebetsroither <michael.geb@gmx.at>. All rights
.\" reserved.
.\" Copyright (c) 2017, Intel Corporation.
.\"
.Dd May 26, 2021
.Dt ZTEST 1
.Os
.
.Sh NAME
.Nm ztest
.Nd ZFS unit test written by the ZFS Developers
.Sh SYNOPSIS
.Nm
.Op Fl VEG
.Op Fl v Ar vdevs
.Op Fl s Ar size_of_each_vdev
.Op Fl a Ar alignment_shift
.Op Fl m Ar mirror_copies
.Op Fl r Ar raidz_disks/draid_disks
.Op Fl R Ar raid_parity
.Op Fl K Ar raid_kind
.Op Fl D Ar draid_data
.Op Fl S Ar draid_spares
.Op Fl C Ar vdev_class_state
.Op Fl d Ar datasets
.Op Fl t Ar threads
.Op Fl g Ar gang_block_threshold
.Op Fl i Ar initialize_pool_i_times
.Op Fl k Ar kill_percentage
.Op Fl p Ar pool_name
.Op Fl T Ar time
.Op Fl z Ar zil_failure_rate
.
.Nm
.Fl X
.Op Fl VG
.Op Fl s Ar size_of_each_vdev
.Op Fl a Ar alignment_shift
.Op Fl r Ar raidz_disks
.Op Fl R Ar raid_parity
.Op Fl d Ar datasets
.Op Fl t Ar threads
.
.Sh DESCRIPTION
.Nm
was written by the ZFS Developers as a ZFS unit test.
The tool was developed in tandem with the ZFS functionality and was
executed nightly as one of the many regression tests against the daily build.
As features were added to ZFS, unit tests were also added to
.Nm .
In addition, a separate test development team wrote and
executed more functional and stress tests.
.
.Pp
By default,
.Nm
runs for five minutes and uses block files
(stored in
.Pa /tmp )
to create pools rather than using physical disks.
Block files afford
.Nm
its flexibility to play around with
zpool components without requiring large hardware configurations.
However, storing the block files in
.Pa /tmp
may not work for you if you
have a small tmp directory.
.
.Pp
By default,
.Nm
is non-verbose.
This is why running the command with no options will result in
.Nm
quietly executing for 5 minutes.
The
.Fl V
option can be used to increase the verbosity of the tool.
Adding multiple
.Fl V
options is allowed, and the more you add, the more chatty
.Nm
becomes.
.
.Pp
After the
.Nm
run completes, you should notice many
.Pa ztest.*
files lying around.
These files can be safely removed once the run has finished.
Do not remove them while a run is still in progress.
You can re-use these files in your next
.Nm
run by using the
.Fl E
option.
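.Pp
For instance, a quick follow-up run that reuses the block files left by the
previous run (both flags are documented below; the 60 second runtime is
arbitrary) could look like:
.Dl # ztest -E -T 60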
.
.Sh OPTIONS
.Bl -tag -width "-v v"
.It Fl h , \&? , -help
Print a help summary.
.It Fl v , -vdevs Ns = (default: Sy 5 )
Number of vdevs.
.It Fl s , -vdev-size Ns = (default: Sy 64M )
Size of each vdev.
.It Fl a , -alignment-shift Ns = (default: Sy 9 ) No (use Sy 0 No for random )
Alignment shift used in test.
.It Fl m , -mirror-copies Ns = (default: Sy 2 )
Number of mirror copies.
.It Fl r , -raid-disks Ns = (default: Sy 4 No for raidz/ Ns Sy 16 No for draid )
Number of raidz/draid disks.
.It Fl R , -raid-parity Ns = (default: Sy 1 )
RAID parity (raidz and draid).
.It Xo
.Fl K , -raid-kind Ns = Ns
.Sy raidz Ns | Ns Sy eraidz Ns | Ns Sy draid Ns | Ns Sy random
(default:
.Sy random Ns
)
.Xc
The kind of RAID config to use.
With
.Sy random
the kind alternates between raidz, eraidz (expandable raidz) and draid.
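.Pp
For example, a dRAID-only run can be requested by combining this flag with the
dRAID options documented below (the values shown simply restate their
defaults):
.Dl # ztest -K draid -D 4 -S 1 -r 16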
.It Fl D , -draid-data Ns = (default: Sy 4 )
Number of data disks in a dRAID redundancy group.
.It Fl S , -draid-spares Ns = (default: Sy 1 )
Number of dRAID distributed spare disks.
.It Fl d , -datasets Ns = (default: Sy 7 )
Number of datasets.
.It Fl t , -threads Ns = (default: Sy 23 )
Number of threads.
.It Fl g , -gang-block-threshold Ns = (default: Sy 32K )
Gang block threshold.
.It Fl i , -init-count Ns = (default: Sy 1 )
Number of pool initializations.
.It Fl k , -kill-percentage Ns = (default: Sy 70% )
Kill percentage.
.It Fl p , -pool-name Ns = (default: Sy ztest )
Pool name.
.It Fl f , -vdev-file-directory Ns = (default: Pa /tmp )
Directory for the vdev files.
.It Fl M , -multi-host
Multi-host; simulate pool imported on remote host.
.It Fl E , -use-existing-pool
Use an existing pool instead of creating a new one.
.It Fl T , -run-time Ns = (default: Sy 300 Ns s)
Total test run time.
.It Fl P , -pass-time Ns = (default: Sy 60 Ns s)
Time per pass.
.It Fl F , -freeze-loops Ns = (default: Sy 50 )
Max loops in
.Fn spa_freeze .
.It Fl B , -alt-ztest Ns =
Path to alternate ("older")
.Nm ztest
to drive, which will be used to initialise the pool and, half of the time
(chosen at random), to run the tests.
The parallel
.Pa lib
directory is prepended to
.Ev LD_LIBRARY_PATH ;
i.e. given
.Fl B Pa ./chroots/lenny/usr/bin/ Ns Nm ,
.Pa ./chroots/lenny/usr/lib
will be loaded.
.It Fl C , -vdev-class-state Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy random No (default: Sy random )
The vdev allocation class state.
.It Fl o , -option Ns = Ns Ar variable Ns = Ns Ar value
Set global
.Ar variable
to an unsigned 32-bit integer
.Ar value
(little-endian only).
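.Pp
A minimal illustration (the variable name is only an example, and assumes the
libzpool build exports a 32-bit global of that name):
.Dl # ztest -o zfs_flags=1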
.It Fl G , -dump-debug
Dump zfs_dbgmsg buffer before exiting due to an error.
.It Fl V , -verbose
Verbose (use multiple times for ever more verbosity).
.It Fl X , -raidz-expansion
Perform a dedicated raidz expansion test.
.El
.
.Sh EXAMPLES
To override
.Pa /tmp
as your location for block files, you can use the
.Fl f
option:
.Dl # ztest -f /
.Pp
To get an idea of what
.Nm
is actually testing, try this:
.Dl # ztest -f / -VVV
.Pp
Maybe you'd like to run
.Nm ztest
for longer? To do so simply use the
.Fl T
option and specify the run length in seconds like so:
.Dl # ztest -f / -V -T 120
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZF"
.It Ev ZFS_HOSTID Ns = Ns Em id
Use
.Em id
instead of the SPL hostid to identify this host.
Intended for use with
.Nm ,
but this environment variable will affect any utility which uses
libzpool, including
.Xr zpool 8 .
Since the kernel is unaware of this setting,
results with utilities other than ztest are undefined.
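.Pp
For example (an arbitrary hostid, written here in hexadecimal):
.Dl # ZFS_HOSTID=0xdeadbeef ztest -V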
.It Ev ZFS_STACK_SIZE Ns = Ns Em stacksize
Limit the default stack size to
.Em stacksize
bytes for the purpose of
detecting and debugging kernel stack overflows.
This value defaults to
.Em 32K
which is double the default
.Em 16K
Linux kernel stack size.
.Pp
In practice, setting the stack size slightly higher is needed because
differences in stack usage between kernel and user space can lead to spurious
stack overflows (especially when debugging is enabled).
The specified value
will be rounded up to a floor of PTHREAD_STACK_MIN which is the minimum stack
required for a NULL procedure in user space.
.Pp
By default the stack size is limited to
.Em 256K .
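.Pp
For example, assuming the value is accepted as a plain byte count, a 128K
limit could be requested with:
.Dl # ZFS_STACK_SIZE=131072 ztest -V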
.El
.
.Sh SEE ALSO
.Xr zdb 1 ,
.Xr zfs 1 ,
.Xr zpool 1 ,
.Xr spl 4