mirror_zfs/man/man1/ztest.1
Brian Behlendorf b2255edcc0
Distributed Spare (dRAID) Feature
This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID.  This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`.  No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>
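
For example, assuming eleven hypothetical disks, a double-parity dRAID
pool could be created with:

    zpool create tank draid2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk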

Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon separated values. This
allows administrators to fully specify a layout for either performance
or capacity reasons.  The supported options include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[<parity>]     - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)
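
For example, the 68-disk layout shown in the status output below could
be created as follows (the disk names are illustrative):

    zpool create tank draid2:8d:68c:2s d1 d2 ... d68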

Abbreviated `zpool status` output for a 68-disk dRAID pool with two
distributed spares and special allocation classes:

```
  pool: tank
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
    spares
      draid2-0-0          INUSE     currently in use
      draid2-0-1          AVAIL
```

When adding test coverage for the new dRAID vdev type, the following
options were added to the ztest command.  These options are leveraged
by zloop.sh to test a wide range of dRAID configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)
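
For example, one invocation that exercises a double-parity dRAID
layout directly (the values are illustrative):

    ztest -K draid -R 2 -D 8 -S 2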

The zpool_create, zpool_import, redundancy, replacement and fault
test groups have all been updated to provide test coverage for the
dRAID feature.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102
2020-11-13 13:51:51 -08:00


'\" t
.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2009 Oracle and/or its affiliates. All rights reserved.
.\" Copyright (c) 2009 Michael Gebetsroither <michael.geb@gmx.at>. All rights
.\" reserved.
.\" Copyright (c) 2017, Intel Corporation.
.\"
.TH ZTEST 1 "Aug 24, 2020" OpenZFS
.SH NAME
\fBztest\fR \- ZFS unit test tool
.SH SYNOPSIS
.LP
.BI "ztest <options>"
.SH DESCRIPTION
.LP
This manual page documents briefly the \fBztest\fR command.
.LP
\fBztest\fR was written by the ZFS Developers as a ZFS unit test. The
tool was developed in tandem with the ZFS functionality and was
executed nightly as one of the many regression tests against the daily
build. As features were added to ZFS, unit tests were also added to
\fBztest\fR. In addition, a separate test development team wrote and
executed more functional and stress tests.
.LP
By default \fBztest\fR runs for five minutes (see the -T option) and
uses block files (stored in /tmp) to create pools rather than using
physical disks. Block files afford \fBztest\fR the flexibility to
exercise zpool components without requiring large hardware
configurations. However, storing the block files in /tmp may not work
for you if you have a small tmp directory.
.LP
By default \fBztest\fR is non-verbose, so simply running the command
will result in it quietly executing for five minutes. The -V option
can be used to increase the verbosity of the tool. Multiple -V options
are allowed; the more you add, the chattier \fBztest\fR becomes.
.LP
After the \fBztest\fR run completes, you should notice many ztest.*
files lying around. These can be safely removed once the run has
finished, but you shouldn't remove them while a run is in progress.
You can re-use these files in your next \fBztest\fR run by using the
-E option.
.SH OPTIONS
.HP
.BI "\-?" ""
.IP
Print a help summary.
.HP
.BI "\-v" " vdevs" " (default: 5)
.IP
Number of vdevs.
.HP
.BI "\-s" " size_of_each_vdev" " (default: 64M)"
.IP
Size of each vdev.
.HP
.BI "\-a" " alignment_shift" " (default: 9) (use 0 for random)"
.IP
Alignment shift used in the test.
.HP
.BI "\-m" " mirror_copies" " (default: 2)"
.IP
Number of mirror copies.
.HP
.BI "\-r" " raidz_disks / draid_disks" " (default: 4 / 16)"
.IP
Number of raidz or dRAID disks.
.HP
.BI "\-R" " raid_parity" " (default: 1)"
.IP
Raid parity (raidz & draid).
.HP
.BI "\-K" " raid_kind" " (default: 'random') raidz|draid|random"
.IP
The kind of RAID config to use. With 'random' the kind alternates between raidz and draid.
.HP
.BI "\-D" " draid_data" " (default: 4)"
.IP
Number of data disks in a dRAID redundancy group.
.HP
.BI "\-S" " draid_spares" " (default: 1)"
.IP
Number of dRAID distributed spare disks.
.HP
.BI "\-C" " vdev_class_state" " (default: random)"
.IP
The vdev allocation class state: special=on|off|random.
.HP
.BI "\-d" " datasets" " (default: 7)"
.IP
Number of datasets.
.HP
.BI "\-t" " threads" " (default: 23)"
.IP
Number of threads.
.HP
.BI "\-g" " gang_block_threshold" " (default: 32K)"
.IP
Gang block threshold.
.HP
.BI "\-i" " initialize_pool_i_times" " (default: 1)"
.IP
Number of pool initialisations.
.HP
.BI "\-k" " kill_percentage" " (default: 70%)"
.IP
Kill percentage.
.HP
.BI "\-p" " pool_name" " (default: ztest)"
.IP
Pool name.
.HP
.BI "\-V(erbose)"
.IP
Verbose (use multiple times for ever more blather).
.HP
.BI "\-E(xisting)"
.IP
Use an existing pool instead of creating a new one.
.HP
.BI "\-T" " time" " (default: 300 sec)"
.IP
Total test run time.
.HP
.BI "\-z" " zil_failure_rate" " (default: fail every 2^5 allocs)
.IP
Injected failure rate.
.HP
.BI "\-G"
.IP
Dump zfs_dbgmsg buffer before exiting.
.SH "EXAMPLES"
.LP
To override /tmp as your location for block files, you can use the -f
option:
.IP
ztest -f /
.LP
To get an idea of what \fBztest\fR is actually testing, try this:
.IP
ztest -f / -VVV
.LP
Maybe you'd like to run \fBztest\fR for longer? To do so simply use
the -T option and specify the run length in seconds like so:
.IP
ztest -f / -V -T 120
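.LP
To exercise the dRAID vdev type with double parity, eight data disks
per group, and one distributed spare (the values are only
illustrative):
.IP
ztest -K draid -R 2 -D 8 -S 1
.LP
To re-use the pool and files left behind by a previous run rather than
creating a new pool:
.IP
ztest -E -T 120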
.SH "ENVIRONMENT VARIABLES"
.TP
.B "ZFS_HOSTID=id"
Use \fBid\fR instead of the SPL hostid to identify this host. Intended for use
with ztest, but this environment variable will affect any utility which uses
libzpool, including \fBzpool(8)\fR. Since the kernel is unaware of this
setting, results with utilities other than ztest are undefined.
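.IP
For example, to run \fBztest\fR under a fixed hostid (the value shown
is arbitrary):
.IP
ZFS_HOSTID=8675309 ztest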
.TP
.B "ZFS_STACK_SIZE=stacksize"
Limit the default stack size to \fBstacksize\fR bytes for the purpose of
detecting and debugging kernel stack overflows. This value defaults to
\fB32K\fR which is double the default \fB16K\fR Linux kernel stack size.
In practice, setting the stack size slightly higher is needed because
differences in stack usage between kernel and user space can lead to spurious
stack overflows (especially when debugging is enabled). The specified value
will never be smaller than PTHREAD_STACK_MIN, the minimum stack
required for a NULL procedure in user space.
By default the stack size is limited to 256K.
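.IP
For example, to enforce a 64K stack limit (sizes are given in bytes;
the value shown is only a suggestion):
.IP
ZFS_STACK_SIZE=65536 ztest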
.SH "SEE ALSO"
.BR "spl-module-parameters (5)" ","
.BR "zpool (1)" ","
.BR "zfs (1)" ","
.BR "zdb (1)" ","
.SH "AUTHOR"
This manual page was transferred to asciidoc by Michael Gebetsroither
<gebi@grml.org> from http://opensolaris.org/os/community/zfs/ztest/