mirror of
https://git.proxmox.com/git/mirror_zfs.git
synced 2024-12-27 11:29:36 +03:00
b2255edcc0
This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID.  This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`.  No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>

Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon separated values.  This
allows administrators to fully specify a layout for either performance
or capacity reasons.  The supported options include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[parity]       - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)

Abbreviated example `zpool status` output for a 68 disk dRAID pool
with two distributed spares using special allocation classes.

```
  pool: tank
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
  spares
    draid2-0-0            INUSE     currently in use
    draid2-0-1            AVAIL
```

When adding test coverage for the new dRAID vdev type the following
options were added to the ztest command.
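As a rough back-of-the-envelope sketch (not part of the commit), the usable capacity of a layout like the `draid2:8d:68c:2s` example above can be approximated from its parameters: each redundancy group is `data + parity` slots wide, distributed spares consume whole child slots, and the data fraction of the rest is `data / (data + parity)`. The variable names below are illustrative, not from the ZFS code:

```shell
# Approximate usable capacity (in whole-drive units) of a dRAID layout,
# here draid2:8d:68c:2s.  Integer shell arithmetic; this is an estimate
# and ignores metadata and allocation overhead.
parity=2
data=8
children=68
spares=2
width=$((data + parity))        # slots per redundancy group (data + parity)
usable=$((children - spares))   # child slots left after distributed spares
echo $(( usable * data / width ))   # -> 52
```

So the 68-disk example keeps roughly 52 disks' worth of user data capacity after parity and the two distributed spares.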
These options are leveraged by zloop.sh to test a wide range of dRAID
configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)

The zpool_create, zpool_import, redundancy, replacement and fault test
groups have all been updated to provide test coverage for the dRAID
feature.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102
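To make the flag-to-layout relationship concrete, here is an illustrative sketch (not from the commit): an actual run requires an OpenZFS build, so the `ztest` command is shown only in a comment, and the child count `C=23` is an arbitrary hypothetical value standing in for however many vdevs ztest creates:

```shell
# Hypothetical invocation exercising a dRAID layout with the flags above:
#   ztest -K draid -D 8 -S 2 -R 2
# A layout with parity R, data drives D, children C, and spares S is
# named draid<R>:<D>d:<C>c:<S>s; assemble that name from the same values:
R=2; D=8; S=2; C=23
echo "draid${R}:${D}d:${C}c:${S}s"   # -> draid2:8d:23c:2s
```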
107 lines
3.1 KiB
Groff
'\" t
|
|
.\"
|
|
.\" CDDL HEADER START
|
|
.\"
|
|
.\" The contents of this file are subject to the terms of the
|
|
.\" Common Development and Distribution License (the "License").
|
|
.\" You may not use this file except in compliance with the License.
|
|
.\"
|
|
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
|
|
.\" or http://www.opensolaris.org/os/licensing.
|
|
.\" See the License for the specific language governing permissions
|
|
.\" and limitations under the License.
|
|
.\"
|
|
.\" When distributing Covered Code, include this CDDL HEADER in each
|
|
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
|
|
.\" If applicable, add the following below this CDDL HEADER, with the
|
|
.\" fields enclosed by brackets "[]" replaced with your own identifying
|
|
.\" information: Portions Copyright [yyyy] [name of copyright owner]
|
|
.\"
|
|
.\" CDDL HEADER END
|
|
.\"
|
|
.\"
|
|
.\" Copyright (c) 2016 Gvozden Nešković. All rights reserved.
|
|
.\"
|
|
.TH RAIDZ_TEST 1 "Aug 24, 2020" OpenZFS
|
|
|
|
.SH NAME
|
|
\fBraidz_test\fR \- raidz implementation verification and benchmarking tool
|
|
.SH SYNOPSIS
|
|
.LP
|
|
.BI "raidz_test <options>"
|
|
.SH DESCRIPTION
.LP
This manual page briefly documents the \fBraidz_test\fR command.
.LP
The purpose of this tool is to run all supported raidz implementations and
verify the results of all methods. The tool also contains a parameter sweep
option in which all parameters affecting a RAIDZ block are verified (such as
ashift size, data offset, and data size).
The tool also supports a benchmarking mode using the \-B option.
.SH OPTIONS
.HP
.BI "\-h" ""
.IP
Print a help summary.
.HP
.BI "\-a" " ashift (default: 9)"
.IP
Ashift value.
.HP
.BI "\-o" " zio_off_shift" " (default: 0)"
.IP
Zio offset for the raidz block. The offset value is 1 << (zio_off_shift).
.HP
.BI "\-d" " raidz_data_disks" " (default: 8)"
.IP
Number of raidz data disks to use. Additional disks for parity will be used
during testing.
.HP
.BI "\-s" " zio_size_shift" " (default: 19)"
.IP
Size of data for the raidz block. The size is 1 << (zio_size_shift).
.HP
.BI "\-r" " reflow_offset" " (default: uint max)"
.IP
Set the raidz expansion offset. The expanded raidz map allocation function
will produce different map configurations depending on this value.
.HP
.BI "\-S(weep)"
.IP
Sweep the parameter space while verifying the raidz implementations. This
option will exhaust most of the valid values for the \-a, \-o, \-d, and \-s
options. Runtimes using this option will be long.
.HP
.BI "\-t(imeout)"
.IP
Wall time for the sweep test in seconds. The actual runtime could be longer.
.HP
.BI "\-B(enchmark)"
.IP
This option starts the benchmark mode. All implementations are benchmarked
using increasing per-disk data sizes. Results are given as throughput per
disk, measured in MiB/s.
.HP
.BI "\-e(xpansion)"
.IP
Use the expanded raidz map allocation function.
.HP
.BI "\-v(erbose)"
.IP
Increase verbosity.
.HP
.BI "\-T(est the test)"
.IP
Debugging option. When this option is specified, the tool is expected to
fail all tests. This checks that the tests would properly verify
bit-exactness.
.HP
.BI "\-D(ebug)"
.IP
Debugging option. Specify to attach gdb when SIGSEGV or SIGABRT is received.
.SH "SEE ALSO"
.BR "ztest (1)"
.SH "AUTHORS"
vdev_raidz, created for OpenZFS by Gvozden Nešković <neskovic@gmail.com>