#!/usr/bin/env bash

#
# CDDL HEADER START
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
# CDDL HEADER END
#

#
# Copyright (c) 2015 by Delphix. All rights reserved.
# Copyright (C) 2016 Lawrence Livermore National Security, LLC.
# Copyright (c) 2017, Intel Corporation.
#

BASE_DIR=${0%/*}
SCRIPT_COMMON=common.sh
if [[ -f "${BASE_DIR}/${SCRIPT_COMMON}" ]]; then
. "${BASE_DIR}/${SCRIPT_COMMON}"
else
echo "Missing helper script ${SCRIPT_COMMON}" && exit 1
fi

# shellcheck disable=SC2034
PROG=zloop.sh
GDB=${GDB:-gdb}

DEFAULTWORKDIR=/var/tmp
DEFAULTCOREDIR=/var/tmp/zloop

function usage
{
cat >&2 <<EOF

$0 [-hl] [-c <dump directory>] [-f <vdev directory>]
[-m <max core dumps>] [-s <vdev size>] [-t <timeout>]
[-I <max iterations>] [-- [extra ztest parameters]]

This script runs ztest repeatedly with randomized arguments.
If a crash is encountered, the ztest logs, any associated
vdev files, and core file (if one exists) are moved to the
output directory ($DEFAULTCOREDIR by default). Any options
after the -- end-of-options marker will be passed to ztest.

Options:
-c Specify a core dump directory to use.
-f Specify working directory for ztest vdev files.
-h Print this help message.
-l Create 'ztest.core.N' symlink to core directory.
-m Max number of core dumps to allow before exiting.
-s Size of vdev devices.
-t Total time to loop for, in seconds. If not provided,
zloop runs forever.
-I Max number of iterations to loop before exiting.

EOF
}
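
# Run a command and abort the whole zloop run if it fails.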
function or_die
{
if ! "$@"; then
echo "Command failed: $*"
exit 1
fi
}
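
# Pick a glob that matches core dump files on this platform. On Linux the
# glob is derived from /proc/sys/kernel/core_pattern; if that pattern cannot
# be turned into a usable glob (e.g. cores are piped to a helper), fall back
# to a plain "core" pattern.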
case $(uname) in
FreeBSD)
coreglob="z*.core"
;;
Linux)
# core file helpers
read -r origcorepattern </proc/sys/kernel/core_pattern
coreglob="$(grep -E -o '^([^|%[:space:]]*)' /proc/sys/kernel/core_pattern)*"

if [[ $coreglob = "*" ]]; then
echo "Setting core file pattern..."
echo "core" > /proc/sys/kernel/core_pattern
coreglob="$(grep -E -o '^([^|%[:space:]]*)' \
/proc/sys/kernel/core_pattern)*"
fi
;;
*)
exit 1
;;
esac
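
# Print the oldest core dump file matching $coreglob, if any exist.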
function core_file
{
# shellcheck disable=SC2012,SC2086
ls -tr1 $coreglob 2>/dev/null | head -1
}
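
# Report which binary ($ZTEST or $ZDB) generated the given core file, based
# on gdb's "Core was generated by" banner.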
function core_prog
{
# shellcheck disable=SC2154
prog=$ZTEST
core_id=$($GDB --batch -c "$1" | grep "Core was generated by" | \
tr \' ' ')
if [[ "$core_id" == *"zdb "* ]]; then
# shellcheck disable=SC2154
prog=$ZDB
fi
printf "%s" "$prog"
}
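
# Archive the ztest logs, zdb output, vdev files, and any core dump from a
# failed run into a timestamped directory under $coredir, then stop looping
# once the -m core limit (if any) has been reached.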
function store_core
{
core="$(core_file)"
if [[ $ztrc -ne 0 ]] || [[ -f "$core" ]]; then
df -h "$workdir" >>ztest.out
coreid=$(date "+zloop-%y%m%d-%H%M%S")
foundcrashes=$((foundcrashes + 1))

# zdb debugging
zdbcmd="$ZDB -U $workdir/zpool.cache -dddMmDDG ztest"
zdbdebug=$($zdbcmd 2>&1)
echo -e "$zdbcmd\n" >>ztest.zdb
echo "$zdbdebug" >>ztest.zdb

dest=$coredir/$coreid
or_die mkdir -p "$dest/vdev"

if [[ $symlink -ne 0 ]]; then
or_die ln -sf "$dest" "ztest.core.${foundcrashes}"
fi

echo "*** ztest crash found - moving logs to $dest"

or_die mv ztest.history ztest.zdb ztest.out "$dest/"
or_die mv "$workdir/"ztest* "$dest/vdev/"

if [[ -e "$workdir/zpool.cache" ]]; then
or_die mv "$workdir/zpool.cache" "$dest/vdev/"
fi

# check for core
if [[ -f "$core" ]]; then
coreprog=$(core_prog "$core")
coredebug=$($GDB --batch --quiet \
-ex "set print thread-events off" \
-ex "printf \"*\n* Backtrace \n*\n\"" \
-ex "bt" \
-ex "printf \"*\n* Libraries \n*\n\"" \
-ex "info sharedlib" \
-ex "printf \"*\n* Threads (full) \n*\n\"" \
-ex "info threads" \
-ex "printf \"*\n* Backtraces \n*\n\"" \
-ex "thread apply all bt" \
-ex "printf \"*\n* Backtraces (full) \n*\n\"" \
-ex "thread apply all bt full" \
-ex "quit" "$coreprog" "$core" 2>&1 | \
grep -v "New LWP")

# Dump core + logs to stored directory
echo "$coredebug" >>"$dest/ztest.gdb"
or_die mv "$core" "$dest/"

# Record info in cores logfile
echo "*** core @ $coredir/$coreid/$core:" | \
tee -a ztest.cores
fi

if [[ $coremax -gt 0 ]] &&
[[ $foundcrashes -ge $coremax ]]; then
echo "exiting... max $coremax allowed cores"
exit 1
else
echo "continuing..."
fi
fi
}

# parse arguments
# expected format: zloop [-t timeout] [-c coredir] [-- extra ztest args]
coredir=$DEFAULTCOREDIR
basedir=$DEFAULTWORKDIR
rundir="zloop-run"
timeout=0
size="512m"
coremax=0
symlink=0
iterations=0
while getopts ":ht:m:I:s:c:f:l" opt; do
case $opt in
t ) [[ $OPTARG -gt 0 ]] && timeout=$OPTARG ;;
m ) [[ $OPTARG -gt 0 ]] && coremax=$OPTARG ;;
I ) [[ -n $OPTARG ]] && iterations=$OPTARG ;;
s ) [[ -n $OPTARG ]] && size=$OPTARG ;;
c ) [[ -n $OPTARG ]] && coredir=$OPTARG ;;
f ) [[ -n $OPTARG ]] && basedir=$(readlink -f "$OPTARG") ;;
l ) symlink=1 ;;
h ) usage
exit 2
;;
* ) echo "Invalid argument: -$OPTARG";
usage
exit 1
esac
done
# pass remaining arguments on to ztest
shift $((OPTIND - 1))

# enable core dumps
ulimit -c unlimited
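# make sanitizer failures abort and keep core dumps so that ASAN/UBSAN
# instrumented builds are caught like ordinary crashes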
export ASAN_OPTIONS=abort_on_error=true:halt_on_error=true:allocator_may_return_null=true:disable_coredump=false:detect_stack_use_after_return=true
export UBSAN_OPTIONS=abort_on_error=true:halt_on_error=true:print_stacktrace=true
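
# refuse to start if a leftover core dump is already sitting in the cwd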
if [[ -f "$(core_file)" ]]; then
echo -n "There's a core dump here you might want to look at first... "
core_file
echo
exit 1
fi

if [[ ! -d $coredir ]]; then
echo "core dump directory ($coredir) does not exist, creating it."
or_die mkdir -p "$coredir"
fi

if [[ ! -w $coredir ]]; then
echo "core dump directory ($coredir) is not writable."
exit 1
fi

or_die rm -f ztest.history ztest.zdb ztest.cores

ztrc=0 # ztest return value
foundcrashes=0 # number of crashes found so far
starttime=$(date +%s)
curtime=$starttime
iteration=0

# if no timeout was specified, loop forever.
while (( timeout == 0 )) || (( curtime <= (starttime + timeout) )); do
if (( iterations > 0 )) && (( iteration++ == iterations )); then
break
fi
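
# base ztest options applied to every run (extra verbosity and debug output)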
zopt="-G -VVVVV"

# start each run with an empty directory
workdir="$basedir/$rundir"
or_die rm -rf "$workdir"
or_die mkdir "$workdir"

# ashift is chosen randomly: either 9 or 12
align=$(((RANDOM % 2) * 3 + 9))

# choose parity value
parity=$(((RANDOM % 3) + 1))
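
# dRAID geometry defaults; overridden below when a dRAID layout is chosen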
draid_data=0
draid_spares=0

# randomly use special classes
class="special=random"

# choose between four types of configs
# (basic, raidz mix, raidz expansion, and draid mix)
case $((RANDOM % 4)) in

# basic mirror configuration
0) parity=1
mirrors=2
raid_children=0
vdevs=2
raid_type="raidz"
;;

# fully randomized mirror/raidz (sans dRAID)
1) mirrors=$(((RANDOM % 3) * 1))
raid_children=$((((RANDOM % 9) + parity + 1) * (RANDOM % 2)))
vdevs=$(((RANDOM % 3) + 3))
raid_type="raidz"
;;

# randomized raidz expansion (one top-level raidz vdev)
2) mirrors=0
vdevs=1
# derive initial raidz disk count based on parity choice
# P1: 3 - 7 disks
# P2: 5 - 9 disks
# P3: 7 - 11 disks
raid_children=$(((RANDOM % 5) + (parity * 2) + 1))

# 1/3 of the time use a dedicated '-X' raidz expansion test
if [[ $((RANDOM % 3)) -eq 0 ]]; then
zopt="$zopt -X -t 16"
raid_type="raidz"
else
raid_type="eraidz"
fi
;;

# fully randomized dRAID (sans mirror/raidz)
3) mirrors=0
draid_data=$(((RANDOM % 8) + 3))
draid_spares=$(((RANDOM % 2) + parity))
stripe=$((draid_data + parity))
extra=$((draid_spares + (RANDOM % 4)))
raid_children=$(((((RANDOM % 4) + 1) * stripe) + extra))
vdevs=$((RANDOM % 3))
raid_type="draid"
;;
*)
# avoid shellcheck SC2249
;;
esac
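
# translate the chosen layout into ztest command-line options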
zopt="$zopt -K $raid_type"
zopt="$zopt -m $mirrors"
zopt="$zopt -r $raid_children"
zopt="$zopt -D $draid_data"
zopt="$zopt -S $draid_spares"
zopt="$zopt -R $parity"
zopt="$zopt -v $vdevs"
zopt="$zopt -a $align"
zopt="$zopt -C $class"
zopt="$zopt -s $size"
zopt="$zopt -f $workdir"
Trim excess shellcheck annotations. Widen to all non-Korn scripts
Before, make shellcheck checked
scripts/{commitcheck,make_gitrev,man-dates,paxcheck,zfs-helpers,zfs,
zfs-tests,zimport,zloop}.sh
cmd/zed/zed.d/{{all-debug,all-syslog,data-notify,generic-notify,
resilver_finish-start-scrub,scrub_finish-notify,
statechange-led,statechange-notify,trim_finish-notify,
zed-functions}.sh,history_event-zfs-list-cacher.sh.in}
cmd/zpool/zpool.d/{dm-deps,iostat,lsblk,media,ses,smart,upath}
now it also checks
contrib/dracut/{02zfsexpandknowledge/module-setup,
90zfs/{export-zfs,parse-zfs,zfs-needshutdown,
zfs-load-key,zfs-lib,module-setup,
mount-zfs,zfs-generator}}.sh.in
cmd/zed/zed.d/{pool_import-led,vdev_attach-led,
resilver_finish-notify,vdev_clear-led}.sh
contrib/initramfs/{zfsunlock,hooks/zfs.in,scripts/local-top/zfs}
tests/zfs-tests/tests/perf/scripts/prefetch_io.sh
scripts/common.sh.in
contrib/bpftrace/zfs-trace.sh
autogen.sh
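As a usage note, any of the newly checked scripts can also be
spot-checked locally with a plain shellcheck invocation (the
authoritative file list and options come from the Makefile's shellcheck
target, so this is only an approximation):
```
shellcheck --severity=warning contrib/initramfs/zfsunlock
shellcheck --severity=warning autogen.sh
```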
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Closes #12042
cmd="$ZTEST $zopt $*"
echo "$(date '+%m/%d %T') $cmd" | tee -a ztest.history ztest.out
$cmd >>ztest.out 2>&1
# capture ztest's exit status before any other command overwrites $?
ztrc=$?
# copy the '===' summary lines and any WARNINGs from the run output into the history
grep -E '===|WARNING' ztest.out >>ztest.history
store_core
# refresh the current time used by the loop's timeout check
curtime=$(date +%s)
done
echo "zloop finished, $foundcrashes crashes found"
# restore core pattern.
case $(uname) in
Linux)
echo "$origcorepattern" > /proc/sys/kernel/core_pattern
;;
*)
;;
esac
uptime >>ztest.out
# propagate failure if any of the ztest runs crashed
if [[ $foundcrashes -gt 0 ]]; then
exit 1
fi