Mirror of https://git.proxmox.com/git/mirror_zfs.git (synced 2025-05-24 23:45:00 +03:00)

This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID. This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`. No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>

Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon separated values. This
allows administrators to fully specify a layout for either performance
or capacity reasons. The supported options include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[parity]       - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)

Abbreviated example `zpool status` output for a 68 disk dRAID pool
with two distributed spares using special allocation classes.

```
  pool: tank
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
    spares
      draid2-0-0          INUSE     currently in use
      draid2-0-1          AVAIL
```

When adding test coverage for the new dRAID vdev type the following
options were added to the ztest command. These options are leveraged
by zloop.sh to test a wide range of dRAID configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)

The zpool_create, zpool_import, redundancy, replacement and fault
test groups have all been updated to provide test coverage for the
dRAID feature.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102
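For illustration, here is a minimal sketch of creating the 68-disk
layout shown in the example status output, backed by sparse file
vdevs. The pool name, file paths, and sizes are assumptions made for
the example, not part of the commit.

```
# A minimal sketch, assuming sparse file vdevs under /var/tmp; the
# paths, sizes, and pool name are illustrative.
truncate -s 4G /var/tmp/draid-disk-{0..67}

# draid2:8d:68c:2s = double parity, 8 data devices per group,
# 68 children, 2 distributed hot spares (matching the status above).
sudo zpool create tank draid2:8d:68c:2s /var/tmp/draid-disk-{0..67}
sudo zpool status tank
```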
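The new ztest flags can be combined along the same lines; a hedged
example invocation follows, with the flag values chosen purely for
illustration.

```
# Sketch: ask ztest for a dRAID layout with double parity, 8 data
# drives per group, and 2 distributed spares (values illustrative).
ztest -K draid -R 2 -D 8 -S 2
```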
85 lines
2.3 KiB
Bash
Executable File
#!/bin/ksh -p
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#

#
# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#

#
# Copyright (c) 2013 by Delphix. All rights reserved.
#

. $STF_SUITE/include/libtest.shlib
. $STF_SUITE/tests/functional/redundancy/redundancy.kshlib
#
# DESCRIPTION:
#	A raidz3 pool can withstand 3 failing or missing devices.
#
# STRATEGY:
#	1. Create N (4 or 5) virtual disk files.
#	2. Create a raidz3 pool based on the virtual disk files.
#	3. Fill the filesystem with directories and files.
#	4. Record the checksum of every file and directory.
#	5. Damage at most three of the virtual disk files.
#	6. Verify the data is correct, proving that raidz3 can withstand
#	   3 failing devices.
#
verify_runnable "global"

log_assert "Verify raidz3 pool can withstand three devices failing."
log_onexit cleanup

# Create a raidz3 pool backed by 4 or 5 virtual disk files and
# populate it with verifiable data.
typeset -i cnt=$(random_int_between 4 5)
setup_test_env $TESTPOOL raidz3 $cnt
#
# Inject data corruption errors for raidz3 pool
#
for i in 1 2 3; do
	damage_devs $TESTPOOL $i "label"
	log_must is_data_valid $TESTPOOL
	log_must clear_errors $TESTPOOL
done

#
# Inject bad device errors for raidz3 pool
#
for i in 1 2 3; do
	damage_devs $TESTPOOL $i
	log_must is_data_valid $TESTPOOL
	log_must recover_bad_missing_devs $TESTPOOL $i
done

#
# Inject missing device errors for raidz3 pool
#
for i in 1 2 3; do
	remove_devs $TESTPOOL $i
	log_must is_data_valid $TESTPOOL
	log_must recover_bad_missing_devs $TESTPOOL $i
done

log_pass "raidz3 pool can withstand three failing devices."
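As a usage note, a script like this is normally run through the ZFS
Test Suite wrapper rather than invoked directly; the test name below
is an assumption based on the redundancy test group mentioned in the
commit message.

```
# Sketch (test name assumed): run a single redundancy test via the
# zfs-tests.sh wrapper from a built OpenZFS source tree.
./scripts/zfs-tests.sh -t redundancy_003_pos
```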