Distributed Spare (dRAID) Feature
This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID.  This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`.  No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>

Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon-separated values. This
allows administrators to fully specify a layout for either performance
or capacity reasons.  The supported options include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[<parity>]     - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)
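
For example, a double-parity pool with 8 data devices per group, 68
children, and two distributed spares (matching the `zpool status`
output shown below) could be created with a command along these lines,
where the 68 device paths are placeholders:

    zpool create tank draid2:8d:68c:2s <68 vdevs...>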

Abbreviated example `zpool status` output for a 68-disk dRAID pool
with two distributed spares using special allocation classes.

```
  pool: tank
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
    spares
      draid2-0-0          INUSE     currently in use
      draid2-0-1          AVAIL
```
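
A distributed spare is activated with the same `zpool replace` syntax
used for conventional hot spares, which allows all of the dRAID vdev's
children to participate in the rebuild. As a sketch, the INUSE spare
shown above would result from a command such as:

    zpool replace tank U27 draid2-0-0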

When adding test coverage for the new dRAID vdev type, the following
options were added to the ztest command.  These options are leveraged
by zloop.sh to test a wide range of dRAID configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)
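
For example, a single ztest run exercising a double-parity dRAID layout
with 8 data drives per group and 2 distributed spares could be launched
as follows (a sketch using only the options listed above; all other
parameters take their defaults):

    ztest -K draid -D 8 -R 2 -S 2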

The zpool_create, zpool_import, redundancy, replacement, and fault
test groups have all been updated to provide test coverage for the
dRAID feature.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102

#!/bin/ksh -p
#
# CDDL HEADER START
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
# CDDL HEADER END
#

#
# Copyright (c) 2019 by Tim Chase. All rights reserved.
# Copyright (c) 2019 Lawrence Livermore National Security, LLC.
#

. $STF_SUITE/include/libtest.shlib
. $STF_SUITE/tests/functional/trim/trim.kshlib
. $STF_SUITE/tests/functional/trim/trim.cfg

#
# DESCRIPTION:
#       Verify manual trim pool data integrity.
#
# STRATEGY:
#       1. Create a pool on sparse file vdevs to trim.
#       2. Generate some interesting pool data which can be trimmed.
#       3. Manually trim the pool.
#       4. Verify trim IOs of the expected type were issued for the pool.
#       5. Verify data integrity of the pool after trim.
#       6. Repeat the test for striped, mirrored, RAIDZ, and dRAID pools.

verify_runnable "global"

log_assert "Run 'zpool trim' and verify pool data integrity"

function cleanup
{
        if poolexists $TESTPOOL; then
                destroy_pool $TESTPOOL
        fi

        log_must rm -f $TRIM_VDEVS

        log_must set_tunable64 TRIM_EXTENT_BYTES_MIN $trim_extent_bytes_min
        log_must set_tunable64 TRIM_TXG_BATCH $trim_txg_batch
}
log_onexit cleanup

# Minimum trim size is decreased to verify all trim sizes.
typeset trim_extent_bytes_min=$(get_tunable TRIM_EXTENT_BYTES_MIN)
log_must set_tunable64 TRIM_EXTENT_BYTES_MIN 4096

# Reduced TRIM_TXG_BATCH to make trimming more frequent.
typeset trim_txg_batch=$(get_tunable TRIM_TXG_BATCH)
log_must set_tunable64 TRIM_TXG_BATCH 8

for type in "" "mirror" "raidz" "draid"; do
        log_must truncate -s 1G $TRIM_VDEVS

        log_must zpool create -f $TESTPOOL $type $TRIM_VDEVS

        # Add and remove data from the pool in a random fashion in order
        # to generate a variety of interesting ranges to be manually trimmed.
        for n in {0..10}; do
                dir="/$TESTPOOL/trim-$((RANDOM % 5))"
                filesize=$((4096 + ((RANDOM * 691) % 131072) ))
                log_must rm -rf $dir
                log_must fill_fs $dir 10 10 $filesize 1 R
                zpool sync
        done
        log_must du -hs /$TESTPOOL

        log_must timeout 120 zpool trim -w $TESTPOOL
        verify_trim_io $TESTPOOL "ind" 10

        verify_pool $TESTPOOL

        log_must zpool destroy $TESTPOOL
        log_must rm -f $TRIM_VDEVS
done

log_pass "Manual trim successfully validated"