#!/bin/ksh -p
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#
# Copyright (c) 2012, 2015 by Delphix. All rights reserved.
#
. ${STF_TOOLS}/include/logapi.shlib
# Determine if this is a Linux test system
#
# Return 0 if the platform is Linux, 1 otherwise
function is_linux
{
if [[ $($UNAME -o) == "GNU/Linux" ]]; then
return 0
else
return 1
fi
}
# Determine whether a dataset is mounted
#
# $1 dataset name
# $2 filesystem type; optional - defaulted to zfs
#
# Return 0 if dataset is mounted; 1 if unmounted; 2 on error
function ismounted
{
typeset fstype=$2
[[ -z $fstype ]] && fstype=zfs
typeset out dir name ret
case $fstype in
zfs)
if [[ "$1" == "/"* ]] ; then
for out in $($ZFS mount | $AWK '{print $2}'); do
[[ $1 == $out ]] && return 0
done
else
for out in $($ZFS mount | $AWK '{print $1}'); do
[[ $1 == $out ]] && return 0
done
fi
;;
ufs|nfs)
out=$($DF -F $fstype $1 2>/dev/null)
ret=$?
(($ret != 0)) && return $ret
dir=${out%%\(*}
dir=${dir%% *}
name=${out##*\(}
name=${name%%\)*}
name=${name%% *}
[[ "$1" == "$dir" || "$1" == "$name" ]] && return 0
;;
ext2)
out=$($DF -t $fstype $1 2>/dev/null)
return $?
;;
zvol)
if [[ -L "$ZVOL_DEVDIR/$1" ]]; then
link=$(readlink -f $ZVOL_DEVDIR/$1)
[[ -n "$link" ]] && \
$MOUNT | $GREP -q "^$link" && \
return 0
fi
;;
esac
return 1
}
# Return 0 if a dataset is mounted; 1 otherwise
#
# $1 dataset name
# $2 filesystem type; optional - defaulted to zfs
function mounted
{
ismounted $1 $2
(($? == 0)) && return 0
return 1
}
# Return 0 if a dataset is unmounted; 1 otherwise
#
# $1 dataset name
# $2 filesystem type; optional - defaulted to zfs
function unmounted
{
ismounted $1 $2
(($? == 1)) && return 0
return 1
}
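#
# E.g. (hypothetical usage sketch; assumes the suite's standard $TESTPOOL and
# $TESTFS variables are set by the test configuration):
#
# mounted $TESTPOOL/$TESTFS && log_note "$TESTPOOL/$TESTFS is mounted"
# unmounted $TESTPOOL/$TESTFS && log_note "$TESTPOOL/$TESTFS is not mounted"
#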
# split line on ","
#
# $1 - line to split
function splitline
{
$ECHO $1 | $SED "s/,/ /g"
}
function default_setup
{
default_setup_noexit "$@"
log_pass
}
#
# Given a list of disks, setup storage pools and datasets.
#
function default_setup_noexit
{
typeset disklist=$1
typeset container=$2
typeset volume=$3
log_note begin default_setup_noexit
if is_global_zone; then
if poolexists $TESTPOOL ; then
destroy_pool $TESTPOOL
fi
[[ -d /$TESTPOOL ]] && $RM -rf /$TESTPOOL
log_note creating pool $TESTPOOL $disklist
log_must $ZPOOL create -f $TESTPOOL $disklist
else
reexport_pool
fi
$RM -rf $TESTDIR || log_unresolved Could not remove $TESTDIR
$MKDIR -p $TESTDIR || log_unresolved Could not create $TESTDIR
log_must $ZFS create $TESTPOOL/$TESTFS
log_must $ZFS set mountpoint=$TESTDIR $TESTPOOL/$TESTFS
if [[ -n $container ]]; then
$RM -rf $TESTDIR1 || \
log_unresolved Could not remove $TESTDIR1
$MKDIR -p $TESTDIR1 || \
log_unresolved Could not create $TESTDIR1
log_must $ZFS create $TESTPOOL/$TESTCTR
log_must $ZFS set canmount=off $TESTPOOL/$TESTCTR
log_must $ZFS create $TESTPOOL/$TESTCTR/$TESTFS1
log_must $ZFS set mountpoint=$TESTDIR1 \
$TESTPOOL/$TESTCTR/$TESTFS1
fi
if [[ -n $volume ]]; then
if is_global_zone ; then
log_must $ZFS create -V $VOLSIZE $TESTPOOL/$TESTVOL
block_device_wait
else
log_must $ZFS create $TESTPOOL/$TESTVOL
fi
fi
}
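#
# E.g. (hypothetical usage sketch; $DISKS and the TEST* variables come from
# the suite configuration):
#
# default_setup_noexit "$DISKS"              (pool and file system only)
# default_setup_noexit "$DISKS" "true"       (additionally create the container)
# default_setup_noexit "$DISKS" "" "true"    (additionally create the volume)
#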
#
# Given a list of disks, setup a storage pool, file system and
# a container.
#
function default_container_setup
{
typeset disklist=$1
default_setup "$disklist" "true"
}
#
# Given a list of disks, setup a storage pool,file system
# and a volume.
#
function default_volume_setup
{
typeset disklist=$1
default_setup "$disklist" "" "true"
}
#
# Given a list of disks, setup a storage pool,file system,
# a container and a volume.
#
function default_container_volume_setup
{
typeset disklist=$1
default_setup "$disklist" "true" "true"
}
#
# Create a snapshot on a filesystem or volume. By default, create a snapshot on
# the filesystem.
#
# $1 Existing filesystem or volume name. Default, $TESTFS
# $2 snapshot name. Default, $TESTSNAP
#
function create_snapshot
{
typeset fs_vol=${1:-$TESTFS}
typeset snap=${2:-$TESTSNAP}
[[ -z $fs_vol ]] && log_fail "Filesystem or volume's name is undefined."
[[ -z $snap ]] && log_fail "Snapshot's name is undefined."
if snapexists $fs_vol@$snap; then
log_fail "$fs_vol@$snap already exists."
fi
datasetexists $fs_vol || \
log_fail "$fs_vol must exist."
log_must $ZFS snapshot $fs_vol@$snap
}
#
# Create a clone from a snapshot, default clone name is $TESTCLONE.
#
# $1 Existing snapshot, $TESTPOOL/$TESTFS@$TESTSNAP is default.
# $2 Clone name, $TESTPOOL/$TESTCLONE is default.
#
function create_clone # snapshot clone
{
typeset snap=${1:-$TESTPOOL/$TESTFS@$TESTSNAP}
typeset clone=${2:-$TESTPOOL/$TESTCLONE}
[[ -z $snap ]] && \
log_fail "Snapshot name is undefined."
[[ -z $clone ]] && \
log_fail "Clone name is undefined."
log_must $ZFS clone $snap $clone
}
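#
# E.g. (hypothetical usage sketch; assumes the default suite datasets exist):
#
# create_snapshot $TESTPOOL/$TESTFS $TESTSNAP
# create_clone $TESTPOOL/$TESTFS@$TESTSNAP $TESTPOOL/$TESTCLONE
#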
function default_mirror_setup
{
default_mirror_setup_noexit $1 $2 $3
log_pass
}
#
# Given a pair of disks, set up a storage pool and dataset for the mirror
# @parameters: $1 the primary side of the mirror
# $2 the secondary side of the mirror
# @uses: ZPOOL ZFS TESTPOOL TESTFS
function default_mirror_setup_noexit
{
readonly func="default_mirror_setup_noexit"
typeset primary=$1
typeset secondary=$2
[[ -z $primary ]] && \
log_fail "$func: No parameters passed"
[[ -z $secondary ]] && \
log_fail "$func: No secondary partition passed"
[[ -d /$TESTPOOL ]] && $RM -rf /$TESTPOOL
log_must $ZPOOL create -f $TESTPOOL mirror $@
log_must $ZFS create $TESTPOOL/$TESTFS
log_must $ZFS set mountpoint=$TESTDIR $TESTPOOL/$TESTFS
}
#
# Create a number of mirrors.
# We create a number ($1) of 2-way mirrors using the pairs of disks named
# on the command line. These mirrors are *not* mounted
# @parameters: $1 the number of mirrors to create
# $... the devices to use to create the mirrors on
# @uses: ZPOOL ZFS TESTPOOL
function setup_mirrors
{
typeset -i nmirrors=$1
shift
while ((nmirrors > 0)); do
log_must test -n "$1" -a -n "$2"
[[ -d /$TESTPOOL$nmirrors ]] && $RM -rf /$TESTPOOL$nmirrors
log_must $ZPOOL create -f $TESTPOOL$nmirrors mirror $1 $2
shift 2
((nmirrors = nmirrors - 1))
done
}
#
# Create a number of raidz pools.
# We create a number ($1) of two-disk raidz pools using the pairs of disks
# named on the command line. These pools are *not* mounted
# @parameters: $1 the number of pools to create
# $... the devices to use to create the pools on
# @uses: ZPOOL ZFS TESTPOOL
function setup_raidzs
{
typeset -i nraidzs=$1
shift
while ((nraidzs > 0)); do
log_must test -n "$1" -a -n "$2"
[[ -d /$TESTPOOL$nraidzs ]] && $RM -rf /$TESTPOOL$nraidzs
log_must $ZPOOL create -f $TESTPOOL$nraidzs raidz $1 $2
shift 2
((nraidzs = nraidzs - 1))
done
}
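#
# E.g. (hypothetical usage sketch; the disk names are illustrative only):
#
# setup_mirrors 2 sdb sdc sdd sde    (creates $TESTPOOL2 from sdb+sdc and
#                                     $TESTPOOL1 from sdd+sde)
# setup_raidzs 1 sdf sdg             (creates $TESTPOOL1 as a two-disk raidz)
#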
#
# Destroy the configured testpool mirrors.
# the mirrors are of the form ${TESTPOOL}{number}
# @uses: ZPOOL ZFS TESTPOOL
function destroy_mirrors
{
default_cleanup_noexit
log_pass
}
#
# Given a minimum of two disks, set up a storage pool and dataset for the raid-z
# $1 the list of disks
#
function default_raidz_setup
{
typeset disklist="$*"
disks=(${disklist[*]})
if [[ ${#disks[*]} -lt 2 ]]; then
log_fail "A raid-z requires a minimum of two disks."
fi
[[ -d /$TESTPOOL ]] && $RM -rf /$TESTPOOL
log_must $ZPOOL create -f $TESTPOOL raidz $1 $2 $3
log_must $ZFS create $TESTPOOL/$TESTFS
log_must $ZFS set mountpoint=$TESTDIR $TESTPOOL/$TESTFS
log_pass
}
#
# Common function used to cleanup storage pools and datasets.
#
# Invoked at the start of the test suite to ensure the system
# is in a known state, and also at the end of each set of
# sub-tests to ensure errors from one set of tests don't
# impact the execution of the next set.
function default_cleanup
{
default_cleanup_noexit
log_pass
}
function default_cleanup_noexit
{
typeset exclude=""
typeset pool=""
#
# Destroying the pool will also destroy any
# filesystems it contains.
#
if is_global_zone; then
$ZFS unmount -a > /dev/null 2>&1
[[ -z "$KEEP" ]] && KEEP="rpool"
exclude=`eval $ECHO \"'(${KEEP})'\"`
ALL_POOLS=$($ZPOOL list -H -o name \
| $GREP -v "$NO_POOLS" | $EGREP -v "$exclude")
# Here, we loop through the pools we're allowed to
# destroy, only destroying them if it's safe to do
# so.
while [ ! -z ${ALL_POOLS} ]
do
for pool in ${ALL_POOLS}
do
if safe_to_destroy_pool $pool ;
then
destroy_pool $pool
fi
ALL_POOLS=$($ZPOOL list -H -o name \
| $GREP -v "$NO_POOLS" \
| $EGREP -v "$exclude")
done
done
$ZFS mount -a
else
typeset fs=""
for fs in $($ZFS list -H -o name \
| $GREP "^$ZONE_POOL/$ZONE_CTR[01234]/"); do
datasetexists $fs && \
log_must $ZFS destroy -Rf $fs
done
# Cleanup is needed here to avoid leaving garbage directories behind.
for fs in $($ZFS list -H -o name); do
[[ $fs == /$ZONE_POOL ]] && continue
[[ -d $fs ]] && log_must $RM -rf $fs/*
done
#
# Reset the $ZONE_POOL/$ZONE_CTR[01234] file systems property to
# the default value
#
for fs in $($ZFS list -H -o name); do
if [[ $fs == $ZONE_POOL/$ZONE_CTR[01234] ]]; then
log_must $ZFS set reservation=none $fs
log_must $ZFS set recordsize=128K $fs
log_must $ZFS set mountpoint=/$fs $fs
typeset enc=""
enc=$(get_prop encryption $fs)
if [[ $? -ne 0 ]] || [[ -z "$enc" ]] || \
[[ "$enc" == "off" ]]; then
log_must $ZFS set checksum=on $fs
fi
log_must $ZFS set compression=off $fs
log_must $ZFS set atime=on $fs
log_must $ZFS set devices=off $fs
log_must $ZFS set exec=on $fs
log_must $ZFS set setuid=on $fs
log_must $ZFS set readonly=off $fs
log_must $ZFS set snapdir=hidden $fs
log_must $ZFS set aclmode=groupmask $fs
log_must $ZFS set aclinherit=secure $fs
fi
done
fi
[[ -d $TESTDIR ]] && \
log_must $RM -rf $TESTDIR
disk1=${DISKS%% *}
if is_mpath_device $disk1; then
delete_partitions
fi
}
#
# Common function used to cleanup storage pools, file systems
# and containers.
#
function default_container_cleanup
{
if ! is_global_zone; then
reexport_pool
fi
ismounted $TESTPOOL/$TESTCTR/$TESTFS1
[[ $? -eq 0 ]] && \
log_must $ZFS unmount $TESTPOOL/$TESTCTR/$TESTFS1
datasetexists $TESTPOOL/$TESTCTR/$TESTFS1 && \
log_must $ZFS destroy -R $TESTPOOL/$TESTCTR/$TESTFS1
datasetexists $TESTPOOL/$TESTCTR && \
log_must $ZFS destroy -Rf $TESTPOOL/$TESTCTR
[[ -e $TESTDIR1 ]] && \
log_must $RM -rf $TESTDIR1 > /dev/null 2>&1
default_cleanup
}
#
# Common function used to clean up a snapshot of a file system or volume. By
# default, delete the file system's snapshot.
#
# $1 snapshot name
#
function destroy_snapshot
{
typeset snap=${1:-$TESTPOOL/$TESTFS@$TESTSNAP}
if ! snapexists $snap; then
log_fail "'$snap' does not existed."
fi
#
# The value reported by 'get_prop' is not the real mountpoint when the
# snapshot is unmounted, so first check that this snapshot is currently
# mounted on the system.
#
typeset mtpt=""
if ismounted $snap; then
mtpt=$(get_prop mountpoint $snap)
(($? != 0)) && \
log_fail "get_prop mountpoint $snap failed."
fi
log_must $ZFS destroy $snap
[[ $mtpt != "" && -d $mtpt ]] && \
log_must $RM -rf $mtpt
}
#
# Common function used to cleanup clone.
#
# $1 clone name
#
function destroy_clone
{
typeset clone=${1:-$TESTPOOL/$TESTCLONE}
if ! datasetexists $clone; then
log_fail "'$clone' does not existed."
fi
# For the same reason as in destroy_snapshot
typeset mtpt=""
if ismounted $clone; then
mtpt=$(get_prop mountpoint $clone)
(($? != 0)) && \
log_fail "get_prop mountpoint $clone failed."
fi
log_must $ZFS destroy $clone
[[ $mtpt != "" && -d $mtpt ]] && \
log_must $RM -rf $mtpt
}
# Return 0 if a snapshot exists; $? otherwise
#
# $1 - snapshot name
function snapexists
{
$ZFS list -H -t snapshot "$1" > /dev/null 2>&1
return $?
}
#
# Set a property to a certain value on a dataset.
# Sets a property of the dataset to the value as passed in.
# @param:
# $1 dataset whose property is being set
# $2 property to set
# $3 value to set property to
# @return:
# 0 if the property could be set.
# non-zero otherwise.
# @use: ZFS
#
function dataset_setprop
{
typeset fn=dataset_setprop
if (($# < 3)); then
log_note "$fn: Insufficient parameters (need 3, had $#)"
return 1
fi
typeset output=
output=$($ZFS set $2=$3 $1 2>&1)
typeset rv=$?
if ((rv != 0)); then
log_note "Setting property on $1 failed."
log_note "property $2=$3"
log_note "Return Code: $rv"
log_note "Output: $output"
return $rv
fi
return 0
}
#
# Assign suite defined dataset properties.
# This function is used to apply the suite's defined default set of
# properties to a dataset.
# @parameters: $1 dataset to use
# @uses: ZFS COMPRESSION_PROP CHECKSUM_PROP
# @returns:
# 0 if the dataset has been altered.
# 1 if no pool name was passed in.
# 2 if the dataset could not be found.
# 3 if the dataset could not have its properties set.
#
function dataset_set_defaultproperties
{
typeset dataset="$1"
[[ -z $dataset ]] && return 1
typeset confset=
typeset -i found=0
for confset in $($ZFS list); do
if [[ $dataset = $confset ]]; then
found=1
break
fi
done
[[ $found -eq 0 ]] && return 2
if [[ -n $COMPRESSION_PROP ]]; then
dataset_setprop $dataset compression $COMPRESSION_PROP || \
return 3
log_note "Compression set to '$COMPRESSION_PROP' on $dataset"
fi
if [[ -n $CHECKSUM_PROP ]]; then
dataset_setprop $dataset checksum $CHECKSUM_PROP || \
return 3
log_note "Checksum set to '$CHECKSUM_PROP' on $dataset"
fi
return 0
}
#
# Check a numeric assertion
# @parameter: $@ the assertion to check
# @output: big loud notice if assertion failed
# @use: log_fail
#
function assert
{
(($@)) || log_fail "$@"
}
#
# Function to format partition size of a disk
# Given a disk cxtxdx reduces all partitions
# to 0 size
#
function zero_partitions #<whole_disk_name>
{
typeset diskname=$1
typeset i
if is_linux; then
log_must $FORMAT $DEV_DSKDIR/$diskname -s -- mklabel gpt
else
for i in 0 1 3 4 5 6 7
do
set_partition $i "" 0mb $diskname
done
fi
}
#
# Given a slice, size and disk, this function
# formats the slice to the specified size.
# Size should be specified with units as per
# the `format` command requirements eg. 100mb 3gb
#
# NOTE: This entire interface is problematic for the Linux parted utility
# which requires the end of the partition to be specified. It would be
# best to retire this interface and replace it with something more flexible.
# At the moment a best effort is made.
#
function set_partition #<slice_num> <slice_start> <size_plus_units> <whole_disk_name>
{
typeset -i slicenum=$1
typeset start=$2
typeset size=$3
typeset disk=$4
[[ -z $slicenum || -z $size || -z $disk ]] && \
log_fail "The slice, size or disk name is unspecified."
if is_linux; then
typeset size_mb=${size%%[mMgG]}
size_mb=${size_mb%%[mMgG][bB]}
if [[ ${size:1:1} == 'g' ]]; then
((size_mb = size_mb * 1024))
fi
# Create GPT partition table when setting slice 0 or
# when the device doesn't already contain a GPT label.
$FORMAT $DEV_DSKDIR/$disk -s -- print 1 >/dev/null
typeset ret_val=$?
if [[ $slicenum -eq 0 || $ret_val -ne 0 ]]; then
log_must $FORMAT $DEV_DSKDIR/$disk -s -- mklabel gpt
fi
# When no start is given align on the first cylinder.
if [[ -z "$start" ]]; then
start=1
fi
# Determine the cylinder size for the device and using
# that calculate the end offset in cylinders.
typeset -i cly_size_kb=0
cly_size_kb=$($FORMAT -m $DEV_DSKDIR/$disk -s -- \
unit cyl print | $HEAD -3 | $TAIL -1 | \
$AWK -F '[:k.]' '{print $4}')
((end = (size_mb * 1024 / cly_size_kb) + start))
log_must $FORMAT $DEV_DSKDIR/$disk -s -- \
mkpart part$slicenum ${start}cyl ${end}cyl
$BLOCKDEV --rereadpt $DEV_DSKDIR/$disk 2>/dev/null
block_device_wait
else
typeset format_file=/var/tmp/format_in.$$
$ECHO "partition" >$format_file
$ECHO "$slicenum" >> $format_file
$ECHO "" >> $format_file
$ECHO "" >> $format_file
$ECHO "$start" >> $format_file
$ECHO "$size" >> $format_file
$ECHO "label" >> $format_file
$ECHO "" >> $format_file
$ECHO "q" >> $format_file
$ECHO "q" >> $format_file
$FORMAT -e -s -d $disk -f $format_file
fi
typeset ret_val=$?
$RM -f $format_file
[[ $ret_val -ne 0 ]] && \
log_fail "Unable to format $disk slice $slicenum to $size"
return 0
}
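#
# E.g. (hypothetical usage sketch; the disk name is illustrative and on Linux
# the sizes are handled on a best-effort basis via parted, as noted above):
#
# set_partition 0 "" 100mb sdb
# set_partition 1 "$(get_endslice sdb 0)" 100mb sdb
#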
#
# Delete all partitions on all disks - this is specifically for the use of multipath
# devices which currently can only be used in the test suite as raw/un-partitioned
# devices (ie a zpool cannot be created on a whole mpath device that has partitions)
#
function delete_partitions
{
typeset -i j=1
if [[ -z $DISK_ARRAY_NUM ]]; then
DISK_ARRAY_NUM=$($ECHO ${DISKS} | $NAWK '{print NF}')
fi
if [[ -z $DISKSARRAY ]]; then
DISKSARRAY=$DISKS
fi
if is_linux; then
if (( $DISK_ARRAY_NUM == 1 )); then
while ((j < MAX_PARTITIONS)); do
$FORMAT $DEV_DSKDIR/$DISK -s rm $j > /dev/null 2>&1
if (( $? == 1 )); then
$LSBLK | $EGREP ${DISK}${SLICE_PREFIX}${j} > /dev/null
if (( $? == 1 )); then
log_note "Partitions for $DISK should be deleted"
else
log_fail "Partition for ${DISK}${SLICE_PREFIX}${j} not deleted"
fi
return 0
else
$LSBLK | $EGREP ${DISK}${SLICE_PREFIX}${j} > /dev/null
if (( $? == 0 )); then
log_fail "Partition for ${DISK}${SLICE_PREFIX}${j} not deleted"
fi
fi
((j = j+1))
done
else
for disk in `$ECHO $DISKSARRAY`; do
while ((j < MAX_PARTITIONS)); do
$FORMAT $DEV_DSKDIR/$disk -s rm $j > /dev/null 2>&1
if (( $? == 1 )); then
$LSBLK | $EGREP ${disk}${SLICE_PREFIX}${j} > /dev/null
if (( $? == 1 )); then
log_note "Partitions for $disk should be deleted"
else
log_fail "Partition for ${disk}${SLICE_PREFIX}${j} not deleted"
fi
j=7
else
$LSBLK | $EGREP ${disk}${SLICE_PREFIX}${j} > /dev/null
if (( $? == 0 )); then
log_fail "Partition for ${disk}${SLICE_PREFIX}${j} not deleted"
fi
fi
((j = j+1))
done
j=1
done
fi
fi
return 0
}
#
# Get the end cyl of the given slice
#
function get_endslice #<disk> <slice>
{
typeset disk=$1
typeset slice=$2
if [[ -z $disk || -z $slice ]] ; then
log_fail "The disk name or slice number is unspecified."
fi
if is_linux; then
endcyl=$($FORMAT -s $DEV_DSKDIR/$disk -- unit cyl print | \
$GREP "part${slice}" | \
$AWK '{print $3}' | \
$SED 's,cyl,,')
((endcyl = (endcyl + 1)))
else
disk=${disk#/dev/dsk/}
disk=${disk#/dev/rdsk/}
disk=${disk%s*}
typeset -i ratio=0
ratio=$($PRTVTOC /dev/rdsk/${disk}s2 | \
$GREP "sectors\/cylinder" | \
$AWK '{print $2}')
if ((ratio == 0)); then
return
fi
typeset -i endcyl=$($PRTVTOC -h /dev/rdsk/${disk}s2 |
$NAWK -v token="$slice" '{if ($1==token) print $6}')
((endcyl = (endcyl + 1) / ratio))
fi
echo $endcyl
}
#
# Given a size,disk and total slice number, this function formats the
# disk slices from 0 to the total slice number with the same specified
# size.
#
function partition_disk #<slice_size> <whole_disk_name> <total_slices>
{
typeset -i i=0
typeset slice_size=$1
typeset disk_name=$2
typeset total_slices=$3
typeset cyl
zero_partitions $disk_name
while ((i < $total_slices)); do
if ! is_linux; then
if ((i == 2)); then
((i = i + 1))
continue
fi
fi
set_partition $i "$cyl" $slice_size $disk_name
cyl=$(get_endslice $disk_name $i)
((i = i+1))
done
}
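#
# E.g. (hypothetical usage sketch; the disk name is illustrative; creates
# <total_slices> slices of <slice_size> each):
#
# partition_disk 200mb sdb 4
#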
#
# This function continues to write to a filenum number of files into dirnum
# number of directories until either $FILE_WRITE returns an error or the
# maximum number of files per directory have been written.
#
# Usage:
# fill_fs [destdir] [dirnum] [filenum] [bytes] [num_writes] [data]
#
# Return value: 0 on success
# non 0 on error
#
# Where :
# destdir: the directory under which everything is to be created
# dirnum: the maximum number of subdirectories to use, -1 no limit
# filenum: the maximum number of files per subdirectory
# bytes: number of bytes to write
# num_writes: number of times to write out bytes
# data: the data that will be written
#
# E.g.
# fill_fs /testdir 20 25 1024 256 0
#
# Note: bytes * num_writes equals the size of the testfile
#
function fill_fs # destdir dirnum filenum bytes num_writes data
{
typeset destdir=${1:-$TESTDIR}
typeset -i dirnum=${2:-50}
typeset -i filenum=${3:-50}
typeset -i bytes=${4:-8192}
typeset -i num_writes=${5:-10240}
typeset -i data=${6:-0}
typeset -i odirnum=1
typeset -i idirnum=0
typeset -i fn=0
typeset -i retval=0
log_must $MKDIR -p $destdir/$idirnum
while (($odirnum > 0)); do
if ((dirnum >= 0 && idirnum >= dirnum)); then
odirnum=0
break
fi
$FILE_WRITE -o create -f $destdir/$idirnum/$TESTFILE.$fn \
-b $bytes -c $num_writes -d $data
retval=$?
if (($retval != 0)); then
odirnum=0
break
fi
if (($fn >= $filenum)); then
fn=0
((idirnum = idirnum + 1))
log_must $MKDIR -p $destdir/$idirnum
else
((fn = fn + 1))
fi
done
return $retval
}
#
# Simple function to get the specified property. If unable to
# get the property, log a note and return 1.
#
# Note property is in 'parsable' format (-p)
#
function get_prop # property dataset
{
typeset prop_val
typeset prop=$1
typeset dataset=$2
prop_val=$($ZFS get -pH -o value $prop $dataset 2>/dev/null)
if [[ $? -ne 0 ]]; then
log_note "Unable to get $prop property for dataset " \
"$dataset"
return 1
fi
$ECHO $prop_val
return 0
}
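#
# E.g. (hypothetical usage sketch; 'used' is reported in bytes because of the
# parsable -p output):
#
# used=$(get_prop used $TESTPOOL/$TESTFS)
# (($? == 0)) || log_fail "get_prop used failed"
# log_note "used space: $used bytes"
#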
#
# Simple function to get the specified property of pool. If unable to
# get the property, log a note and return 1.
#
function get_pool_prop # property pool
{
typeset prop_val
typeset prop=$1
typeset pool=$2
if poolexists $pool ; then
prop_val=$($ZPOOL get $prop $pool 2>/dev/null | $TAIL -1 | \
$AWK '{print $3}')
if [[ $? -ne 0 ]]; then
log_note "Unable to get $prop property for pool " \
"$pool"
return 1
fi
else
log_note "Pool $pool not exists."
return 1
fi
$ECHO $prop_val
return 0
}
# Return 0 if a pool exists; $? otherwise
#
# $1 - pool name
function poolexists
{
typeset pool=$1
if [[ -z $pool ]]; then
log_note "No pool name given."
return 1
fi
$ZPOOL get name "$pool" > /dev/null 2>&1
return $?
}
# Return 0 if all the specified datasets exist; $? otherwise
#
# $1-n dataset name
function datasetexists
{
if (($# == 0)); then
log_note "No dataset name given."
return 1
fi
while (($# > 0)); do
$ZFS get name $1 > /dev/null 2>&1 || \
return $?
shift
done
return 0
}
# return 0 if none of the specified datasets exists, otherwise return 1.
#
# $1-n dataset name
function datasetnonexists
{
if (($# == 0)); then
log_note "No dataset name given."
return 1
fi
while (($# > 0)); do
$ZFS list -H -t filesystem,snapshot,volume $1 > /dev/null 2>&1 \
&& return 1
shift
done
return 0
}
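#
# E.g. (hypothetical usage sketch):
#
# datasetexists $TESTPOOL/$TESTFS || \
#     log_fail "$TESTPOOL/$TESTFS is missing"
# datasetnonexists $TESTPOOL/$TESTFS@$TESTSNAP && \
#     log_note "snapshot does not exist"
#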
#
# Given a mountpoint, or a dataset name, determine if it is shared.
#
# Returns 0 if shared, 1 otherwise.
#
function is_shared
{
typeset fs=$1
typeset mtpt
if is_linux; then
log_unsupported "Currently unsupported by the test framework"
return 1
fi
if [[ $fs != "/"* ]] ; then
if datasetnonexists "$fs" ; then
return 1
else
mtpt=$(get_prop mountpoint "$fs")
case $mtpt in
none|legacy|-) return 1
;;
*) fs=$mtpt
;;
esac
fi
fi
for mtpt in `$SHARE | $AWK '{print $2}'` ; do
if [[ $mtpt == $fs ]] ; then
return 0
fi
done
typeset stat=$($SVCS -H -o STA nfs/server:default)
if [[ $stat != "ON" ]]; then
log_note "Current nfs/server status: $stat"
fi
return 1
}
#
# Given a mountpoint, determine if it is not shared.
#
# Returns 0 if not shared, 1 otherwise.
#
function not_shared
{
typeset fs=$1
if is_linux; then
log_unsupported "Currently unsupported by the test framework"
return 1
fi
is_shared $fs
if (($? == 0)); then
return 1
fi
return 0
}
#
# Helper function to unshare a mountpoint.
#
function unshare_fs #fs
{
typeset fs=$1
if is_linux; then
log_unsupported "Currently unsupported by the test framework"
return 1
fi
is_shared $fs
if (($? == 0)); then
log_must $ZFS unshare $fs
fi
return 0
}
#
# Check NFS server status and trigger it online.
#
function setup_nfs_server
{
# Cannot share directory in non-global zone.
#
if ! is_global_zone; then
log_note "Cannot trigger NFS server by sharing in LZ."
return
fi
if is_linux; then
log_unsupported "Currently unsupported by the test framework"
return
fi
typeset nfs_fmri="svc:/network/nfs/server:default"
if [[ $($SVCS -Ho STA $nfs_fmri) != "ON" ]]; then
#
# Only a real share operation can bring the NFS server
# online permanently.
#
typeset dummy=/tmp/dummy
if [[ -d $dummy ]]; then
log_must $RM -rf $dummy
fi
log_must $MKDIR $dummy
log_must $SHARE $dummy
#
# Wait for the FMRI's status to reach its final state. While the
# instance is in transition an asterisk (*) is appended to its state,
# and unsharing would revert the status to 'DIS' again.
#
# Wait at least 1 second.
#
log_must $SLEEP 1
timeout=10
while [[ timeout -ne 0 && $($SVCS -Ho STA $nfs_fmri) == *'*' ]]
do
log_must $SLEEP 1
((timeout -= 1))
done
log_must $UNSHARE $dummy
log_must $RM -rf $dummy
fi
log_note "Current NFS status: '$($SVCS -Ho STA,FMRI $nfs_fmri)'"
}
#
# To verify whether calling process is in global zone
#
# Return 0 if in global zone, 1 in non-global zone
#
function is_global_zone
{
typeset cur_zone=$($ZONENAME 2>/dev/null)
if [[ $cur_zone != "global" ]]; then
return 1
fi
return 0
}
#
# Verify whether test is permitted to run from
# global zone, local zone, or both
#
# $1 zone limit, could be "global", "local", or "both"(no limit)
#
# Return 0 if permitted, otherwise exit with log_unsupported
#
function verify_runnable # zone limit
{
typeset limit=$1
[[ -z $limit ]] && return 0
if is_global_zone ; then
case $limit in
global|both)
;;
local) log_unsupported "Test is unable to run from "\
"global zone."
;;
*) log_note "Warning: unknown limit $limit - " \
"use both."
;;
esac
else
case $limit in
local|both)
;;
global) log_unsupported "Test is unable to run from "\
"local zone."
;;
*) log_note "Warning: unknown limit $limit - " \
"use both."
;;
esac
reexport_pool
fi
return 0
}
# Return 0 if the pool is created successfully or already exists; $? otherwise
# Note: In local zones, this function should return 0 silently.
#
# $1 - pool name
# $2-n - [keyword] devs_list
function create_pool #pool devs_list
{
typeset pool=${1%%/*}
shift
if [[ -z $pool ]]; then
log_note "Missing pool name."
return 1
fi
if poolexists $pool ; then
destroy_pool $pool
fi
if is_global_zone ; then
[[ -d /$pool ]] && $RM -rf /$pool
log_must $ZPOOL create -f $pool $@
fi
return 0
}
# Return 0 if the pool is destroyed successfully; $? otherwise
# Note: In local zones, this function should return 0 silently.
#
# $1 - pool name
# Destroy pool with the given parameters.
function destroy_pool #pool
{
typeset pool=${1%%/*}
typeset mtpt
if [[ -z $pool ]]; then
log_note "No pool name given."
return 1
fi
if is_global_zone ; then
if poolexists "$pool" ; then
mtpt=$(get_prop mountpoint "$pool")
# At times, syseventd activity can cause attempts to
# destroy a pool to fail with EBUSY. We retry a few
# times allowing failures before requiring the destroy
# to succeed.
typeset -i wait_time=10 ret=1 count=0
must=""
while [[ $ret -ne 0 ]]; do
$must $ZPOOL destroy -f $pool
ret=$?
[[ $ret -eq 0 ]] && break
log_note "zpool destroy failed with $ret"
[[ count++ -ge 7 ]] && must=log_must
$SLEEP $wait_time
done
[[ -d $mtpt ]] && \
log_must $RM -rf $mtpt
else
log_note "Pool does not exist. ($pool)"
return 1
fi
fi
return 0
}
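#
# E.g. (hypothetical usage sketch; the device names are illustrative only):
#
# create_pool $TESTPOOL mirror sdb sdc
# destroy_pool $TESTPOOL
#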
#
# Firstly, create a pool with 5 datasets. Then, create a single zone and
# export the 5 datasets to it. In addition, we also add a ZFS filesystem
# and a zvol device to the zone.
#
# $1 zone name
# $2 zone root directory prefix
# $3 zone ip
#
function zfs_zones_setup #zone_name zone_root zone_ip
{
typeset zone_name=${1:-$(hostname)-z}
typeset zone_root=${2:-"/zone_root"}
typeset zone_ip=${3:-"10.1.1.10"}
typeset prefix_ctr=$ZONE_CTR
typeset pool_name=$ZONE_POOL
typeset -i cntctr=5
typeset -i i=0
# Create pool and 5 container within it
#
[[ -d /$pool_name ]] && $RM -rf /$pool_name
log_must $ZPOOL create -f $pool_name $DISKS
while ((i < cntctr)); do
log_must $ZFS create $pool_name/$prefix_ctr$i
((i += 1))
done
# create a zvol
log_must $ZFS create -V 1g $pool_name/zone_zvol
block_device_wait
#
# If current system support slog, add slog device for pool
#
if verify_slog_support ; then
typeset sdevs="/var/tmp/sdev1 /var/tmp/sdev2"
log_must $MKFILE 100M $sdevs
log_must $ZPOOL add $pool_name log mirror $sdevs
fi
# this isn't supported just yet.
# Create a filesystem. In order to add this to
# the zone, it must have its mountpoint set to 'legacy'
# log_must $ZFS create $pool_name/zfs_filesystem
# log_must $ZFS set mountpoint=legacy $pool_name/zfs_filesystem
[[ -d $zone_root ]] && \
log_must $RM -rf $zone_root/$zone_name
[[ ! -d $zone_root ]] && \
log_must $MKDIR -p -m 0700 $zone_root/$zone_name
# Create zone configure file and configure the zone
#
typeset zone_conf=/tmp/zone_conf.$$
$ECHO "create" > $zone_conf
$ECHO "set zonepath=$zone_root/$zone_name" >> $zone_conf
$ECHO "set autoboot=true" >> $zone_conf
i=0
while ((i < cntctr)); do
$ECHO "add dataset" >> $zone_conf
$ECHO "set name=$pool_name/$prefix_ctr$i" >> \
$zone_conf
$ECHO "end" >> $zone_conf
((i += 1))
done
# add our zvol to the zone
$ECHO "add device" >> $zone_conf
$ECHO "set match=$ZVOL_DEVDIR/$pool_name/zone_zvol" >> $zone_conf
$ECHO "end" >> $zone_conf
# add a corresponding zvol rdsk to the zone
$ECHO "add device" >> $zone_conf
$ECHO "set match=$ZVOL_RDEVDIR/$pool_name/zone_zvol" >> $zone_conf
$ECHO "end" >> $zone_conf
# once it's supported, we'll add our filesystem to the zone
# $ECHO "add fs" >> $zone_conf
# $ECHO "set type=zfs" >> $zone_conf
# $ECHO "set special=$pool_name/zfs_filesystem" >> $zone_conf
# $ECHO "set dir=/export/zfs_filesystem" >> $zone_conf
# $ECHO "end" >> $zone_conf
$ECHO "verify" >> $zone_conf
$ECHO "commit" >> $zone_conf
log_must $ZONECFG -z $zone_name -f $zone_conf
log_must $RM -f $zone_conf
# Install the zone
$ZONEADM -z $zone_name install
if (($? == 0)); then
log_note "SUCCESS: $ZONEADM -z $zone_name install"
else
log_fail "FAIL: $ZONEADM -z $zone_name install"
fi
# Install sysidcfg file
#
typeset sysidcfg=$zone_root/$zone_name/root/etc/sysidcfg
$ECHO "system_locale=C" > $sysidcfg
$ECHO "terminal=dtterm" >> $sysidcfg
$ECHO "network_interface=primary {" >> $sysidcfg
$ECHO "hostname=$zone_name" >> $sysidcfg
$ECHO "}" >> $sysidcfg
$ECHO "name_service=NONE" >> $sysidcfg
$ECHO "root_password=mo791xfZ/SFiw" >> $sysidcfg
$ECHO "security_policy=NONE" >> $sysidcfg
$ECHO "timezone=US/Eastern" >> $sysidcfg
# Boot this zone
log_must $ZONEADM -z $zone_name boot
}
#
# Reexport TESTPOOL & TESTPOOL(1-4)
#
function reexport_pool
{
typeset -i cntctr=5
typeset -i i=0
while ((i < cntctr)); do
if ((i == 0)); then
TESTPOOL=$ZONE_POOL/$ZONE_CTR$i
if ! ismounted $TESTPOOL; then
log_must $ZFS mount $TESTPOOL
fi
else
eval TESTPOOL$i=$ZONE_POOL/$ZONE_CTR$i
if eval ! ismounted \$TESTPOOL$i; then
log_must eval $ZFS mount \$TESTPOOL$i
fi
fi
((i += 1))
done
}
#
# Verify a given disk is online or offline
#
# Return 0 if pool/disk matches expected state, 1 otherwise
#
function check_state # pool disk state{online,offline}
{
typeset pool=$1
typeset disk=${2#$DEV_DSKDIR/}
typeset state=$3
$ZPOOL status -v $pool | grep "$disk" \
| grep -i "$state" > /dev/null 2>&1
return $?
}
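#
# Illustrative usage sketch (comment only): callers usually combine
# check_state() with log_must/log_fail; the disk may be passed with or
# without the $DEV_DSKDIR prefix. $TESTPOOL and $DISK are assumed to be
# set by the test.
#
#	check_state $TESTPOOL $DISK "online" || \
#		log_fail "$DISK of $TESTPOOL is not online"
#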
#
# Get the mountpoint of a snapshot.
# A snapshot uses <mp_filesystem>/.zfs/snapshot/<snap>
# as its mountpoint
#
function snapshot_mountpoint
{
typeset dataset=${1:-$TESTPOOL/$TESTFS@$TESTSNAP}
if [[ $dataset != *@* ]]; then
log_fail "Error name of snapshot '$dataset'."
fi
typeset fs=${dataset%@*}
typeset snap=${dataset#*@}
if [[ -z $fs || -z $snap ]]; then
log_fail "Error name of snapshot '$dataset'."
fi
$ECHO $(get_prop mountpoint $fs)/.zfs/snapshot/$snap
}
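#
# Illustrative usage sketch (comment only), assuming the snapshot
# $TESTPOOL/$TESTFS@$TESTSNAP exists:
#
#	mtpt=$(snapshot_mountpoint $TESTPOOL/$TESTFS@$TESTSNAP)
#	[[ -d $mtpt ]] || log_fail "snapshot mountpoint $mtpt not found"
#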
#
# Given a pool and file system, this function will verify the file system
# using the zdb internal tool. Note that the pool is exported and imported
# to ensure it has consistent state.
#
function verify_filesys # pool filesystem dir
{
typeset pool="$1"
typeset filesys="$2"
typeset zdbout="/tmp/zdbout.$$"
shift
shift
typeset dirs=$@
typeset search_path=""
log_note "Calling $ZDB to verify filesystem '$filesys'"
$ZFS unmount -a > /dev/null 2>&1
log_must $ZPOOL export $pool
if [[ -n $dirs ]] ; then
for dir in $dirs ; do
search_path="$search_path -d $dir"
done
fi
log_must $ZPOOL import $search_path $pool
$ZDB -cudi $filesys > $zdbout 2>&1
if [[ $? != 0 ]]; then
log_note "Output: $ZDB -cudi $filesys"
$CAT $zdbout
log_fail "$ZDB detected errors with: '$filesys'"
fi
log_must $ZFS mount -a
log_must $RM -rf $zdbout
}
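#
# Illustrative usage sketch (comment only): verify a filesystem in a pool
# backed by file vdevs, passing the directory that holds those files so
# the 'zpool import' above can find them. $TESTPOOL, $TESTFS and $TESTDIR
# are assumed to come from the test framework.
#
#	verify_filesys $TESTPOOL $TESTPOOL/$TESTFS $TESTDIR
#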
#
# Given a pool, list all disks in the pool
#
function get_disklist # pool
{
typeset disklist=""
disklist=$($ZPOOL iostat -v $1 | $NAWK '(NR >4) {print $1}' | \
$GREP -v "\-\-\-\-\-" | \
$EGREP -v -e "^(mirror|raidz1|raidz2|spare|log|cache)$")
$ECHO $disklist
}
#
# Given a pool, list all disks in the pool with their full
# path (like "/dev/sda" instead of "sda").
#
function get_disklist_fullpath # pool
{
args="-P $1"
get_disklist $args
}
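#
# Illustrative usage sketch (comment only): iterate over every disk in a
# pool; the -P variant above returns full paths such as "/dev/sda".
#
#	for disk in $(get_disklist $TESTPOOL); do
#		log_note "pool member: $disk"
#	done
#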
# /**
# This function kills a given list of processes after a time period. We use
# this in the stress tests instead of STF_TIMEOUT so that we can have processes
# run for a fixed amount of time, yet still pass. Tests that hit STF_TIMEOUT
# would be listed as FAIL, which we don't want : we're happy with stress tests
# running for a certain amount of time, then finishing.
#
# @param $1 the time in seconds after which we should terminate these processes
# @param $2..$n the processes we wish to terminate.
# */
function stress_timeout
{
typeset -i TIMEOUT=$1
shift
typeset cpids="$@"
log_note "Waiting for child processes($cpids). " \
"It could last dozens of minutes, please be patient ..."
log_must $SLEEP $TIMEOUT
log_note "Killing child processes after ${TIMEOUT} stress timeout."
typeset pid
for pid in $cpids; do
$PS -p $pid > /dev/null 2>&1
if (($? == 0)); then
log_must $KILL -USR1 $pid
fi
done
}
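#
# Illustrative usage sketch (comment only): start a background worker and
# let stress_timeout() terminate it after a fixed run time. The worker
# command is a placeholder.
#
#	some_stress_worker &
#	stress_timeout 600 $!
#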
#
# Verify a given hotspare disk is inuse or avail
#
# Return 0 if pool/disk matches expected state, 1 otherwise
#
function check_hotspare_state # pool disk state{inuse,avail}
{
typeset pool=$1
typeset disk=${2#$DEV_DSKDIR/}
typeset state=$3
cur_state=$(get_device_state $pool $disk "spares")
if [[ $state != ${cur_state} ]]; then
return 1
fi
return 0
}
#
# Verify a given slog disk is inuse or avail
#
# Return 0 if pool/disk matches expected state, 1 otherwise
#
function check_slog_state # pool disk state{online,offline,unavail}
{
typeset pool=$1
typeset disk=${2#$DEV_DSKDIR/}
typeset state=$3
cur_state=$(get_device_state $pool $disk "logs")
if [[ $state != ${cur_state} ]]; then
return 1
fi
return 0
}
#
# Verify a given vdev disk is inuse or avail
#
# Return 0 if pool/disk matches expected state, 1 otherwise
#
function check_vdev_state # pool disk state{online,offline,unavail}
{
typeset pool=$1
typeset disk=${2#$DEV_DSKDIR/}
typeset state=$3
cur_state=$(get_device_state $pool $disk)
if [[ $state != ${cur_state} ]]; then
return 1
fi
return 0
}
#
# Check the output of 'zpool status -v <pool>'
# to see if the content of <token> contains the <keyword> specified.
#
# Return 0 if it contains the keyword, 1 otherwise
#
function check_pool_status # pool token keyword
{
typeset pool=$1
typeset token=$2
typeset keyword=$3
$ZPOOL status -v "$pool" 2>/dev/null | $NAWK -v token="$token:" '
($1==token) {print $0}' \
| $GREP -i "$keyword" > /dev/null 2>&1
return $?
}
#
# The following five functions are wrappers around check_pool_status():
# is_pool_resilvering - check if a resilver is in progress on the pool
# is_pool_resilvered - check if a resilver has completed on the pool
# is_pool_scrubbing - check if a scrub is in progress on the pool
# is_pool_scrubbed - check if a scrub has completed on the pool
# is_pool_scrub_stopped - check if a scrub on the pool was canceled
#
function is_pool_resilvering #pool
{
check_pool_status "$1" "scan" "resilver in progress since "
return $?
}
function is_pool_resilvered #pool
{
check_pool_status "$1" "scan" "resilvered "
return $?
}
function is_pool_scrubbing #pool
{
check_pool_status "$1" "scan" "scrub in progress since "
return $?
}
function is_pool_scrubbed #pool
{
check_pool_status "$1" "scan" "scrub repaired"
return $?
}
function is_pool_scrub_stopped #pool
{
check_pool_status "$1" "scan" "scrub canceled"
return $?
}
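#
# Illustrative usage sketch (comment only): poll the wrappers above while
# waiting for a scrub started by the test to complete.
#
#	log_must $ZPOOL scrub $TESTPOOL
#	while is_pool_scrubbing $TESTPOOL; do
#		$SLEEP 1
#	done
#	log_must is_pool_scrubbed $TESTPOOL
#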
#
# Use create_pool()/destroy_pool() to clean up the information
# on the given disks to avoid slice overlapping.
#
function cleanup_devices #vdevs
{
typeset pool="foopool$$"
if poolexists $pool ; then
destroy_pool $pool
fi
create_pool $pool $@
destroy_pool $pool
return 0
}
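#
# Illustrative usage sketch (comment only), typically called from a test's
# cleanup path with the same disks the test partitioned:
#
#	cleanup_devices $DISKS
#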
#
# Verify the rsh connectivity to each remote host in RHOSTS.
#
# Return 0 if remote host is accessible; otherwise 1.
# $1 remote host name
# $2 username
#
function verify_rsh_connect #rhost, username
{
typeset rhost=$1
typeset username=$2
typeset rsh_cmd="$RSH -n"
typeset cur_user=
$GETENT hosts $rhost >/dev/null 2>&1
if (($? != 0)); then
log_note "$rhost cannot be found from" \
"administrative database."
return 1
fi
$PING $rhost 3 >/dev/null 2>&1
if (($? != 0)); then
log_note "$rhost is not reachable."
return 1
fi
if ((${#username} != 0)); then
rsh_cmd="$rsh_cmd -l $username"
cur_user="given user \"$username\""
else
cur_user="current user \"`$LOGNAME`\""
fi
if ! $rsh_cmd $rhost $TRUE; then
log_note "$RSH to $rhost is not accessible" \
"with $cur_user."
return 1
fi
return 0
}
#
# Verify the remote host connection via rsh after rebooting
# $1 remote host
#
function verify_remote
{
rhost=$1
#
# The following loop waits for the remote system to reboot.
# Each iteration waits 150 seconds; there are 5 iterations in
# total, so the overall timeout for the reboot is roughly
# 12.5 minutes.
#
typeset -i count=0
while ! verify_rsh_connect $rhost; do
sleep 150
((count = count + 1))
if ((count > 5)); then
return 1
fi
done
return 0
}
#
# Replacement function for /usr/bin/rsh. This function wraps
# /usr/bin/rsh and additionally returns the execution status of the
# last command.
#
# $1 username passed down to the -l option of /usr/bin/rsh
# $2 remote machine hostname
# $3... command string
#
function rsh_status
{
typeset ruser=$1
typeset rhost=$2
typeset -i ret=0
typeset cmd_str=""
typeset rsh_str=""
shift; shift
cmd_str="$@"
err_file=/tmp/${rhost}.$$.err
if ((${#ruser} == 0)); then
rsh_str="$RSH -n"
else
rsh_str="$RSH -n -l $ruser"
fi
$rsh_str $rhost /bin/ksh -c "'$cmd_str; \
print -u 2 \"status=\$?\"'" \
>/dev/null 2>$err_file
ret=$?
if (($ret != 0)); then
$CAT $err_file
$RM -f $std_file $err_file
log_fail "$RSH itself failed with exit code $ret..."
fi
ret=$($GREP -v 'print -u 2' $err_file | $GREP 'status=' | \
$CUT -d= -f2)
(($ret != 0)) && $CAT $err_file >&2
$RM -f $err_file >/dev/null 2>&1
return $ret
}
#
# Get the SUNWstc-fs-zfs package installation path in a remote host
# $1 remote host name
#
function get_remote_pkgpath
{
typeset rhost=$1
typeset pkgpath=""
pkgpath=$($RSH -n $rhost "$PKGINFO -l SUNWstc-fs-zfs | $GREP BASEDIR: |\
$CUT -d: -f2")
$ECHO $pkgpath
}
#/**
# A function to find and locate free disks on a system or from given
# disks as the parameter. It works by locating disks that are in use
# as swap devices and dump devices, and also disks listed in /etc/vfstab
#
# $@ given disks to find which are free, default is all disks in
# the test system
#
# @return a string containing the list of available disks
#*/
function find_disks
{
# Trust provided list, no attempt is made to locate unused devices.
if is_linux; then
$ECHO "$@"
return
fi
sfi=/tmp/swaplist.$$
dmpi=/tmp/dumpdev.$$
max_finddisksnum=${MAX_FINDDISKSNUM:-6}
$SWAP -l > $sfi
$DUMPADM > $dmpi 2>/dev/null
# write an awk script that can process the output of format
# to produce a list of disks we know about. Note that we have
# to escape "$2" so that the shell doesn't interpret it while
# we're creating the awk script.
# -------------------
$CAT > /tmp/find_disks.awk <<EOF
#!/bin/nawk -f
BEGIN { FS="."; }
/^Specify disk/{
searchdisks=0;
}
{
if (searchdisks && \$2 !~ "^$"){
split(\$2,arr," ");
print arr[1];
}
}
/^AVAILABLE DISK SELECTIONS:/{
searchdisks=1;
}
EOF
#---------------------
$CHMOD 755 /tmp/find_disks.awk
disks=${@:-$($ECHO "" | $FORMAT -e 2>/dev/null | /tmp/find_disks.awk)}
$RM /tmp/find_disks.awk
unused=""
for disk in $disks; do
# Check for mounted
$GREP "${disk}[sp]" /etc/mnttab >/dev/null
(($? == 0)) && continue
# Check for swap
$GREP "${disk}[sp]" $sfi >/dev/null
(($? == 0)) && continue
# check for dump device
$GREP "${disk}[sp]" $dmpi >/dev/null
(($? == 0)) && continue
# check to see if this disk hasn't been explicitly excluded
# by a user-set environment variable
$ECHO "${ZFS_HOST_DEVICES_IGNORE}" | $GREP "${disk}" > /dev/null
(($? == 0)) && continue
unused_candidates="$unused_candidates $disk"
done
$RM $sfi
$RM $dmpi
# now just check to see if those disks do actually exist
# by looking for a device pointing to the first slice in
# each case. limit the number to max_finddisksnum
count=0
for disk in $unused_candidates; do
if [ -b $DEV_DSKDIR/${disk}s0 ]; then
if [ $count -lt $max_finddisksnum ]; then
unused="$unused $disk"
# do not impose limit if $@ is provided
[[ -z $@ ]] && ((count = count + 1))
fi
fi
done
# finally, return our disk list
$ECHO $unused
}
#
# Add specified user to specified group
#
# $1 group name
# $2 user name
# $3 base of the homedir (optional)
#
function add_user #<group_name> <user_name> <basedir>
{
typeset gname=$1
typeset uname=$2
typeset basedir=${3:-"/var/tmp"}
if ((${#gname} == 0 || ${#uname} == 0)); then
log_fail "group name or user name are not defined."
fi
log_must $USERADD -g $gname -d $basedir/$uname -m $uname
# Add new users to the same group as the command line utilities.
# This allows them to be run out of the original user's home
# directory as long as it is permissioned to be group readable.
if is_linux; then
cmd_group=$(stat --format="%G" $ZFS)
log_must $USERMOD -a -G $cmd_group $uname
fi
return 0
}
#
# Delete the specified user.
#
# $1 login name
# $2 base of the homedir (optional)
#
function del_user #<logname> <basedir>
{
typeset user=$1
typeset basedir=${2:-"/var/tmp"}
if ((${#user} == 0)); then
log_fail "login name is necessary."
fi
if $ID $user > /dev/null 2>&1; then
log_must $USERDEL $user
fi
[[ -d $basedir/$user ]] && $RM -fr $basedir/$user
return 0
}
#
# Select valid gid and create specified group.
#
# $1 group name
#
function add_group #<group_name>
{
typeset group=$1
if ((${#group} == 0)); then
log_fail "group name is necessary."
fi
# Assign 100 as the base gid, a larger value is selected for
# Linux because for many distributions 1000 and under are reserved.
if is_linux; then
while true; do
$GROUPADD $group > /dev/null 2>&1
typeset -i ret=$?
case $ret in
0) return 0 ;;
*) return 1 ;;
esac
done
else
typeset -i gid=100
while true; do
$GROUPADD -g $gid $group > /dev/null 2>&1
typeset -i ret=$?
case $ret in
0) return 0 ;;
# The gid is not unique
4) ((gid += 1)) ;;
*) return 1 ;;
esac
done
fi
}
#
# Delete the specified group.
#
# $1 group name
#
function del_group #<group_name>
{
typeset grp=$1
if ((${#grp} == 0)); then
log_fail "group name is necessary."
fi
if is_linux; then
$GETENT group $grp > /dev/null 2>&1
typeset -i ret=$?
case $ret in
# Group does not exist.
2) return 0 ;;
# Name already exists as a group name
0) log_must $GROUPDEL $grp ;;
*) return 1 ;;
esac
else
$GROUPMOD -n $grp $grp > /dev/null 2>&1
typeset -i ret=$?
case $ret in
# Group does not exist.
6) return 0 ;;
# Name already exists as a group name
9) log_must $GROUPDEL $grp ;;
*) return 1 ;;
esac
fi
return 0
}
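#
# Illustrative usage sketch (comment only) for the user/group helpers
# above; the group and user names are placeholders.
#
#	log_must add_group zfsgrp
#	log_must add_user zfsgrp zfsusr
#	... run operations as the unprivileged user ...
#	log_must del_user zfsusr
#	log_must del_group zfsgrp
#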
#
# This function will return true if it's safe to destroy the pool passed
# as argument 1. It checks for pools based on zvols and files, and also
# files contained in a pool that may have a different mountpoint.
#
function safe_to_destroy_pool { # $1 the pool name
typeset pool=""
typeset DONT_DESTROY=""
# We check that by deleting the $1 pool, we're not
# going to pull the rug out from other pools. Do this
# by looking at all other pools, ensuring that they
# aren't built from files or zvols contained in this pool.
for pool in $($ZPOOL list -H -o name)
do
ALTMOUNTPOOL=""
# this is a list of the top-level directories in each of the
# files that make up the path to the files the pool is based on
FILEPOOL=$($ZPOOL status -v $pool | $GREP /$1/ | \
$AWK '{print $1}')
# this is a list of the zvols that make up the pool
ZVOLPOOL=$($ZPOOL status -v $pool | $GREP "$ZVOL_DEVDIR/$1$" \
| $AWK '{print $1}')
# also want to determine if it's a file-based pool using an
# alternate mountpoint...
POOL_FILE_DIRS=$($ZPOOL status -v $pool | \
$GREP / | $AWK '{print $1}' | \
$AWK -F/ '{print $2}' | $GREP -v "dev")
for pooldir in $POOL_FILE_DIRS
do
OUTPUT=$($ZFS list -H -r -o mountpoint $1 | \
$GREP "${pooldir}$" | $AWK '{print $1}')
ALTMOUNTPOOL="${ALTMOUNTPOOL}${OUTPUT}"
done
if [ ! -z "$ZVOLPOOL" ]
then
DONT_DESTROY="true"
log_note "Pool $pool is built from $ZVOLPOOL on $1"
fi
if [ ! -z "$FILEPOOL" ]
then
DONT_DESTROY="true"
log_note "Pool $pool is built from $FILEPOOL on $1"
fi
if [ ! -z "$ALTMOUNTPOOL" ]
then
DONT_DESTROY="true"
log_note "Pool $pool is built from $ALTMOUNTPOOL on $1"
fi
done
if [ -z "${DONT_DESTROY}" ]
then
return 0
else
log_note "Warning: it is not safe to destroy $1!"
return 1
fi
}
#
# Get the available ZFS compression options
# $1 option type zfs_set|zfs_compress
#
function get_compress_opts
{
typeset COMPRESS_OPTS
typeset GZIP_OPTS="gzip gzip-1 gzip-2 gzip-3 gzip-4 gzip-5 \
gzip-6 gzip-7 gzip-8 gzip-9"
if [[ $1 == "zfs_compress" ]] ; then
COMPRESS_OPTS="on lzjb"
elif [[ $1 == "zfs_set" ]] ; then
COMPRESS_OPTS="on off lzjb"
fi
typeset valid_opts="$COMPRESS_OPTS"
$ZFS get 2>&1 | $GREP gzip >/dev/null 2>&1
if [[ $? -eq 0 ]]; then
valid_opts="$valid_opts $GZIP_OPTS"
fi
$ECHO "$valid_opts"
}
#
# Verify zfs operation with -p option work as expected
# $1 operation, value could be create, clone or rename
# $2 dataset type, value could be fs or vol
# $3 dataset name
# $4 new dataset name
#
function verify_opt_p_ops
{
typeset ops=$1
typeset datatype=$2
typeset dataset=$3
typeset newdataset=$4
if [[ $datatype != "fs" && $datatype != "vol" ]]; then
log_fail "$datatype is not supported."
fi
# check parameters accordingly
case $ops in
create)
newdataset=$dataset
dataset=""
if [[ $datatype == "vol" ]]; then
ops="create -V $VOLSIZE"
fi
;;
clone)
if [[ -z $newdataset ]]; then
log_fail "newdataset should not be empty" \
"when ops is $ops."
fi
log_must datasetexists $dataset
log_must snapexists $dataset
;;
rename)
if [[ -z $newdataset ]]; then
log_fail "newdataset should not be empty" \
"when ops is $ops."
fi
log_must datasetexists $dataset
log_mustnot snapexists $dataset
;;
*)
log_fail "$ops is not supported."
;;
esac
# make sure the upper level filesystem does not exist
if datasetexists ${newdataset%/*} ; then
log_must $ZFS destroy -rRf ${newdataset%/*}
fi
# without -p option, operation will fail
log_mustnot $ZFS $ops $dataset $newdataset
log_mustnot datasetexists $newdataset ${newdataset%/*}
# with -p option, operation should succeed
log_must $ZFS $ops -p $dataset $newdataset
block_device_wait
if ! datasetexists $newdataset ; then
log_fail "-p option does not work for $ops"
fi
# when $ops is create or clone, redo the operation still return zero
if [[ $ops != "rename" ]]; then
log_must $ZFS $ops -p $dataset $newdataset
fi
return 0
}
#
# Get configuration of pool
# $1 pool name
# $2 config name
#
function get_config
{
typeset pool=$1
typeset config=$2
typeset alt_root
if ! poolexists "$pool" ; then
return 1
fi
alt_root=$($ZPOOL list -H $pool | $AWK '{print $NF}')
if [[ $alt_root == "-" ]]; then
value=$($ZDB -C $pool | $GREP "$config:" | $AWK -F: \
'{print $2}')
else
value=$($ZDB -e $pool | $GREP "$config:" | $AWK -F: \
'{print $2}')
fi
if [[ -n $value ]] ; then
value=${value#'}
value=${value%'}
fi
echo $value
return 0
}
#
# Private function. Randomly select one of the items from the arguments.
#
# $1 count
# $2-n string
#
function _random_get
{
typeset cnt=$1
shift
typeset str="$@"
typeset -i ind
((ind = RANDOM % cnt + 1))
typeset ret=$($ECHO "$str" | $CUT -f $ind -d ' ')
$ECHO $ret
}
#
# Randomly select one item from the arguments; the result may be empty (NONE)
#
function random_get_with_non
{
typeset -i cnt=$#
((cnt += 1))
_random_get "$cnt" "$@"
}
#
# Randomly select one item from the arguments; the result is never empty
#
function random_get
{
_random_get "$#" "$@"
}
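#
# Illustrative usage sketch (comment only): pick a random value from a
# list, either always returning one of the items or possibly returning
# nothing at all.
#
#	opt=$(random_get on off lzjb)
#	maybe_opt=$(random_get_with_non on off lzjb)
#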
#
# Detect if the current system support slog
#
function verify_slog_support
{
typeset dir=/tmp/disk.$$
typeset pool=foo.$$
typeset vdev=$dir/a
typeset sdev=$dir/b
$MKDIR -p $dir
$MKFILE 64M $vdev $sdev
typeset -i ret=0
if ! $ZPOOL create -n $pool $vdev log $sdev > /dev/null 2>&1; then
ret=1
fi
$RM -r $dir
return $ret
}
#
# The function will generate a dataset name with specific length
# $1, the length of the name
# $2, the base string to construct the name
#
function gen_dataset_name
{
typeset -i len=$1
typeset basestr="$2"
typeset -i baselen=${#basestr}
typeset -i iter=0
typeset l_name=""
if ((len % baselen == 0)); then
((iter = len / baselen))
else
((iter = len / baselen + 1))
fi
while ((iter > 0)); do
l_name="${l_name}$basestr"
((iter -= 1))
done
$ECHO $l_name
}
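#
# Illustrative usage sketch (comment only): build a long dataset name for
# a name-length test; the base string and length are arbitrary.
#
#	longname=$(gen_dataset_name 200 "abcdefgh")
#	log_must $ZFS create $TESTPOOL/$longname
#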
#
# Get cksum tuple of dataset
# $1 dataset name
#
# sample zdb output:
# Dataset data/test [ZPL], ID 355, cr_txg 2413856, 31.0K, 7 objects, rootbp
# DVA[0]=<0:803046400:200> DVA[1]=<0:81199000:200> [L0 DMU objset] fletcher4
# lzjb LE contiguous unique double size=800L/200P birth=2413856L/2413856P
# fill=7 cksum=11ce125712:643a9c18ee2:125e25238fca0:254a3f74b59744
function datasetcksum
{
typeset cksum
$SYNC
cksum=$($ZDB -vvv $1 | $GREP "^Dataset $1 \[" | $GREP "cksum" \
| $AWK -F= '{print $7}')
$ECHO $cksum
}
#
# Get cksum of file
# $1 file path
#
function checksum
{
typeset cksum
cksum=$($CKSUM $1 | $AWK '{print $1}')
$ECHO $cksum
}
#
# Get the given disk/slice state from the specific field of the pool
#
function get_device_state #pool disk field("", "spares","logs")
{
typeset pool=$1
typeset disk=${2#$DEV_DSKDIR/}
typeset field=${3:-$pool}
state=$($ZPOOL status -v "$pool" 2>/dev/null | \
$NAWK -v device=$disk -v pool=$pool -v field=$field \
'BEGIN {startconfig=0; startfield=0; }
/config:/ {startconfig=1}
(startconfig==1) && ($1==field) {startfield=1; next;}
(startfield==1) && ($1==device) {print $2; exit;}
(startfield==1) &&
($1==field || $1 ~ "^spares$" || $1 ~ "^logs$") {startfield=0}')
echo $state
}
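#
# Illustrative usage sketch (comment only): the check_*_state() helpers
# above call this function; it can also be used directly, e.g. for a
# spare device. $SPARE_DISK is a placeholder.
#
#	state=$(get_device_state $TESTPOOL $SPARE_DISK "spares")
#	[[ $state == "AVAIL" ]] || log_fail "spare is '$state', expected AVAIL"
#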
#
# Print the filesystem type of the given directory
#
# $1 directory name
#
function get_fstype
{
typeset dir=$1
if [[ -z $dir ]]; then
log_fail "Usage: get_fstype <directory>"
fi
#
# $ df -n /
# / : ufs
#
$DF -n $dir | $AWK '{print $3}'
}
#
# Given a disk, label it with VTOC regardless of what label was on the disk
# $1 disk
#
function labelvtoc
{
typeset disk=$1
if [[ -z $disk ]]; then
log_fail "The disk name is unspecified."
fi
typeset label_file=/var/tmp/labelvtoc.$$
typeset arch=$($UNAME -p)
if is_linux; then
log_note "Currently unsupported by the test framework"
return 1
fi
if [[ $arch == "i386" ]]; then
$ECHO "label" > $label_file
$ECHO "0" >> $label_file
$ECHO "" >> $label_file
$ECHO "q" >> $label_file
$ECHO "q" >> $label_file
$FDISK -B $disk >/dev/null 2>&1
# wait a while for fdisk to finish
$SLEEP 60
elif [[ $arch == "sparc" ]]; then
$ECHO "label" > $label_file
$ECHO "0" >> $label_file
$ECHO "" >> $label_file
$ECHO "" >> $label_file
$ECHO "" >> $label_file
$ECHO "q" >> $label_file
else
log_fail "unknown arch type"
fi
$FORMAT -e -s -d $disk -f $label_file
typeset -i ret_val=$?
$RM -f $label_file
#
# wait for the format to finish
#
$SLEEP 60
if ((ret_val != 0)); then
log_fail "unable to label $disk as VTOC."
fi
return 0
}
#
# check if the system was installed as zfsroot or not
# return: 0 if true, otherwise non-zero
#
function is_zfsroot
{
$DF -n / | $GREP zfs > /dev/null 2>&1
return $?
}
#
# get the root filesystem name if it's zfsroot system.
#
# return: root filesystem name
function get_rootfs
{
typeset rootfs=""
rootfs=$($AWK '{if ($2 == "/" && $3 == "zfs") print $1}' \
/etc/mnttab)
if [[ -z "$rootfs" ]]; then
log_fail "Can not get rootfs"
fi
$ZFS list $rootfs > /dev/null 2>&1
if (($? == 0)); then
$ECHO $rootfs
else
log_fail "This is not a zfsroot system."
fi
}
#
# get the rootfs's pool name
# return:
# rootpool name
#
function get_rootpool
{
typeset rootfs=""
typeset rootpool=""
rootfs=$($AWK '{if ($2 == "/" && $3 =="zfs") print $1}' \
/etc/mnttab)
if [[ -z "$rootfs" ]]; then
log_fail "Can not get rootpool"
fi
$ZFS list $rootfs > /dev/null 2>&1
if (($? == 0)); then
rootpool=`$ECHO $rootfs | awk -F\/ '{print $1}'`
$ECHO $rootpool
else
log_fail "This is not a zfsroot system."
fi
}
#
# Get the sub string from specified source string
#
# $1 source string
# $2 start position. Count from 1
# $3 offset
#
function get_substr #src_str pos offset
{
typeset pos offset
$ECHO $1 | \
$NAWK -v pos=$2 -v offset=$3 '{print substr($0, pos, offset)}'
}
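#
# Illustrative usage sketch (comment only): positions are 1-based, so this
# extracts "bcd" from "abcdef".
#
#	part=$(get_substr "abcdef" 2 3)
#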
#
# Check if the given device is a physical device
#
function is_physical_device #device
{
typeset device=${1#$DEV_DSKDIR}
device=${device#$DEV_RDSKDIR}
if is_linux; then
[[ -b "$DEV_DSKDIR/$device" ]] && \
[[ -f /sys/module/loop/parameters/max_part ]]
return $?
else
$ECHO $device | $EGREP "^c[0-F]+([td][0-F]+)+$" > /dev/null 2>&1
return $?
fi
}
#
# Check if the given device is a real device (i.e. a SCSI device)
#
function is_real_device #disk
{
typeset disk=$1
[[ -z $disk ]] && log_fail "No argument for disk given."
if is_linux; then
$LSBLK $DEV_RDSKDIR/$disk -o TYPE | $EGREP disk > /dev/null 2>&1
return $?
fi
}
#
# Check if the given device is a loop device
#
function is_loop_device #disk
{
typeset disk=$1
[[ -z $disk ]] && log_fail "No argument for disk given."
if is_linux; then
$LSBLK $DEV_RDSKDIR/$disk -o TYPE | $EGREP loop > /dev/null 2>&1
return $?
fi
}
#
# Check if the given device is a multipath device and if there is a symbolic
# link to a device mapper and to a disk
# Currently no support for dm devices alone without multipath
#
function is_mpath_device #disk
{
typeset disk=$1
[[ -z $disk ]] && log_fail "No argument for disk given."
if is_linux; then
$LSBLK $DEV_MPATHDIR/$disk -o TYPE | $EGREP mpath > /dev/null 2>&1
if (($? == 0)); then
$READLINK $DEV_MPATHDIR/$disk > /dev/null 2>&1
return $?
else
return $?
fi
fi
}
# Set the slice prefix for disk partitioning depending
# on whether the device is a real, multipath, or loop device.
# Currently all disks have to be of the same type, so only
# checks first disk to determine slice prefix.
#
function set_slice_prefix
{
typeset disk
typeset -i i=0
if is_linux; then
while (( i < $DISK_ARRAY_NUM )); do
disk="$($ECHO $DISKS | $NAWK '{print $(i + 1)}')"
if ( is_mpath_device $disk ) && [[ -z $($ECHO $disk | awk 'substr($1,18,1)\
~ /^[[:digit:]]+$/') ]] || ( is_real_device $disk ); then
export SLICE_PREFIX=""
return 0
elif ( is_mpath_device $disk || is_loop_device $disk ); then
export SLICE_PREFIX="p"
return 0
else
log_fail "$disk not supported for partitioning."
fi
(( i = i + 1))
done
fi
}
#
# Set the directory path of the listed devices in $DISK_ARRAY_NUM
# Currently all disks have to be of the same type, so only
# checks first disk to determine device directory
# default = /dev (linux)
# real disk = /dev (linux)
# multipath device = /dev/mapper (linux)
#
function set_device_dir
{
typeset disk
typeset -i i=0
if is_linux; then
while (( i < $DISK_ARRAY_NUM )); do
disk="$($ECHO $DISKS | $NAWK '{print $(i + 1)}')"
if is_mpath_device $disk; then
export DEV_DSKDIR=$DEV_MPATHDIR
return 0
else
export DEV_DSKDIR=$DEV_RDSKDIR
return 0
fi
(( i = i + 1))
done
else
export DEV_DSKDIR=$DEV_RDSKDIR
fi
}
#
# Get the directory path of given device
#
function get_device_dir #device
{
typeset device=$1
if ! $(is_physical_device $device) ; then
if [[ $device != "/" ]]; then
device=${device%/*}
fi
if [[ -b "$DEV_DSKDIR/$device" ]]; then
device="$DEV_DSKDIR"
fi
$ECHO $device
else
$ECHO "$DEV_DSKDIR"
fi
}
#
# Get the package name
#
function get_package_name
{
typeset dirpath=${1:-$STC_NAME}
echo "SUNWstc-${dirpath}" | /usr/bin/sed -e "s/\//-/g"
}
#
# Get the number of words in a string separated by white space
#
function get_word_count
{
$ECHO $1 | $WC -w
}
#
# Verify that the required number of disks is given
#
function verify_disk_count
{
typeset -i min=${2:-1}
typeset -i count=$(get_word_count "$1")
if ((count < min)); then
log_untested "A minimum of $min disks is required to run." \
" You specified $count disk(s)"
fi
}
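#
# Illustrative usage sketch (comment only), commonly placed in a test's
# setup to bail out early when too few disks were supplied:
#
#	verify_disk_count "$DISKS" 2
#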
function ds_is_volume
{
typeset type=$(get_prop type $1)
[[ $type = "volume" ]] && return 0
return 1
}
function ds_is_filesystem
{
typeset type=$(get_prop type $1)
[[ $type = "filesystem" ]] && return 0
return 1
}
function ds_is_snapshot
{
typeset type=$(get_prop type $1)
[[ $type = "snapshot" ]] && return 0
return 1
}
#
# Check if Trusted Extensions are installed and enabled
#
function is_te_enabled
{
$SVCS -H -o state labeld 2>/dev/null | $GREP "enabled"
if (($? != 0)); then
return 1
else
return 0
fi
}
# Utility function to determine if a system has multiple cpus.
function is_mp
{
if is_linux; then
(($($NPROC) > 1))
else
(($($PSRINFO | $WC -l) > 1))
fi
return $?
}
function get_cpu_freq
{
if is_linux; then
lscpu | $AWK '/CPU MHz/ { print $3 }'
else
$PSRINFO -v 0 | $AWK '/processor operates at/ {print $6}'
fi
}
# Run the given command as the user provided.
function user_run
{
typeset user=$1
shift
log_note "user:$user $@"
eval \$SU \$user -c \"$@\" > /tmp/out 2>/tmp/err
return $?
}
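#
# Illustrative usage sketch (comment only): run a delegated command as an
# unprivileged user created with add_user(); stdout/stderr are captured in
# /tmp/out and /tmp/err as coded above. The user name is a placeholder.
#
#	log_must user_run zfsusr "$ZFS snapshot $TESTPOOL/$TESTFS@snap"
#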
#
# Check if the pool contains the specified vdevs
#
# $1 pool
# $2..n <vdev> ...
#
# Return 0 if the vdevs are contained in the pool, 1 if any of the specified
# vdevs is not in the pool, and 2 if pool name is missing.
#
function vdevs_in_pool
{
typeset pool=$1
typeset vdev
if [[ -z $pool ]]; then
log_note "Missing pool name."
return 2
fi
shift
typeset tmpfile=$($MKTEMP)
$ZPOOL list -Hv "$pool" >$tmpfile
for vdev in $@; do
$GREP -w ${vdev##*/} $tmpfile >/dev/null 2>&1
[[ $? -ne 0 ]] && return 1
done
$RM -f $tmpfile
return 0;
}
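#
# Illustrative usage sketch (comment only): confirm that both vdevs ended
# up in the pool that was just created. $DISK1 and $DISK2 are placeholders.
#
#	log_must $ZPOOL create -f $TESTPOOL mirror $DISK1 $DISK2
#	log_must vdevs_in_pool $TESTPOOL $DISK1 $DISK2
#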
#
# Return the largest of the values passed as arguments.
#
function get_max
{
	typeset -l i max=$1
	shift

	for i in "$@"; do
		max=$((max > i ? max : i))
	done

	echo $max
}

#
# Return the smallest of the values passed as arguments.
#
function get_min
{
	typeset -l i min=$1
	shift

	for i in "$@"; do
		min=$((min < i ? min : i))
	done

	echo $min
}
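#
# Example (illustrative only, not part of the original library): get_max and
# get_min simply walk their arguments keeping a running maximum/minimum, so
# they can pick, e.g., the largest of several measured sizes.  The function
# and variable names below are assumptions made for this sketch.
#
function get_max_min_example
{
	typeset -i size1=1024 size2=4096 size3=2048

	# With these inputs, get_max returns 4096 and get_min returns 1024.
	log_note "max=$(get_max $size1 $size2 $size3)"
	log_note "min=$(get_min $size1 $size2 $size3)"
}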
#
# Wait for newly created block devices to have their minors created.
#
function block_device_wait
{
	if is_linux; then
		$UDEVADM trigger
		$UDEVADM settle
	fi
}
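#
# Example (illustrative only, not part of the original library): on Linux the
# /dev node for a new zvol appears asynchronously, so a test should call
# block_device_wait before touching the device path.  The helper name is an
# assumption; $ZFS, $TESTPOOL and $ZVOL_DEVDIR are assumed to be defined
# elsewhere in this library, as for the other commands above.
#
function create_zvol_and_wait_example
{
	typeset volname=${1:-vol.example}
	typeset volsize=${2:-1G}

	log_must $ZFS create -V $volsize $TESTPOOL/$volname
	# The minor is created asynchronously; wait for udev before using it.
	block_device_wait
	log_must test -b $ZVOL_DEVDIR/$TESTPOOL/$volname
}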
#
# Synchronize all the data in the pool
#
# $1 pool name
#
function sync_pool #pool
{
	typeset pool=${1:-$TESTPOOL}

	log_must $SYNC
	log_must $SLEEP 2
	# Flush all the pool data.
	typeset -i ret
	$ZPOOL scrub $pool >/dev/null 2>&1
	ret=$?
	(( $ret != 0 )) && \
	    log_fail "$ZPOOL scrub $pool failed."

	while ! is_pool_scrubbed $pool; do
		if is_pool_resilvered $pool; then
			log_fail "$pool should not have completed a resilver."
		fi
		log_must $SLEEP 2
	done
}
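#
# Example (illustrative only, not part of the original library): a test that
# writes data and needs it on stable storage before taking a snapshot might
# call sync_pool first.  The helper, file and snapshot names are assumptions
# for this sketch; $TESTPOOL, $TESTFS, $DD and $ZFS are assumed to be defined
# elsewhere in the test suite.
#
function snapshot_after_sync_example
{
	typeset mntpnt=$(get_prop mountpoint $TESTPOOL/$TESTFS)

	log_must $DD if=/dev/urandom of=$mntpnt/sync_example bs=1024k count=4
	# Make sure the new data has reached stable storage in the pool.
	sync_pool $TESTPOOL
	log_must $ZFS snapshot $TESTPOOL/$TESTFS@sync_example
}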