Brian Behlendorf 8c54ddd33a Enable additional test cases
Enable additional test cases.  In most cases this required a few
minor modifications to the test scripts.  In a few cases a real
bug was uncovered and fixed, and in a handful of cases where pools
are layered on pools the test case will be skipped until this is
supported.  Details below for each test case.

* zpool_add_004_pos - Skip test on Linux until adding zvols to pools
  is fully supported and deadlock free.

* zpool_add_005_pos.ksh - Skip dumpadm portion of the test which isn't
  relevant for Linux.  The find_vfstab_dev, find_mnttab_dev, and
  save_dump_dev functions were updated accordingly for Linux.  Add
  O_EXCL to the in-use check to prevent the -f (force) option from
  working for mounted filesystems and improve the resulting error.

* zpool_add_006_pos - Update test case such that it doesn't depend
  on nested pools.  Switch from mkfile to truncate to reduce space
  requirements and speed up the test case.
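
  As a rough sketch of the substitution (the size and path here are
  illustrative), mkfile allocates every block up front while truncate
  only sets the apparent size:

    $ mkfile 100m /var/tmp/vdev-file        # writes all 100m of data
    $ truncate -s 100M /var/tmp/vdev-file   # sparse, same apparent size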

* zpool_clear_001_pos - Speed up test case by filling filesystem to
  25% capacity.

* zpool_create_002_pos, zpool_create_004_pos - Use sparse files for
  file vdevs in order to avoid increasing the partition size.

* zpool_create_006_pos - 6ba1ce9 allows raidz+mirror configs with
  similar redundancy.  Updated the valid_args and forced_args cases.

* zpool_create_008_pos - Disable overlapping partition portion.

* zpool_create_011_neg - Fix to correctly create the extra partition.
  Modified zpool_vdev.c to use fstat64_blk() wrapper which includes
  the st_size even for block devices.

* zpool_create_012_neg - Updated to properly find swap devices.

* zpool_create_014_neg, zpool_create_015_neg - Updated to use
  swap_setup() and swap_cleanup() wrappers which do the right thing
  on Linux and Illumos.  Removed '-n' option which succeeds under
  Linux due to differences in the in-use checks.
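
  A minimal sketch of what such wrappers can look like; the real
  helpers live in the test suite library and this is illustrative only:

    function swap_setup # <device>
    {
        typeset swapdev=$1
        if is_linux; then
            log_must mkswap $swapdev
            log_must swapon $swapdev
        else
            log_must swap -a $swapdev
        fi
    }

    function swap_cleanup # <device>
    {
        typeset swapdev=$1
        if is_linux; then
            log_must swapoff $swapdev
        else
            log_must swap -d $swapdev
        fi
    }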

* zpool_create_016_pos.ksh - Skipped; the test case isn't useful.

* zpool_create_020_pos - Added missing / to cleanup() function.
  Remove cache file prior to test to ensure a clean environment
  and avoid false positives.

* zpool_destroy_001_pos - Removed test case which creates a pool on
  a zvol.  This is more likely to deadlock under Linux and has never
  been completely supported on any platform.

* zpool_destroy_002_pos - 'zpool destroy -f' is unsupported on Linux.
  Mount points must not be busy in order to unmount them.

* zfs_destroy_001_pos - Handle EBUSY error which can occur with
  volumes when racing with udev.
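
  One way to tolerate such a transient EBUSY is a short retry loop;
  the sketch below is illustrative only and borrows the suite's
  $TESTPOOL/$TESTVOL names:

    for i in 1 2 3; do
        zfs destroy $TESTPOOL/$TESTVOL && break
        sleep 1
    done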

* zpool_expand_001_pos, zpool_expand_003_neg - Skip test on Linux
  until adding zvols to pools is fully supported and deadlock free.
  The test could be modified to use loop-back devices but it would
  be preferable to use the test case as is for improved coverage.

* zpool_export_004_pos - Updated test case such that it doesn't
  depend on nested pools.  Normal file vdevs under /var/tmp are fine.

* zpool_import_all_001_pos - Updated to skip partition 1, which is
  known as slice 2, on Illumos.  This prevents overwriting the
  default TESTPOOL which was causing the failure.

* zpool_import_002_pos, zpool_import_012_pos - No changes needed.

* zpool_remove_003_pos - No changes needed.

* zpool_upgrade_002_pos, zpool_upgrade_004_pos - Root cause addressed
  by upstream OpenZFS commit 3b7f360.

* zpool_upgrade_007_pos - Disabled in test case due to known failure.
  Opened issue https://github.com/zfsonlinux/zfs/issues/6112

* zvol_misc_002_pos - Updated to use ext2.

* zvol_misc_001_neg, zvol_misc_003_neg, zvol_misc_004_pos,
  zvol_misc_005_neg, zvol_misc_006_pos - Moved to the skip list; these
  test cases could be updated to use Linux's crash dump facility.

* zvol_swap_* - Updated to use swap_setup/swap_cleanup helpers.
  File creation switched from /tmp to /var/tmp.  Enabled the minimal
  useful tests for Linux; test cases which aren't applicable are skipped.

Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: loli10K <ezomori.nozomu@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #3484
Issue #5634
Issue #2437
Issue #5202
Issue #4034
Closes #6095

ZFS Test Suite README

  1. Building and installing the ZFS Test Suite

The ZFS Test Suite runs under the test-runner framework. This framework is built alongside the standard ZFS utilities and is included as part of the zfs-test package. The zfs-test package can be built from source as follows:

$ ./configure
$ make pkg-utils

The resulting packages can be installed using the rpm or dpkg command as appropriate for your distribution. Alternately, if you have installed ZFS from a distribution's repository (not from source), the zfs-test package may be provided for your distribution.

- Installed from source
$ rpm -ivh ./zfs-test*.rpm, or
$ dpkg -i ./zfs-test*.deb

- Installed from package repository
$ yum install zfs-test
$ apt-get install zfs-test

  2. Running the ZFS Test Suite

The pre-requisites for running the ZFS Test Suite are:

  • Three scratch disks
    • Specify the disks you wish to use in the $DISKS variable, as a space delimited list like this: DISKS='vdb vdc vdd' (see the example below this list). By default the zfs-tests.sh script will construct three loopback devices to be used for testing: DISKS='loop0 loop1 loop2'.
  • A non-root user with a full set of basic privileges and the ability to sudo(8) to root without a password to run the test.
  • Specify any pools you wish to preserve as a space delimited list in the $KEEP variable. All pools detected at the start of testing are added automatically.
  • The ZFS Test Suite will add users and groups to the test machine to verify functionality. Therefore it is strongly advised that a dedicated test machine, which can be a VM, be used for testing.
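
For example, the scratch disks and any pools to preserve can be specified by exporting the variables before launching the suite (the disk and pool names below are purely illustrative):

$ export DISKS='vdb vdc vdd'
$ export KEEP='tank'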

Once the pre-requisites are satisfied simply run the zfs-tests.sh script:

$ /usr/share/zfs/zfs-tests.sh

Alternately, the zfs-tests.sh script can be run from the source tree to allow developers to rapidly validate their work. In this mode the ZFS utilities and modules from the source tree will be used (rather than those installed on the system). In order to avoid certain types of failures you will need to ensure the ZFS udev rules are installed. This can be done manually or by ensuring some version of ZFS is installed on the system.

$ ./scripts/zfs-tests.sh

The following zfs-tests.sh options are supported:

-v          Verbose zfs-tests.sh output.  When specified, additional
            information describing the test environment will be logged
            prior to invoking test-runner.  This includes the runfile
            being used, the DISKS targeted, pools to keep, etc.

-q          Quiet test-runner output.  When specified it is passed to
            test-runner(1) which causes output to be written to the
            console only for tests that do not pass and the results
            summary.

-x          Remove all testpools, dm, lo, and files (unsafe).  When
            specified the script will attempt to remove any leftover
            configuration from a previous test run.  This includes
            destroying any pools named testpool, unused DM devices,
            and loopback devices backed by file-vdevs.  This operation
            can be DANGEROUS because it is possible that the script
            will mistakenly remove a resource not related to the testing.

-k          Disable cleanup after test failure.  When specified the
            zfs-tests.sh script will not perform any additional cleanup
            when test-runner exits.  This is useful when the results of
            a specific test need to be preserved for further analysis.

-f          Use sparse files directly instead of loopback devices for
            the testing.  When running in this mode certain tests which
            depend on real block devices will be skipped.

-d DIR      Create sparse files for vdevs in the DIR directory.  By
            default these files are created under /var/tmp/.

-s SIZE     Use vdevs of SIZE (default: 2G)

-r RUNFILE  Run tests in RUNFILE (default: linux.run)
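
For example (the directory, size, and runfile below are arbitrary), a run which first removes any leftover test resources, preserves state on failure, and uses 4G sparse files under /mnt could be invoked as:

$ /usr/share/zfs/zfs-tests.sh -x -k -d /mnt -s 4G -r linux.run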

The ZFS Test Suite allows the user to specify a subset of the tests via a runfile. The format of the runfile is explained in test-runner(1), and the files that zfs-tests.sh uses are available for reference under /usr/share/zfs/runfiles. To specify a custom runfile, use the -r option:

$ /usr/share/zfs/zfs-tests.sh -r my_tests.run
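
A minimal custom runfile might look like the following sketch. The exact keys and their defaults are described in test-runner(1) and can be compared against the shipped runfiles under /usr/share/zfs/runfiles; the test group listed here is only an example:

[DEFAULT]
pre = setup
post = cleanup
user = root
timeout = 600
outputdir = /var/tmp/test_results

[tests/functional/cli_root/zpool_add]
tests = ['zpool_add_001_pos', 'zpool_add_002_pos']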

  3. Test results

While the ZFS Test Suite is running, one informational line is printed at the end of each test, and a results summary is printed at the end of the run. The results summary includes the location of the complete logs, which is of the form /var/tmp/test_results/[ISO 8601 date]. A normal test run launched with the zfs-tests.sh wrapper script will look something like this:

$ /usr/share/zfs/zfs-tests.sh -v -d /mnt

--- Configuration ---
Runfile:         /usr/share/zfs/runfiles/linux.run
STF_TOOLS:       /usr/share/zfs/test-runner
STF_SUITE:       /usr/share/zfs/zfs-tests
FILEDIR:         /mnt
FILES:           /mnt/file-vdev0 /mnt/file-vdev1 /mnt/file-vdev2
LOOPBACKS:       /dev/loop0 /dev/loop1 /dev/loop2
DISKS:           loop0 loop1 loop2
NUM_DISKS:       3
FILESIZE:        2G
Keep pool(s):    rpool

/usr/share/zfs/test-runner/bin/test-runner.py -c /usr/share/zfs/runfiles/linux.run -i /usr/share/zfs/zfs-tests
Test: .../tests/functional/acl/posix/setup (run as root) [00:00] [PASS]
...470 additional tests...
Test: .../tests/functional/zvol/zvol_cli/cleanup (run as root) [00:00] [PASS]

Results Summary
PASS     472

Running Time:   00:45:09
Percent passed: 100.0%
Log directory:  /var/tmp/test_results/20160316T181651