mirror_zfs/tests/zfs-tests/include
loli10K a8fa31b50b Fix 'zpool add' handling of nested interior VDEVs
When replacing a faulted device which was previously handled by a
spare, multiple levels of nested interior VDEVs will be present in
the pool configuration. The following example illustrates one of the
possible situations (a reproduction sketch follows the listing):

   NAME                          STATE     READ WRITE CKSUM
   testpool                      DEGRADED     0     0     0
     raidz1-0                    DEGRADED     0     0     0
       spare-0                   DEGRADED     0     0     0
         replacing-0             DEGRADED     0     0     0
           /var/tmp/fault-dev    UNAVAIL      0     0     0  cannot open
           /var/tmp/replace-dev  ONLINE       0     0     0
         /var/tmp/spare-dev1     ONLINE       0     0     0
       /var/tmp/safe-dev         ONLINE       0     0     0
   spares
     /var/tmp/spare-dev1         INUSE     currently in use
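
A pool can be driven into this state with file-backed VDEVs. The
following is a minimal reproduction sketch, not taken from the test
suite: the paths and the 64M size are arbitrary, 'zpool offline -f'
is only one of several ways to fault a device, and the faulted leg
may show FAULTED rather than UNAVAIL depending on how it failed.

   # Back the pool with sparse files.
   truncate -s 64m /var/tmp/fault-dev /var/tmp/safe-dev \
       /var/tmp/spare-dev1 /var/tmp/replace-dev

   # Create a raidz1 pool with one hot spare.
   zpool create testpool raidz1 /var/tmp/fault-dev /var/tmp/safe-dev \
       spare /var/tmp/spare-dev1

   # Fault one leg, then activate the hot spare: this creates the
   # interior spare-0 VDEV and marks the spare INUSE.
   zpool offline -f testpool /var/tmp/fault-dev
   zpool replace testpool /var/tmp/fault-dev /var/tmp/spare-dev1

   # Replacing the faulted device while the spare is active nests
   # replacing-0 inside spare-0, matching the listing above.
   zpool replace testpool /var/tmp/fault-dev /var/tmp/replace-dev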

This is safe and allowed, but get_replication() needs to handle this
situation gracefully so that 'zpool add' can still add new devices to
the pool.
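
With the fix in place, adding new devices to a pool in this state is
expected to succeed. A hypothetical continuation of the sketch above
(/var/tmp/spare-dev2 is an invented path):

   # get_replication() now copes with the nested spare-0/replacing-0
   # VDEVs when verifying the pool's replication level.
   truncate -s 64m /var/tmp/spare-dev2
   zpool add testpool spare /var/tmp/spare-dev2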

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #6678
Closes #6996
2018-01-30 10:27:31 -06:00
.gitignore          Add zpool events tests                             2017-05-22 12:34:42 -04:00
commands.cfg        Fix volume WR_INDIRECT log replay (#6620)          2017-09-13 16:04:16 -07:00
default.cfg.in      Emit history events for 'zpool create'             2017-12-04 17:21:03 -08:00
libtest.shlib       Fix 'zpool add' handling of nested interior VDEVs  2018-01-30 10:27:31 -06:00
Makefile.am         Fix bug in distclean which removes needed files    2018-01-30 10:27:30 -06:00
math.shlib          Fix truncate(2) mtime and ctime handling           2017-11-21 13:11:29 -06:00
properties.shlib    Disable nbmand tests on kernels w/o support        2017-07-24 11:03:50 -07:00
zpool_script.shlib  Prebaked scripts for zpool status/iostat -c        2017-04-21 09:27:04 -07:00