mirror of
https://git.proxmox.com/git/mirror_zfs.git
synced 2024-11-17 10:01:01 +03:00
330c6c0523
The RAIDZ and DRAID code is responsible for reporting checksum errors on their child vdevs. Checksum errors represent events where a disk returned data or parity that should have been correct, but was not. In other words, these are instances of silent data corruption. The checksum errors show up in the vdev stats (and thus `zpool status`'s CKSUM column), and in the event log (`zpool events`).

Note, this is in contrast with the more common "noisy" errors where a disk goes offline, in which case ZFS knows that the disk is bad and doesn't try to read it, or the device returns an error on the requested read or write operation.

RAIDZ/DRAID generate checksum errors via three code paths:

1. When RAIDZ/DRAID reconstructs a damaged block, checksum errors are reported on any children whose data was not used during the reconstruction. This is handled in `raidz_reconstruct()`. This is the most common type of RAIDZ/DRAID checksum error.

2. When RAIDZ/DRAID is not able to reconstruct a damaged block, that means that the data has been lost. The zio fails and an error is returned to the consumer (e.g. the read(2) system call). This would happen if, for example, three different disks in a RAIDZ2 group are silently damaged. Since the damage is silent, it isn't possible to know which three disks are damaged, so a checksum error is reported against every child that returned data or parity for this read. (For DRAID, typically only one "group" of children is involved in each io.) This case is handled in `vdev_raidz_cksum_finish()`. This is the next most common type of RAIDZ/DRAID checksum error.

3. If RAIDZ/DRAID is not able to reconstruct a damaged block (like in case 2), but there happen to be additional copies of this block due to "ditto blocks" (i.e. multiple DVA's in this blkptr_t), and one of those copies is good, then RAIDZ/DRAID compares each sector of the data or parity that it retrieved with the good data from the other DVA, and if they differ then it reports a checksum error on this child. This differs from case 2 in that the checksum error is reported on only the subset of children that actually have bad data or parity. This case happens very rarely, since normally only metadata has ditto blocks. If the silent damage is extensive, there will be many instances of case 2, and the pool will likely be unrecoverable.

The code for handling case 3 is considerably more complicated than the other cases, for two reasons:

1. It needs to run after the main raidz read logic has completed. The data read by RAIDZ needs to be preserved until after the alternate DVA has been read, which necessitates refcounts and callbacks managed by the non-raidz-specific zio layer.

2. It's nontrivial to map the sections of data read by RAIDZ to the correct data. For example, the correct data does not include the parity information, so the parity must be recalculated based on the correct data, and then compared to the parity that was read from the RAIDZ children.

Due to the complexity of case 3, the rareness of hitting it, and the minimal benefit it provides above case 2, this commit removes the code for case 3. These types of errors will now be handled the same as case 2, i.e. the checksum error will be reported against all children that returned data or parity.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Closes #11735