.\"
.\" This file and its contents are supplied under the terms of the
.\" Common Development and Distribution License ("CDDL"), version 1.0.
.\" You may only use this file in accordance with the terms of version
.\" 1.0 of the CDDL.
.\"
.\" A full copy of the text of the CDDL should have accompanied this
.\" source. A copy of the CDDL is also available via the Internet at
.\" http://www.illumos.org/license/CDDL.
.\"
.\" Copyright 2012, Richard Lowe.
.\" Copyright (c) 2012, 2019 by Delphix. All rights reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Lawrence Livermore National Security, LLC.
.\" Copyright (c) 2017 Intel Corporation.
.\"
.Dd November 18, 2023
.Dt ZDB 8
.Os
.
.Sh NAME
.Nm zdb
.Nd display ZFS storage pool debugging and consistency information
.Sh SYNOPSIS
.Nm
.Op Fl AbcdDFGhikLMNPsTvXYy
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl I Ar inflight-I/O-ops
.Oo Fl o Ar var Ns = Ns Ar value Oc Ns
.Op Fl t Ar txg
.Op Fl U Ar cache
.Op Fl x Ar dumpdir
.Op Fl K Ar key
.Op Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID
.Op Ar object Ns | Ns Ar range Ns
.Nm
.Op Fl AdiPv
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl U Ar cache
.Op Fl K Ar key
.Ar poolname Ns Op Ar / Ns Ar dataset Ns | Ns Ar objset-ID
.Op Ar object Ns | Ns Ar range Ns
.Nm
.Fl B
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl U Ar cache
.Op Fl K Ar key
.Ar poolname Ns Ar / Ns Ar objset-ID
.Op Ar backup-flags
.Nm
.Fl C
.Op Fl A
.Op Fl U Ar cache
.Op Ar poolname
.Nm
.Fl E
.Op Fl A
.Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15
.Nm
.Fl l
.Op Fl Aqu
.Ar device
.Nm
.Fl m
.Op Fl AFLPXY
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl t Ar txg
.Op Fl U Ar cache
.Ar poolname Op Ar vdev Oo Ar metaslab Oc Ns
.Nm
.Fl O
.Op Fl K Ar key
.Ar dataset path
.Nm
.Fl r
.Op Fl K Ar key
.Ar dataset path destination
.Nm
.Fl R
.Op Fl A
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl U Ar cache
.Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags
.Nm
.Fl S
.Op Fl AP
.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns
.Op Fl U Ar cache
.Ar poolname
.
.Sh DESCRIPTION
The
.Nm
utility displays information about a ZFS pool useful for debugging and performs
some amount of consistency checking.
It is not a general purpose tool and options
.Pq and facilities
may change.
It is not a
.Xr fsck 8
utility.
.Pp
The output of this command in general reflects the on-disk structure of a ZFS
pool, and is inherently unstable.
The precise output of most invocations is not documented; a knowledge of ZFS
internals is assumed.
.Pp
If the
.Ar dataset
argument does not contain any
.Qq Sy /
or
.Qq Sy @
characters, it is interpreted as a pool name.
The root dataset can be specified as
.Qq Ar pool Ns / .
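.Pp
For example, assuming a hypothetical pool named
.Ar tank ,
the pool itself and its root dataset can be examined as:
.Bd -literal
# zdb -d tank
# zdb -d tank/
.Ed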
.Pp
.Nm
is an
.Qq offline
tool; it accesses the block devices underneath the pools directly from
userspace and does not care if the pool is imported or datasets are mounted
(or even if the system understands ZFS at all).
When operating on an imported and active pool it is possible, though unlikely,
that zdb may interpret inconsistent pool data and behave erratically.
.
.Sh OPTIONS
Display options:
.Bl -tag -width Ds
.It Fl b , -block-stats
Display statistics regarding the number, size
.Pq logical, physical and allocated
and deduplication of blocks.
.It Fl B , -backup
Generate a backup stream, similar to
.Nm zfs Cm send ,
but for the numeric objset ID, and without opening the dataset.
This can be useful in recovery scenarios if dataset metadata has become
corrupted but the dataset itself is readable.
The optional
.Ar flags
argument is a string of one or more of the letters
.Sy e ,
.Sy L ,
.Sy c ,
and
.Sy w ,
which correspond to the same flags in
.Xr zfs-send 8 .
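.Pp
For example, the following sketch generates a backup stream for a hypothetical
objset ID in pool
.Ar tank
and feeds it to
.Xr zfs-receive 8 :
.Bd -literal
# zdb -B tank/257 ec | zfs receive tank/recovered
.Ed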
.It Fl c , -checksum
Verify the checksum of all metadata blocks while printing block statistics
.Po see
.Fl b
.Pc .
.Pp
If specified multiple times, verify the checksums of all blocks.
.It Fl C , -config
Display information about the configuration.
If specified with no other options, instead display information about the cache
file
.Pq Pa /etc/zfs/zpool.cache .
To specify the cache file to display, see
.Fl U .
.Pp
If specified multiple times, and a pool name is also specified, display both
the cached configuration and the on-disk configuration.
If specified multiple times with
.Fl e
also display the configuration that would be used were the pool to be imported.
.It Fl d , -datasets
Display information about datasets.
Specified once, displays basic dataset information: ID, create transaction,
size, and object count.
See
.Fl N
for determining if
.Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID
is to use the specified
.Ar dataset Ns | Ns Ar objset-ID
as a string (dataset name) or a number (objset ID) when
datasets have numeric names.
.Pp
If specified multiple times, provides greater and greater verbosity.
.Pp
If object IDs or object ID ranges are specified, display information about
those specific objects or ranges only.
.Pp
An object ID range is specified in terms of a colon-separated tuple of
the form
.Ao start Ac : Ns Ao end Ac Ns Op : Ns Ao flags Ac .
The fields
.Ar start
and
.Ar end
are integer object identifiers that denote the lower and upper bounds
of the range, respectively.
An
.Ar end
value of -1 specifies a range with no upper bound.
The
.Ar flags
field optionally specifies a set of flags, described below, that control
which object types are dumped.
By default, all object types are dumped.
A minus sign
.Pq -
negates the effect of the flag that follows it and has no effect unless
preceded by the
.Ar A
flag.
For example, the range 0:-1:A-d will dump all object types except for
directories.
.Pp
.Bl -tag -compact -width Ds
.It Sy A
Dump all objects (this is the default)
.It Sy d
Dump ZFS directory objects
.It Sy f
Dump ZFS plain file objects
.It Sy m
Dump SPA space map objects
.It Sy z
Dump ZAP objects
.It Sy -
Negate the effect of next flag
.El
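.Pp
For example, the following invocations dump objects 0 through 200 of a
hypothetical pool
.Ar tank ,
and all object types except directories, respectively:
.Bd -literal
# zdb -dd tank 0:200
# zdb -dd tank 0:-1:A-d
.Ed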
.It Fl D , -dedup-stats
Display deduplication statistics, including the deduplication ratio
.Pq Sy dedup ,
compression ratio
.Pq Sy compress ,
inflation due to the zfs copies property
.Pq Sy copies ,
and an overall effective ratio
.Pq Sy dedup No \(mu Sy compress No / Sy copies .
.It Fl DD
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl DDD
Display the statistics independently for each deduplication table.
.It Fl DDDD
Dump the contents of the deduplication tables describing duplicate blocks.
.It Fl DDDDD
Also dump the contents of the deduplication tables describing unique blocks.
.It Fl E , -embedded-block-pointer Ns = Ns Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15
Decode and display block from an embedded block pointer specified by the
.Ar word
arguments.
.It Fl h , -history
Display pool history similar to
.Nm zpool Cm history ,
but include internal changes, transaction, and dataset information.
.It Fl i , -intent-logs
Display information about intent log
.Pq ZIL
entries relating to each dataset.
If specified multiple times, display counts of each intent log transaction type.
.It Fl k , -checkpointed-state
Examine the checkpointed state of the pool.
Note that the on-disk format of the pool is not reverted to the checkpointed
state.
.It Fl l , -label Ns = Ns Ar device
Read the vdev labels and L2ARC header from the specified device.
.Nm Fl l
will return 0 if a valid label was found, 1 if an error occurred, and 2 if no
valid labels were found.
The presence of an L2ARC header is indicated by a specific
sequence (L2ARC_DEV_HDR_MAGIC).
If there is an accounting error in the size or the number of L2ARC log blocks,
.Nm Fl l
will return 1.
Each unique configuration is displayed only once.
.It Fl ll Ar device
In addition display label space usage stats.
If a valid L2ARC header was found
also display the properties of log blocks used for restoring L2ARC contents
(persistent L2ARC).
.It Fl lll Ar device
Display every configuration, unique or not.
If a valid L2ARC header was found
also display the properties of log entries in log blocks used for restoring
L2ARC contents (persistent L2ARC).
.Pp
If the
.Fl q
option is also specified, don't print the labels or the L2ARC header.
.Pp
If the
.Fl u
option is also specified, also display the uberblocks on this device.
Specify multiple times to increase verbosity.
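.Pp
For example, to display the labels, the uberblocks, and any L2ARC header of a
hypothetical device:
.Bd -literal
# zdb -l -u /dev/sda1
.Ed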
.It Fl L , -disable-leak-tracking
Disable leak detection and the loading of space maps.
By default,
.Nm
verifies that all non-free blocks are referenced, which can be very expensive.
.It Fl m , -metaslabs
Display the offset, spacemap, and free space of each metaslab, all the log
spacemaps, and their obsolete entry statistics.
.It Fl mm
Also display information about the on-disk free space histogram associated with
each metaslab.
.It Fl mmm
Display the maximum contiguous free space, the in-core free space histogram, and
the percentage of free space in each space map.
.It Fl mmmm
Display every spacemap record.
.It Fl M , -metaslab-groups
Display all "normal" vdev metaslab group information - per-vdev metaslab count,
fragmentation,
and free space histogram, as well as overall pool fragmentation and histogram.
.It Fl MM
"Special" vdevs are added to -M's normal output.
Also display information about the maximum contiguous free space and the
percentage of free space in each space map.
.It Fl MMM
Display every spacemap record.
.It Fl N
Same as
.Fl d
but force zdb to interpret the
.Op Ar dataset Ns | Ns Ar objset-ID
in
.Op Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID
as a numeric objset ID.
.It Fl O , -object-lookups Ns = Ns Ar dataset path
Look up the specified
.Ar path
inside of the
.Ar dataset
and display its metadata and indirect blocks.
Specified
.Ar path
must be relative to the root of
.Ar dataset .
This option can be combined with
.Fl v
for increasing verbosity.
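.Pp
For example, assuming a hypothetical dataset
.Ar tank/home
containing a file
.Pa dir/file.txt :
.Bd -literal
# zdb -O tank/home dir/file.txt
.Ed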
.It Fl r , -copy-object Ns = Ns Ar dataset path destination
Copy the specified
.Ar path
inside of the
.Ar dataset
to the specified destination.
Specified
.Ar path
must be relative to the root of
.Ar dataset .
This option can be combined with
.Fl v
for increasing verbosity.
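.Pp
For example, a sketch of recovering the same hypothetical file into
.Pa /tmp/file.txt :
.Bd -literal
# zdb -r tank/home dir/file.txt /tmp/file.txt
.Ed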
.It Xo
.Fl R , -read-block Ns = Ns Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags
.Xc
Read and display a block from the specified device.
By default the block is displayed as a hex dump, but see the description of the
.Sy r
flag, below.
.Pp
The block is specified in terms of a colon-separated tuple
.Ar vdev
.Pq an integer vdev identifier
.Ar offset
.Pq the offset within the vdev
.Ar size
.Pq the physical size, or logical size / physical size
of the block to read and, optionally,
.Ar flags
.Pq a set of flags, described below .
.Pp
.Bl -tag -compact -width "b offset"
.It Sy b Ar offset
Print block pointer at hex offset
.It Sy c
Calculate and display checksums
.It Sy d
Decompress the block.
Set environment variable
.Nm ZDB_NO_ZLE
to skip zle when guessing.
.It Sy e
Byte swap the block
.It Sy g
Dump gang block header
.It Sy i
Dump indirect block
.It Sy r
Dump raw uninterpreted block data
.It Sy v
Verbose output for guessing compression algorithm
.El
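.Pp
For example, the following sketch reads a compressed block from vdev 0 of a
hypothetical pool
.Ar tank
and decompresses it before display; the offset and size values are purely
illustrative:
.Bd -literal
# zdb -R tank 0:2000000:20000:d
.Ed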
.It Fl s , -io-stats
Report statistics on
.Nm zdb
I/O.
Display operation counts, bandwidth, and error counts of I/O to the pool from
.Nm .
.It Fl S , -simulate-dedup
Simulate the effects of deduplication, constructing a DDT and then displaying
that DDT as with
.Fl DD .
.It Fl T , -brt-stats
Display block reference table (BRT) statistics, including the size of unique
blocks cloned, the space saving as a result of cloning, and the saving ratio.
.It Fl TT
Display the per-vdev BRT statistics, including total references.
.It Fl TTT
Dump the contents of the block reference tables.
.It Fl u , -uberblock
Display the current uberblock.
.El
.Pp
Other options:
.Bl -tag -width Ds
.It Fl A , -ignore-assertions
Do not abort should any assertion fail.
.It Fl AA
Enable panic recovery; certain errors which would otherwise be fatal are
demoted to warnings.
.It Fl AAA
Do not abort if asserts fail and also enable panic recovery.
.It Fl e , -exported Ns = Ns Oo Fl p Ar path Oc Ns
Operate on an exported pool, not present in
.Pa /etc/zfs/zpool.cache .
The
.Fl p
flag specifies the path under which devices are to be searched.
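.Pp
For example, to inspect an exported pool whose devices reside under a
hypothetical directory:
.Bd -literal
# zdb -e -p /dev/disk/by-id tank
.Ed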
.It Fl x , -dump-blocks Ns = Ns Ar dumpdir
All blocks accessed will be copied to files in the specified directory.
The blocks will be placed in sparse files whose name is the same as
that of the file or device read.
.Nm
can then be run on the generated files.
Note that the
.Fl bbc
flags are sufficient to access
.Pq and thus copy
all metadata on the pool.
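.Pp
For example, a sketch of copying all pool metadata into files under a
hypothetical directory for later offline inspection:
.Bd -literal
# zdb -bbc -x /tmp/zdb-dump tank
.Ed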
.It Fl F , -automatic-rewind
Attempt to make an unreadable pool readable by trying progressively older
transactions.
.It Fl G , -dump-debug-msg
Dump the contents of the zfs_dbgmsg buffer before exiting
.Nm .
zfs_dbgmsg is a buffer used by ZFS to dump advanced debug information.
.It Fl I , -inflight Ns = Ns Ar inflight-I/O-ops
Limit the number of outstanding checksum I/O operations to the specified value.
The default value is 200.
This option affects the performance of the
.Fl c
option.
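.Pp
For example, to raise the limit while verifying checksums on a hypothetical
pool:
.Bd -literal
# zdb -c -I 1000 tank
.Ed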
.It Fl K , -key Ns = Ns Ar key
Decryption key needed to access an encrypted dataset.
This will cause
.Nm
to attempt to unlock the dataset using the encryption root, key format and other
encryption parameters on the given dataset.
.Nm
can still inspect pool and dataset structures on encrypted datasets without
unlocking them, but will not be able to access file names, attributes, or
object contents.
\fBWARNING:\fP The raw decryption key and any decrypted data
will be in user memory while
.Nm
is running.
Other user programs may be able to extract it by inspecting
.Nm
as it runs.
Exercise extreme caution when using this option in shared or uncontrolled
environments.
.It Fl o , -option Ns = Ns Ar var Ns = Ns Ar value Ns
Set the given global libzpool variable to the provided value.
The value must be an unsigned 32-bit integer.
Currently only little-endian systems are supported to avoid accidentally setting
the high 32 bits of 64-bit variables.
.It Fl P , -parseable
Print numbers in an unscaled form more amenable to parsing, e.g.\&
.Sy 1000000
rather than
.Sy 1M .
.It Fl t , -txg Ns = Ns Ar transaction
Specify the highest transaction to use when searching for uberblocks.
See also the
.Fl u
and
.Fl l
options for a means to see the available uberblocks and their associated
transaction numbers.
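.Pp
For example, a sketch that first lists the available uberblocks on one of a
pool's devices and then examines the exported pool as of a hypothetical
transaction:
.Bd -literal
# zdb -lu /dev/sda1
# zdb -e -t 543210 tank
.Ed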
.It Fl U , -cachefile Ns = Ns Ar cachefile
Use a cache file other than
.Pa /etc/zfs/zpool.cache .
.It Fl v , -verbose
Enable verbosity.
Specify multiple times for increased verbosity.
.It Fl V , -verbatim
Attempt verbatim import.
This mimics the behavior of the kernel when loading a pool from a cachefile.
Only usable with
.Fl e .
.It Fl X , -extreme-rewind
Attempt
.Qq extreme
transaction rewind; that is, attempt the same recovery as
.Fl F
but read transactions otherwise deemed too old.
.It Fl Y , -all-reconstruction
Attempt all possible combinations when reconstructing indirect split blocks.
This flag disables the individual I/O deadman timer in order to allow as
much time as required for the attempted reconstruction.
.It Fl y , -livelist
Perform validation for livelists that are being deleted.
Scan through the livelists and metaslabs, checking for duplicate entries,
and compare the two, checking for potential double frees.
If it encounters issues, warnings will be printed, but the command will not
necessarily fail.
.El
.Pp
Specifying a display option more than once enables verbosity for only that
option, with more occurrences enabling more verbosity.
.Pp
If no options are specified, all information about the named pool will be
displayed at default verbosity.
.
.Sh EXAMPLES
.Ss Example 1 : No Display the configuration of imported pool Ar rpool
.Bd -literal
.No # Nm zdb Fl C Ar rpool
MOS Configuration:
version: 28
name: 'rpool'
.Ed
.
.Ss Example 2 : No Display basic dataset information about Ar rpool
.Bd -literal
.No # Nm zdb Fl d Ar rpool
Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects
Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects
.Ed
.
.Ss Example 3 : No Display basic information about object 0 in Ar rpool/export/home
.Bd -literal
.No # Nm zdb Fl d Ar rpool/export/home 0
Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 15.0K 16K 25.00 DMU dnode
.Ed
.
.Ss Example 4 : No Display the predicted effect of enabling deduplication on Ar rpool
.Bd -literal
.No # Nm zdb Fl S Ar rpool
Simulated DDT histogram:
bucket allocated referenced
______ ______________________________ ______________________________
refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
------ ------ ----- ----- ----- ------ ----- ----- -----
1 694K 27.1G 15.0G 15.0G 694K 27.1G 15.0G 15.0G
2 35.0K 1.33G 699M 699M 74.7K 2.79G 1.45G 1.45G
dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00
.Ed
.
.Sh SEE ALSO
.Xr zfs 8 ,
.Xr zpool 8