.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2024, Klara Inc.
.\"
.Dd February 28, 2024
.Dt ZPOOL-EVENTS 8
.Os
.
.Sh NAME
.Nm zpool-events
.Nd list recent events generated by the kernel
.Sh SYNOPSIS
.Nm zpool
.Cm events
.Op Fl vHf
.Op Ar pool
.Nm zpool
.Cm events
.Fl c
.
.Sh DESCRIPTION
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
For more information about the subclasses and event payloads
that can be generated, see
.Sx EVENTS
and the following sections.
.
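.Pp
For example, recent events for a single pool can be reviewed, followed as they
arrive, or cleared with the commands below (the pool name
.Ar tank
is only a placeholder):
.Bd -literal -compact
# zpool events tank       # list recent events for the pool
# zpool events -v tank    # include the full payload of each event
# zpool events -f         # follow mode: print new events as they occur
# zpool events -c         # clear all previous events
.Ed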
.Sh OPTIONS
.Bl -tag -compact -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a
single tab instead of arbitrary space.
See the example following this list.
.It Fl v
Print the entire payload for each event.
.El
.
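.Pp
As an example of scripted mode, the tab-separated output of
.Fl H
can be fed to standard text tools.
The sketch below, which assumes the event class is the last tab-separated
field, counts recent events by class:
.Bd -literal -compact
# zpool events -H | awk -F'\et' '{ print $NF }' | sort | uniq -c
.Ed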
.Sh EVENTS
These are the different event subclasses.
The full event name would be
.Sy ereport.fs.zfs.\& Ns Em SUBCLASS ,
but only the last part is listed here.
.Pp
.Bl -tag -compact -width "vdev.bad_guid_sum"
.It Sy checksum
Issued when a checksum error has been detected.
.It Sy io
Issued when there is an I/O error in a vdev in the pool.
.It Sy data
Issued when there have been data errors in the pool.
.It Sy deadman
Issued when an I/O request is determined to be "hung".
This can be caused by lost completion events due to flaky hardware or drivers.
See
.Sy zfs_deadman_failmode
in
.Xr zfs 4
for additional information regarding "hung" I/O detection and configuration.
.It Sy delay
Issued when a completed I/O request exceeds the maximum allowed time
specified by the
.Sy zio_slow_io_ms
module parameter.
This can be an indicator of problems with the underlying storage device.
The number of delay events is rate-limited by the
.Sy zfs_slow_io_events_per_second
module parameter.
.It Sy dio_verify
Issued when there was a checksum verification error after a Direct I/O write
has been issued.
This event can only take place if the module parameter
.Sy zfs_vdev_direct_write_verify
is not set to zero.
See
.Xr zfs 4
for more details on the
.Sy zfs_vdev_direct_write_verify
module parameter.
.It Sy config
Issued every time a vdev change has been made to the pool.
.It Sy zpool
Issued when a pool cannot be imported.
.It Sy zpool.destroy
Issued when a pool is destroyed.
.It Sy zpool.export
Issued when a pool is exported.
.It Sy zpool.import
Issued when a pool is imported.
.It Sy zpool.reguid
Issued when a new unique identifier (REGUID) has been generated for the pool.
.It Sy vdev.unknown
Issued when the vdev is unknown, such as when trying to clear device errors
on a vdev that has failed or been removed from the system or pool and is no
longer available.
.It Sy vdev.open_failed
Issued when a vdev could not be opened (for example, because it did not exist).
.It Sy vdev.corrupt_data
Issued when corrupt data has been detected on a vdev.
.It Sy vdev.no_replicas
Issued when there are no more replicas to sustain the pool.
This would lead to the pool being
.Em DEGRADED .
.It Sy vdev.bad_guid_sum
Issued when a missing device in the pool has been detected.
.It Sy vdev.too_small
Issued when the system (kernel) has removed a device, and ZFS
notices that the device is no longer present.
This is usually followed by a
.Sy probe_failure
event.
.It Sy vdev.bad_label
Issued when the vdev label could be read but is invalid.
.It Sy vdev.bad_ashift
Issued when the ashift alignment requirement has increased.
.It Sy vdev.remove
Issued when a vdev is detached from a mirror (or a spare detached from a
vdev where it has been used to replace a failed drive; this only works if
the original drive has been re-added).
.It Sy vdev.clear
Issued when clearing device errors in a pool, for example by running
.Nm zpool Cm clear
on a device in the pool.
.It Sy vdev.check
Issued when a check is started to see if a given vdev can be opened.
.It Sy vdev.spare
Issued when a spare has kicked in to replace a failed device.
.It Sy vdev.autoexpand
Issued when a vdev can be automatically expanded.
.It Sy io_failure
Issued when there is an I/O failure in a vdev in the pool.
.It Sy probe_failure
Issued when a probe fails on a vdev.
This would occur if a vdev
has been removed from the system outside of ZFS (such as when the kernel
has removed the device).
.It Sy log_replay
Issued when the intent log cannot be replayed.
This can occur in the case of a missing or damaged log device.
.It Sy resilver.start
Issued when a resilver is started.
.It Sy resilver.finish
Issued when the running resilver has finished.
.It Sy scrub.start
Issued when a scrub is started on a pool.
.It Sy scrub.finish
Issued when a pool has finished scrubbing.
.It Sy scrub.abort
Issued when a scrub is aborted on a pool.
.It Sy scrub.resume
Issued when a scrub is resumed on a pool.
.It Sy scrub.paused
Issued when a scrub is paused on a pool.
.It Sy bootfs.vdev.attach
.El
.
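.Pp
Because events are listed with their full class name, a single subclass can be
selected with ordinary text tools, for example:
.Bd -literal -compact
# zpool events -f | grep -F ereport.fs.zfs.checksum
.Ed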
.Sh PAYLOADS
This is the payload (data, information) that accompanies an
event.
.Pp
For
.Xr zed 8 ,
these are set to uppercase and prefixed with
.Sy ZEVENT_ .
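.Pp
For example, a minimal
.Xr zed 8
zedlet (a sketch only; the script name and log path are arbitrary) could read
these variables to record each event:
.Bd -literal -compact
#!/bin/sh
# all-log-events.sh - log the subclass, pool and vdev of every event.
# ZEVENT_SUBCLASS, ZEVENT_POOL and ZEVENT_VDEV_PATH are the uppercased,
# ZEVENT_-prefixed forms of the payload entries described below.
msg="${ZEVENT_SUBCLASS}: pool=${ZEVENT_POOL:-n/a} vdev=${ZEVENT_VDEV_PATH:-n/a}"
echo "$msg" >> /var/log/zfs-events.log
.Ed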
.Pp
.Bl -tag -compact -width "vdev_cksum_errors"
.It Sy pool
Pool name.
.It Sy pool_failmode
Failmode -
.Sy wait ,
.Sy continue ,
or
.Sy panic .
See the
.Sy failmode
property in
.Xr zpoolprops 7
for more information.
.It Sy pool_guid
The GUID of the pool.
.It Sy pool_context
The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover,
5=error).
.It Sy vdev_guid
The GUID of the vdev in question (the vdev failing or operated upon with
.Nm zpool Cm clear ,
etc.).
.It Sy vdev_type
Type of vdev -
.Sy disk ,
.Sy file ,
.Sy mirror ,
etc.
See the
.Sy Virtual Devices
section of
.Xr zpoolconcepts 7
for more information on possible values.
.It Sy vdev_path
Full path of the vdev, including any
.Em -partX .
.It Sy vdev_devid
ID of vdev (if any).
.It Sy vdev_fru
Physical FRU location.
.It Sy vdev_state
State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to
open, 5=faulted, 6=degraded, 7=healthy).
.It Sy vdev_ashift
The ashift value of the vdev.
.It Sy vdev_complete_ts
The time the last I/O request completed for the specified vdev.
.It Sy vdev_delta_ts
The time since the last I/O request completed for the specified vdev.
.It Sy vdev_spare_paths
List of spares, including full path and any
.Em -partX .
.It Sy vdev_spare_guids
GUID(s) of spares.
.It Sy vdev_read_errors
The number of read errors detected on the vdev.
.It Sy vdev_write_errors
The number of write errors detected on the vdev.
.It Sy vdev_cksum_errors
The number of checksum errors detected on the vdev.
.It Sy parent_guid
GUID of the vdev parent.
.It Sy parent_type
Type of parent.
See
.Sy vdev_type .
.It Sy parent_path
Path of the vdev parent (if any).
.It Sy parent_devid
ID of the vdev parent (if any).
.It Sy zio_objset
The object set number for a given I/O request.
.It Sy zio_object
The object number for a given I/O request.
.It Sy zio_level
The indirect level for the block.
Level 0 is the lowest level and includes data blocks.
Values > 0 indicate metadata blocks at the appropriate level.
.It Sy zio_blkid
The block ID for a given I/O request.
.It Sy zio_err
The error number for a failure when handling a given I/O request,
compatible with
.Xr errno 3
with the value of
.Sy EBADE
used to indicate a ZFS checksum error.
.It Sy zio_offset
The offset in bytes of where to write the I/O request for the specified vdev.
.It Sy zio_size
The size in bytes of the I/O request.
.It Sy zio_flags
The current flags describing how the I/O request should be handled.
See the
.Sy I/O FLAGS
section for the full list of I/O flags.
.It Sy zio_stage
The current stage of the I/O in the pipeline.
See the
.Sy I/O STAGES
section for a full list of all the I/O stages.
.It Sy zio_pipeline
The valid pipeline stages for the I/O.
See the
.Sy I/O STAGES
section for a full list of all the I/O stages.
.It Sy zio_delay
The time elapsed (in nanoseconds) waiting for the block layer to complete the
I/O request.
Unlike
.Sy zio_delta ,
this does not include any vdev queuing time and is
therefore solely a measure of the block layer performance.
.It Sy zio_timestamp
The time when a given I/O request was submitted.
.It Sy zio_delta
The time required to service a given I/O request.
.It Sy prev_state
The previous state of the vdev.
.It Sy cksum_algorithm
Checksum algorithm used.
See
.Xr zfsprops 7
for more information on the available checksum algorithms.
.It Sy cksum_byteswap
Whether or not the data is byteswapped.
.It Sy bad_ranges
.No [\& Ns Ar start , end )
pairs of corruption offsets.
Offsets are always aligned on a 64-bit boundary,
and can include some gaps of non-corruption.
(See
.Sy bad_ranges_min_gap )
.It Sy bad_ranges_min_gap
In order to bound the size of the
.Sy bad_ranges
array, gaps of non-corruption
less than or equal to
.Sy bad_ranges_min_gap
bytes have been merged with
adjacent corruption.
Always at least 8 bytes, since corruption is detected on a 64-bit word basis.
.It Sy bad_range_sets
This array has one element per range in
.Sy bad_ranges .
Each element contains
the count of bits in that range which were clear in the good data and set
in the bad data.
.It Sy bad_range_clears
This array has one element per range in
.Sy bad_ranges .
Each element contains
the count of bits for that range which were set in the good data and clear in
the bad data.
.It Sy bad_set_bits
If this field exists, it is an array of
.Pq Ar bad data No & ~( Ns Ar good data ) ;
that is, the bits set in the bad data which are cleared in the good data.
Each element corresponds to a byte whose offset is in a range in
.Sy bad_ranges ,
and the array is ordered by offset.
Thus, the first element is the first byte in the first
.Sy bad_ranges
range, and the last element is the last byte in the last
.Sy bad_ranges
range.
.It Sy bad_cleared_bits
Like
.Sy bad_set_bits ,
but contains
.Pq Ar good data No & ~( Ns Ar bad data ) ;
that is, the bits set in the good data which are cleared in the bad data.
A worked example follows this list.
.El
.
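.Pp
As a worked example (using made-up single-byte values), if a corrupted byte
reads 0xcc where the good data is 0xf0, then:
.Bd -literal -compact
bad_set_bits     = bad  & ~good = 0xcc & 0x0f = 0x0c  (2 bits newly set)
bad_cleared_bits = good & ~bad  = 0xf0 & 0x33 = 0x30  (2 bits newly cleared)
.Ed
The corresponding
.Sy bad_range_sets
and
.Sy bad_range_clears
entries for the range containing this byte would count those set and cleared
bits respectively.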
.Sh I/O STAGES
The ZFS I/O pipeline is composed of various stages which are defined below.
The individual stages are used to construct these basic I/O
operations: Read, Write, Free, Claim, Flush and Trim.
These stages may be
set on an event to describe the life cycle of a given I/O request.
.Pp
.TS
tab(:);
l l l .
Stage:Bit Mask:Operations
_:_:_
ZIO_STAGE_OPEN:0x00000001:RWFCXT
ZIO_STAGE_READ_BP_INIT:0x00000002:R-----
ZIO_STAGE_WRITE_BP_INIT:0x00000004:-W----
ZIO_STAGE_FREE_BP_INIT:0x00000008:--F---
ZIO_STAGE_ISSUE_ASYNC:0x00000010:-WF--T
ZIO_STAGE_WRITE_COMPRESS:0x00000020:-W----
ZIO_STAGE_ENCRYPT:0x00000040:-W----
ZIO_STAGE_CHECKSUM_GENERATE:0x00000080:-W----
ZIO_STAGE_NOP_WRITE:0x00000100:-W----
ZIO_STAGE_BRT_FREE:0x00000200:--F---
ZIO_STAGE_DDT_READ_START:0x00000400:R-----
ZIO_STAGE_DDT_READ_DONE:0x00000800:R-----
ZIO_STAGE_DDT_WRITE:0x00001000:-W----
ZIO_STAGE_DDT_FREE:0x00002000:--F---
ZIO_STAGE_GANG_ASSEMBLE:0x00004000:RWFC--
ZIO_STAGE_GANG_ISSUE:0x00008000:RWFC--
ZIO_STAGE_DVA_THROTTLE:0x00010000:-W----
ZIO_STAGE_DVA_ALLOCATE:0x00020000:-W----
ZIO_STAGE_DVA_FREE:0x00040000:--F---
ZIO_STAGE_DVA_CLAIM:0x00080000:---C--
ZIO_STAGE_READY:0x00100000:RWFCXT
ZIO_STAGE_VDEV_IO_START:0x00200000:RW--XT
ZIO_STAGE_VDEV_IO_DONE:0x00400000:RW--XT
ZIO_STAGE_VDEV_IO_ASSESS:0x00800000:RW--XT
ZIO_STAGE_CHECKSUM_VERIFY:0x01000000:R-----
ZIO_STAGE_DIO_CHECKSUM_VERIFY:0x02000000:-W----
ZIO_STAGE_DONE:0x04000000:RWFCXT
.TE
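.Pp
As an illustration, a
.Sy zio_stage
value of 0x00200000 corresponds to ZIO_STAGE_VDEV_IO_START in the table above,
and a
.Sy zio_pipeline
mask is the bitwise OR of every stage that is valid for the request.
Whether a particular stage is part of a mask can be checked with shell
arithmetic (the mask below is only an illustrative value):
.Bd -literal -compact
# [ $(( 0x05f00003 & 0x00200000 )) -ne 0 ] && echo "VDEV_IO_START is valid"
.Ed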
.
.Sh I/O FLAGS
Every I/O request in the pipeline contains a set of flags which describe its
function and are used to govern its behavior.
These flags will be set in an event as a
.Sy zio_flags
payload entry.
.Pp
.TS
tab(:);
l l .
Flag:Bit Mask
_:_
ZIO_FLAG_DONT_AGGREGATE:0x00000001
ZIO_FLAG_IO_REPAIR:0x00000002
ZIO_FLAG_SELF_HEAL:0x00000004
ZIO_FLAG_RESILVER:0x00000008
ZIO_FLAG_SCRUB:0x00000010
ZIO_FLAG_SCAN_THREAD:0x00000020
ZIO_FLAG_PHYSICAL:0x00000040
ZIO_FLAG_CANFAIL:0x00000080
ZIO_FLAG_SPECULATIVE:0x00000100
ZIO_FLAG_CONFIG_WRITER:0x00000200
ZIO_FLAG_DONT_RETRY:0x00000400
ZIO_FLAG_NODATA:0x00001000
ZIO_FLAG_INDUCE_DAMAGE:0x00002000
ZIO_FLAG_IO_ALLOCATING:0x00004000
ZIO_FLAG_IO_RETRY:0x00008000
ZIO_FLAG_PROBE:0x00010000
ZIO_FLAG_TRYHARD:0x00020000
ZIO_FLAG_OPTIONAL:0x00040000
ZIO_FLAG_DONT_QUEUE:0x00080000
ZIO_FLAG_DONT_PROPAGATE:0x00100000
ZIO_FLAG_IO_BYPASS:0x00200000
ZIO_FLAG_IO_REWRITE:0x00400000
ZIO_FLAG_RAW_COMPRESS:0x00800000
ZIO_FLAG_RAW_ENCRYPT:0x01000000
ZIO_FLAG_GANG_CHILD:0x02000000
ZIO_FLAG_DDT_CHILD:0x04000000
ZIO_FLAG_GODFATHER:0x08000000
ZIO_FLAG_NOPWRITE:0x10000000
ZIO_FLAG_REEXECUTED:0x20000000
ZIO_FLAG_DELEGATED:0x40000000
ZIO_FLAG_FASTWRITE:0x80000000
.TE
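.Pp
For example, a
.Sy zio_flags
value of 0x0000c080 (an illustrative value) decodes to ZIO_FLAG_CANFAIL,
ZIO_FLAG_IO_ALLOCATING, and ZIO_FLAG_IO_RETRY,
since 0x0000c080 = 0x00000080 + 0x00004000 + 0x00008000.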
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zed 8 ,
.Xr zpool-wait 8