/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Copyright (c) 2012, 2015 by Delphix. All rights reserved.
 * Copyright (c) 2024, Klara Inc.
 */

#ifndef _ZIO_IMPL_H
#define	_ZIO_IMPL_H

#ifdef __cplusplus
extern "C" {
#endif

/*
 * XXX -- Describe ZFS I/O pipeline here. Fill in as needed.
 *
 * The ZFS I/O pipeline is composed of various stages which are defined
 * in the zio_stage enum below. The individual stages are used to construct
 * these basic I/O operations: Read, Write, Free, Claim, Flush and Trim.
 *
 * I/O operations: (XXX - provide detail for each of the operations)
 *
 * Read:
 * Write:
 * Free:
 * Claim:
 * Flush:
 * Trim:
 *
 * Although the most common pipelines are those used by the basic I/O
 * operations above, there are some helper pipelines (one could consider
 * them sub-pipelines) which are used internally by the ZIO module and are
 * explained below:
 *
 * Interlock Pipeline:
 * The interlock pipeline is the most basic pipeline and is used by all
 * of the I/O operations. The interlock pipeline does not perform any I/O
 * and is used to coordinate the dependencies between I/Os that are being
 * issued (i.e. the parent/child relationship).
 *
 * Vdev child Pipeline:
 * The vdev child pipeline is responsible for performing the physical I/O.
 * It is in this pipeline where the I/Os are queued and possibly cached.
 *
 * In addition to performing I/O, the pipeline is also responsible for
 * data transformations. The transformations performed are based on the
 * specific properties that the user may have selected and modify the
 * behavior of the pipeline. Examples of supported transformations are
 * compression, dedup, and nop writes. Transformations will either modify
 * the data or the pipeline. The list below further describes each of
 * the supported transformations:
 *
 * Compression:
 * ZFS supports five different flavors of compression -- gzip, lzjb, lz4, zle,
 * and zstd. Compression occurs as part of the write pipeline and is
 * performed in the ZIO_STAGE_WRITE_COMPRESS stage.
 *
 * Block cloning:
 * The block cloning functionality introduces the ZIO_STAGE_BRT_FREE stage,
 * which is called during the free pipeline. If the block is referenced in
 * the Block Cloning Table (BRT), we just decrease its reference counter
 * instead of actually freeing the block.
 *
 * Dedup:
 * Dedup reads are handled by the ZIO_STAGE_DDT_READ_START and
 * ZIO_STAGE_DDT_READ_DONE stages. These stages are added to an existing
 * read pipeline if the dedup bit is set on the block pointer.
 * Writing a dedup block is performed by the ZIO_STAGE_DDT_WRITE stage
 * and added to a write pipeline if a user has enabled dedup on that
 * particular dataset.
 *
 * NOP Write:
 * The NOP write feature is performed by the ZIO_STAGE_NOP_WRITE stage
 * and is added to an existing write pipeline if a cryptographically
 * secure checksum (i.e. SHA256) is enabled and compression is turned on.
 * The NOP write stage will compare the checksums of the current data
 * on-disk (level-0 blocks only) and the data that is currently being written.
 * If the checksum values are identical then the pipeline is converted to
 * an interlock pipeline, skipping block allocation and bypassing the
 * physical I/O. The nop write feature can handle writes in either
 * syncing or open context (i.e. zil writes) and as a result is mutually
 * exclusive with dedup.
 *
 * Encryption:
 * Encryption and authentication are handled by the ZIO_STAGE_ENCRYPT stage.
 * This stage determines how the encryption metadata is stored in the bp.
 * Decryption and MAC verification are performed during zio_decrypt() as a
 * transform callback. Encryption is mutually exclusive with nopwrite, because
 * blocks with the same plaintext will be encrypted with different salts and
 * IVs (if dedup is off), and therefore have different ciphertexts. For dedup
 * blocks we deterministically generate the IV and salt by performing an HMAC
 * of the plaintext, which is computationally expensive, but allows us to keep
 * support for encrypted dedup. See the block comment in zio_crypt.c for
 * details.
 */
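The NOP-write conversion described above amounts to a pure bitmask operation: when the checksum of the data being written matches the on-disk checksum, the write pipeline collapses to the interlock stages. A minimal standalone sketch of that idea follows; the `STAGE_*` values and the `nopwrite_convert` helper are illustrative stand-ins, not the header's actual definitions.

```c
#include <stdint.h>

/*
 * Illustrative stage bits only -- these stand in for the real enum
 * values defined in this header; the names and values are examples.
 */
#define	STAGE_READY		(1u << 20)
#define	STAGE_DONE		(1u << 26)
#define	STAGE_DVA_ALLOCATE	(1u << 17)

#define	INTERLOCK_STAGES	(STAGE_READY | STAGE_DONE)

/*
 * Sketch of the NOP-write conversion: if the checksum of the data being
 * written matches the on-disk checksum, reduce the pipeline to the
 * interlock stages, skipping allocation and physical I/O entirely.
 */
static uint32_t
nopwrite_convert(uint32_t pipeline, uint64_t ondisk_cksum, uint64_t new_cksum)
{
	if (ondisk_cksum == new_cksum)
		return (INTERLOCK_STAGES);
	return (pipeline);
}
```

The real decision is of course made on full 256-bit checksums inside the write stage; the sketch only shows why the conversion is cheap once stages are bit flags.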
/*
 * zio pipeline stage definitions
 *
 * NOTE: PLEASE UPDATE THE BITFIELD STRINGS IN zfs_valstr.c IF YOU ADD ANOTHER
 * FLAG.
 */
enum zio_stage {
	ZIO_STAGE_OPEN			= 1 << 0,	/* RWFCXT */

	ZIO_STAGE_READ_BP_INIT		= 1 << 1,	/* R----- */
	ZIO_STAGE_WRITE_BP_INIT		= 1 << 2,	/* -W---- */
	ZIO_STAGE_FREE_BP_INIT		= 1 << 3,	/* --F--- */
	ZIO_STAGE_ISSUE_ASYNC		= 1 << 4,	/* -WF--T */
	ZIO_STAGE_WRITE_COMPRESS	= 1 << 5,	/* -W---- */

	ZIO_STAGE_ENCRYPT		= 1 << 6,	/* -W---- */
	ZIO_STAGE_CHECKSUM_GENERATE	= 1 << 7,	/* -W---- */

	ZIO_STAGE_NOP_WRITE		= 1 << 8,	/* -W---- */

	ZIO_STAGE_BRT_FREE		= 1 << 9,	/* --F--- */

	ZIO_STAGE_DDT_READ_START	= 1 << 10,	/* R----- */
	ZIO_STAGE_DDT_READ_DONE		= 1 << 11,	/* R----- */
	ZIO_STAGE_DDT_WRITE		= 1 << 12,	/* -W---- */
	ZIO_STAGE_DDT_FREE		= 1 << 13,	/* --F--- */

	ZIO_STAGE_GANG_ASSEMBLE		= 1 << 14,	/* RWFC-- */
	ZIO_STAGE_GANG_ISSUE		= 1 << 15,	/* RWFC-- */

	ZIO_STAGE_DVA_THROTTLE		= 1 << 16,	/* -W---- */
	ZIO_STAGE_DVA_ALLOCATE		= 1 << 17,	/* -W---- */
	ZIO_STAGE_DVA_FREE		= 1 << 18,	/* --F--- */
	ZIO_STAGE_DVA_CLAIM		= 1 << 19,	/* ---C-- */

	ZIO_STAGE_READY			= 1 << 20,	/* RWFCXT */

	ZIO_STAGE_VDEV_IO_START		= 1 << 21,	/* RW--XT */
	ZIO_STAGE_VDEV_IO_DONE		= 1 << 22,	/* RW--XT */
	ZIO_STAGE_VDEV_IO_ASSESS	= 1 << 23,	/* RW--XT */

	ZIO_STAGE_CHECKSUM_VERIFY	= 1 << 24,	/* R----- */
	ZIO_STAGE_DIO_CHECKSUM_VERIFY	= 1 << 25,	/* -W---- */

	ZIO_STAGE_DONE			= 1 << 26	/* RWFCXT */
};
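Because each stage above is a distinct bit, a pipeline is simply the OR of its stages, and execution can advance by scanning upward for the next set bit shared by the current position and the pipeline mask. The walk below is only an illustration of that shift-and-test pattern, not the real pipeline dispatcher; the `STAGE_*` names are toy stand-ins for two of the enum values.

```c
#include <stdint.h>

/* Toy stand-ins for two of the stage bits defined in the enum. */
#define	STAGE_READY	(1u << 20)
#define	STAGE_DONE	(1u << 26)

/*
 * Advance past `cur` to the next stage bit present in `pipeline`,
 * returning 0 when no later stage remains. Passing cur == 0 starts
 * the walk from the first possible stage.
 */
static uint32_t
next_stage(uint32_t pipeline, uint32_t cur)
{
	uint32_t stage = (cur == 0) ? 1u : (cur << 1);

	/* Shift upward until we hit a bit that is in the pipeline. */
	while (stage != 0 && (stage & pipeline) == 0)
		stage <<= 1;
	return (stage);
}
```

The unsigned shift past bit 31 yields 0, which is what terminates the walk cleanly at the end of the pipeline.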
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2023-10-26 01:22:25 +03:00
|
|
|
#define ZIO_ROOT_PIPELINE \
|
|
|
|
ZIO_STAGE_DONE
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_INTERLOCK_STAGES \
|
|
|
|
(ZIO_STAGE_READY | \
|
|
|
|
ZIO_STAGE_DONE)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_INTERLOCK_PIPELINE \
|
2008-12-03 23:09:06 +03:00
|
|
|
ZIO_INTERLOCK_STAGES
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_VDEV_IO_STAGES \
|
|
|
|
(ZIO_STAGE_VDEV_IO_START | \
|
|
|
|
ZIO_STAGE_VDEV_IO_DONE | \
|
|
|
|
ZIO_STAGE_VDEV_IO_ASSESS)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_VDEV_CHILD_PIPELINE \
|
|
|
|
(ZIO_VDEV_IO_STAGES | \
|
|
|
|
ZIO_STAGE_DONE)
|
2008-12-03 23:09:06 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_READ_COMMON_STAGES \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_VDEV_IO_STAGES | \
|
|
|
|
ZIO_STAGE_CHECKSUM_VERIFY)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_READ_PHYS_PIPELINE \
|
2008-12-03 23:09:06 +03:00
|
|
|
ZIO_READ_COMMON_STAGES
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_READ_PIPELINE \
|
|
|
|
(ZIO_READ_COMMON_STAGES | \
|
|
|
|
ZIO_STAGE_READ_BP_INIT)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_DDT_CHILD_READ_PIPELINE \
|
|
|
|
ZIO_READ_COMMON_STAGES
|
2008-12-03 23:09:06 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_DDT_READ_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_READ_BP_INIT | \
|
|
|
|
ZIO_STAGE_DDT_READ_START | \
|
|
|
|
ZIO_STAGE_DDT_READ_DONE)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_WRITE_COMMON_STAGES \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_VDEV_IO_STAGES | \
|
|
|
|
ZIO_STAGE_ISSUE_ASYNC | \
|
|
|
|
ZIO_STAGE_CHECKSUM_GENERATE)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_WRITE_PHYS_PIPELINE \
|
|
|
|
ZIO_WRITE_COMMON_STAGES
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_REWRITE_PIPELINE \
|
|
|
|
(ZIO_WRITE_COMMON_STAGES | \
|
2016-10-14 03:59:18 +03:00
|
|
|
ZIO_STAGE_WRITE_COMPRESS | \
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decyrpt, and authenticate protected datasets.
Each object set maintains a Merkel tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 20:36:48 +03:00
|
|
|
ZIO_STAGE_ENCRYPT | \
|
2010-05-29 00:45:14 +04:00
|
|
|
ZIO_STAGE_WRITE_BP_INIT)
|
|
|
|
|
|
|
|
#define ZIO_WRITE_PIPELINE \
|
|
|
|
(ZIO_WRITE_COMMON_STAGES | \
|
|
|
|
ZIO_STAGE_WRITE_BP_INIT | \
|
2016-10-14 03:59:18 +03:00
|
|
|
ZIO_STAGE_WRITE_COMPRESS | \
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decyrpt, and authenticate protected datasets.
Each object set maintains a Merkel tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 20:36:48 +03:00
|
|
|
ZIO_STAGE_ENCRYPT | \
|
2016-10-14 03:59:18 +03:00
|
|
|
ZIO_STAGE_DVA_THROTTLE | \
|
2010-05-29 00:45:14 +04:00
|
|
|
ZIO_STAGE_DVA_ALLOCATE)
|
|
|
|
|
Adding Direct IO Support
Adding O_DIRECT support to ZFS to bypass the ARC for writes/reads.
O_DIRECT support in ZFS will always ensure there is coherency between
buffered and O_DIRECT IO requests. This ensures that all IO requests,
whether buffered or direct, will see the same file contents at all
times. Just as in other FS's , O_DIRECT does not imply O_SYNC. While
data is written directly to VDEV disks, metadata will not be synced
until the associated TXG is synced.
For both O_DIRECT read and write request the offset and request sizes,
at a minimum, must be PAGE_SIZE aligned. In the event they are not,
then EINVAL is returned unless the direct property is set to always (see
below).
For O_DIRECT writes:
The request also must be block aligned (recordsize) or the write
request will take the normal (buffered) write path. In the event that
request is block aligned and a cached copy of the buffer in the ARC,
then it will be discarded from the ARC forcing all further reads to
retrieve the data from disk.
For O_DIRECT reads:
The only alignment restrictions are PAGE_SIZE alignment. In the event
that the requested data is in buffered (in the ARC) it will just be
copied from the ARC into the user buffer.
For both O_DIRECT writes and reads the O_DIRECT flag will be ignored in
the event that file contents are mmap'ed. In this case, all requests
that are at least PAGE_SIZE aligned will just fall back to the buffered
paths. If the request however is not PAGE_SIZE aligned, EINVAL will
be returned as always regardless if the file's contents are mmap'ed.
Since O_DIRECT writes go through the normal ZIO pipeline, the
following operations are supported just as with normal buffered writes:
Checksum
Compression
Encryption
Erasure Coding
There is one caveat for the data integrity of O_DIRECT writes that is
distinct for each of the OSs supported by ZFS.
FreeBSD - FreeBSD is able to place user pages under write protection, so
any data in the user buffers written directly down to the
VDEV disks is guaranteed not to change. There is no concern
with data integrity for O_DIRECT writes.
Linux - Linux is not able to place anonymous user pages under write
protection. Because of this, if the user decides to manipulate
the page contents while the write operation is occurring, data
integrity can not be guaranteed. However, there is a module
parameter `zfs_vdev_direct_write_verify` that controls whether
a checksum verify is run on the contents of an O_DIRECT write's
I/O buffer before it is committed to disk. In the event of a
checksum verification failure the write will return EIO. The
number of O_DIRECT write checksum verification errors can be
observed by running `zpool status -d`, which will list all
verification errors that have occurred on a top-level VDEV.
Along with `zpool status`, a ZED event will be issued as
`dio_verify` when a checksum verification error occurs.
ZVOLs and dedup are not currently supported with Direct I/O.
A new dataset property `direct` has been added with the following three
allowable values:
disabled - Accepts the O_DIRECT flag, but silently ignores it and treats
the request as a buffered IO request.
standard - Follows the alignment restrictions outlined above for
write/read IO requests when the O_DIRECT flag is used.
always - Treats every write/read IO request as though it passed
O_DIRECT, performing O_DIRECT I/O when the alignment restrictions
are met and otherwise redirecting through the ARC. This
property will never allow a request to fail.
There is also a module parameter zfs_dio_enabled that can be used to
force all reads and writes through the ARC. Setting this module
parameter to 0 behaves as if the direct dataset property were set to
disabled.
Reviewed-by: Brian Behlendorf <behlendorf@llnl.gov>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Atkinson <batkinson@lanl.gov>
Co-authored-by: Mark Maybee <mark.maybee@delphix.com>
Co-authored-by: Matt Macy <mmacy@FreeBSD.org>
Co-authored-by: Brian Behlendorf <behlendorf@llnl.gov>
Closes #10018
2024-09-14 23:47:59 +03:00
|
|
|
#define ZIO_DIRECT_WRITE_PIPELINE \
|
|
|
|
ZIO_WRITE_PIPELINE & \
|
|
|
|
(~ZIO_STAGE_ISSUE_ASYNC)
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_DDT_CHILD_WRITE_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_VDEV_IO_STAGES | \
|
2016-10-14 03:59:18 +03:00
|
|
|
ZIO_STAGE_DVA_THROTTLE | \
|
2010-05-29 00:45:14 +04:00
|
|
|
ZIO_STAGE_DVA_ALLOCATE)
|
|
|
|
|
|
|
|
#define ZIO_DDT_WRITE_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_WRITE_BP_INIT | \
|
2016-10-14 03:59:18 +03:00
|
|
|
ZIO_STAGE_ISSUE_ASYNC | \
|
|
|
|
ZIO_STAGE_WRITE_COMPRESS | \
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decrypt, and authenticate protected datasets.
Each object set maintains a Merkle tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 20:36:48 +03:00
|
|
|
ZIO_STAGE_ENCRYPT | \
|
2010-05-29 00:45:14 +04:00
|
|
|
ZIO_STAGE_CHECKSUM_GENERATE | \
|
|
|
|
ZIO_STAGE_DDT_WRITE)
|
|
|
|
|
|
|
|
#define ZIO_GANG_STAGES \
|
|
|
|
(ZIO_STAGE_GANG_ASSEMBLE | \
|
|
|
|
ZIO_STAGE_GANG_ISSUE)
|
|
|
|
|
|
|
|
#define ZIO_FREE_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_FREE_BP_INIT | \
|
2023-03-10 22:59:53 +03:00
|
|
|
ZIO_STAGE_BRT_FREE | \
|
2010-05-29 00:45:14 +04:00
|
|
|
ZIO_STAGE_DVA_FREE)
|
|
|
|
|
|
|
|
#define ZIO_DDT_FREE_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_FREE_BP_INIT | \
|
|
|
|
ZIO_STAGE_ISSUE_ASYNC | \
|
|
|
|
ZIO_STAGE_DDT_FREE)
|
|
|
|
|
|
|
|
#define ZIO_CLAIM_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_DVA_CLAIM)
|
|
|
|
|
2024-04-04 14:35:00 +03:00
|
|
|
#define ZIO_FLUSH_PIPELINE \
|
2010-05-29 00:45:14 +04:00
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
zinject: inject device errors into ioctls
Adds 'ioctl' as a valid IO type for device error injection, so we can
simulate a flush error (which OpenZFS currently ignores, but that's by
the by).
To support this, this change adds ZIO_STAGE_VDEV_IO_DONE to ZIO_IOCTL_PIPELINE,
since that's where device error injection happens. This needs a small
exclusion to avoid the vdev_queue, since flushes are not queued, and I'm
assuming that the various failure responses are still reasonable for
flush failures (probes, media change, etc). This seems reasonable to me,
as a flush failure is not unlike a write failure in this regard, however
this may be too aggressive or subtle to assume in just this change.
Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #16061
2024-04-08 21:59:04 +03:00
|
|
|
ZIO_VDEV_IO_STAGES)
|
2010-05-29 00:45:14 +04:00
|
|
|
|
2019-03-29 19:13:20 +03:00
|
|
|
#define ZIO_TRIM_PIPELINE \
|
|
|
|
(ZIO_INTERLOCK_STAGES | \
|
|
|
|
ZIO_STAGE_ISSUE_ASYNC | \
|
|
|
|
ZIO_VDEV_IO_STAGES)
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
#define ZIO_BLOCKING_STAGES \
|
|
|
|
(ZIO_STAGE_DVA_ALLOCATE | \
|
|
|
|
ZIO_STAGE_DVA_CLAIM | \
|
|
|
|
ZIO_STAGE_VDEV_IO_START)
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
extern void zio_inject_init(void);
|
|
|
|
extern void zio_inject_fini(void);
|
|
|
|
|
|
|
|
#ifdef __cplusplus
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#endif /* _ZIO_IMPL_H */
|