/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2016 by Delphix. All rights reserved.
 * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2019, Klara Inc.
 */

#ifndef _SYS_ARC_H
#define _SYS_ARC_H

#include <sys/zfs_context.h>

#ifdef __cplusplus
extern "C" {
#endif

#include <sys/zio.h>
#include <sys/dmu.h>
#include <sys/spa.h>
#include <sys/zfs_refcount.h>

/*
 * Used by arc_flush() to inform arc_evict_state() that it should evict
 * all available buffers from the arc state being passed in.
 */
#define ARC_EVICT_ALL UINT64_MAX

#define HDR_SET_LSIZE(hdr, x) do { \
	ASSERT(IS_P2ALIGNED((x), 1U << SPA_MINBLOCKSHIFT)); \
	(hdr)->b_lsize = ((x) >> SPA_MINBLOCKSHIFT); \
_NOTE(CONSTCOND) } while (0)

#define HDR_SET_PSIZE(hdr, x) do { \
	ASSERT(IS_P2ALIGNED((x), 1U << SPA_MINBLOCKSHIFT)); \
	(hdr)->b_psize = ((x) >> SPA_MINBLOCKSHIFT); \
_NOTE(CONSTCOND) } while (0)

#define HDR_GET_LSIZE(hdr)	((hdr)->b_lsize << SPA_MINBLOCKSHIFT)
#define HDR_GET_PSIZE(hdr)	((hdr)->b_psize << SPA_MINBLOCKSHIFT)

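/*
 * Illustrative note (not part of the API): header sizes are stored in
 * units of SPA_MINBLOCKSIZE (1 << SPA_MINBLOCKSHIFT == 512 bytes), so a
 * 128K logical size is stored as 131072 >> SPA_MINBLOCKSHIFT == 256 and
 * HDR_GET_LSIZE() recovers it as 256 << SPA_MINBLOCKSHIFT == 131072.
 * The ASSERTs above enforce the 512-byte alignment this encoding needs.
 */
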
typedef struct arc_buf_hdr arc_buf_hdr_t;
typedef struct arc_buf arc_buf_t;
typedef struct arc_prune arc_prune_t;

/*
 * Because the ARC can store encrypted data, errors (not due to bugs) may
 * arise while transforming data into its desired format - specifically,
 * when decrypting, the key may not be present, or the HMAC may not be
 * correct, which signifies deliberate tampering with the on-disk state
 * (assuming that the checksum was correct). If any error occurs, the
 * "buf" parameter will be NULL.
 */
typedef void arc_read_done_func_t(zio_t *zio, const zbookmark_phys_t *zb,
    const blkptr_t *bp, arc_buf_t *buf, void *priv);
typedef void arc_write_done_func_t(zio_t *zio, arc_buf_t *buf, void *priv);
typedef void arc_prune_func_t(int64_t bytes, void *priv);

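/*
 * A minimal sketch of a read-done callback (hypothetical consumer code,
 * not part of this API). Per the comment above, "buf" is NULL when
 * decryption or authentication failed:
 *
 *	static void
 *	my_read_done(zio_t *zio, const zbookmark_phys_t *zb,
 *	    const blkptr_t *bp, arc_buf_t *buf, void *priv)
 *	{
 *		if (buf == NULL)
 *			return;	(key missing, bad MAC, or I/O error)
 *		...consume buf->b_data, then release the buffer...
 *		arc_buf_destroy(buf, priv);
 *	}
 */
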
/* Shared module parameters */
extern int zfs_arc_average_blocksize;

/* generic arc_read_done_func_t's which you can use */
arc_read_done_func_t arc_bcopy_func;
arc_read_done_func_t arc_getbuf_func;

/* generic arc_prune_func_t wrapper for callbacks */
struct arc_prune {
	arc_prune_func_t	*p_pfunc;
	void			*p_private;
	uint64_t		p_adjust;
	list_node_t		p_node;
	zfs_refcount_t		p_refcnt;
};

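/*
 * Example of a prune callback (hypothetical; the registration helpers
 * live elsewhere in the ARC API). The callback is asked to release up
 * to the requested amount of cached metadata held by the consumer:
 *
 *	static void
 *	my_prune_func(int64_t bytes, void *priv)
 *	{
 *		...drop references to cached objects (e.g. dentries,
 *		   inodes) so the ARC can reclaim metadata buffers...
 *	}
 */
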
typedef enum arc_strategy {
	ARC_STRATEGY_META_ONLY		= 0, /* Evict only meta data buffers */
	ARC_STRATEGY_META_BALANCED	= 1, /* Evict data buffers if needed */
} arc_strategy_t;

typedef enum arc_flags
{
	/*
	 * Public flags that can be passed into the ARC by external consumers.
	 */
	ARC_FLAG_WAIT			= 1 << 0,	/* perform sync I/O */
	ARC_FLAG_NOWAIT			= 1 << 1,	/* perform async I/O */
	ARC_FLAG_PREFETCH		= 1 << 2,	/* I/O is a prefetch */
	ARC_FLAG_CACHED			= 1 << 3,	/* I/O was in cache */
	ARC_FLAG_L2CACHE		= 1 << 4,	/* cache in L2ARC */
	ARC_FLAG_PREDICTIVE_PREFETCH	= 1 << 5,	/* I/O from zfetch */
	ARC_FLAG_PRESCIENT_PREFETCH	= 1 << 6,	/* long min lifespan */

	/*
	 * Private ARC flags. These flags are private ARC only flags that
	 * will show up in b_flags in the arc_buf_hdr_t. These flags should
	 * only be set by ARC code.
	 */
	ARC_FLAG_IN_HASH_TABLE		= 1 << 7,	/* buffer is hashed */
	ARC_FLAG_IO_IN_PROGRESS		= 1 << 8,	/* I/O in progress */
	ARC_FLAG_IO_ERROR		= 1 << 9,	/* I/O failed for buf */
	ARC_FLAG_INDIRECT		= 1 << 10,	/* indirect block */
	/* Indicates that block was read with ASYNC priority. */
	ARC_FLAG_PRIO_ASYNC_READ	= 1 << 11,
	ARC_FLAG_L2_WRITING		= 1 << 12,	/* write in progress */
	ARC_FLAG_L2_EVICTED		= 1 << 13,	/* evicted during I/O */
	ARC_FLAG_L2_WRITE_HEAD		= 1 << 14,	/* head of write list */

	/*
	 * Encrypted or authenticated on disk (may be plaintext in memory).
	 * This header has b_crypt_hdr allocated. Does not include indirect
	 * blocks with checksums of MACs which will also have their X
	 * (encrypted) bit set in the bp.
	 */
	ARC_FLAG_PROTECTED		= 1 << 15,
	/* data has not been authenticated yet */
	ARC_FLAG_NOAUTH			= 1 << 16,
	/* indicates that the buffer contains metadata (otherwise, data) */
	ARC_FLAG_BUFC_METADATA		= 1 << 17,

	/* Flags specifying whether optional hdr struct fields are defined */
	ARC_FLAG_HAS_L1HDR		= 1 << 18,
	ARC_FLAG_HAS_L2HDR		= 1 << 19,

	/*
	 * Indicates the arc_buf_hdr_t's b_pdata matches the on-disk data.
	 * This allows the l2arc to use the blkptr's checksum to verify
	 * the data without having to store the checksum in the hdr.
	 */
	ARC_FLAG_COMPRESSED_ARC		= 1 << 20,
	ARC_FLAG_SHARED_DATA		= 1 << 21,

	/*
	 * Fail this arc_read() (with ENOENT) if the data is not already
	 * present in cache.
	 */
	ARC_FLAG_CACHED_ONLY		= 1 << 22,

	/*
	 * Don't instantiate an arc_buf_t for arc_read_done.
	 */
	ARC_FLAG_NO_BUF			= 1 << 23,

	/*
	 * The arc buffer's compression mode is stored in the top 7 bits of the
	 * flags field, so these dummy flags are included so that MDB can
	 * interpret the enum properly.
	 */
	ARC_FLAG_COMPRESS_0		= 1 << 24,
	ARC_FLAG_COMPRESS_1		= 1 << 25,
	ARC_FLAG_COMPRESS_2		= 1 << 26,
	ARC_FLAG_COMPRESS_3		= 1 << 27,
	ARC_FLAG_COMPRESS_4		= 1 << 28,
	ARC_FLAG_COMPRESS_5		= 1 << 29,
	ARC_FLAG_COMPRESS_6		= 1 << 30
} arc_flags_t;

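/*
 * Illustrative note: ARC_FLAG_COMPRESS_0 through ARC_FLAG_COMPRESS_6
 * occupy bits 24..30, so a header's enum zio_compress value v is packed
 * into b_flags as (v << 24) and unpacked as ((b_flags >> 24) & 0x7f).
 * The actual accessor macros live in the ARC implementation, not here.
 */
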
typedef enum arc_buf_flags {
	ARC_BUF_FLAG_SHARED		= 1 << 0,
	ARC_BUF_FLAG_COMPRESSED		= 1 << 1,

	/*
	 * Indicates whether this arc_buf_t is encrypted, regardless of
	 * its state on-disk.
	 */
	ARC_BUF_FLAG_ENCRYPTED		= 1 << 2
} arc_buf_flags_t;

struct arc_buf {
	arc_buf_hdr_t		*b_hdr;
	arc_buf_t		*b_next;
	kmutex_t		b_evict_lock;
	void			*b_data;
	arc_buf_flags_t		b_flags;
};

typedef enum arc_buf_contents {
	ARC_BUFC_INVALID,			/* invalid type */
	ARC_BUFC_DATA,				/* buffer contains data */
	ARC_BUFC_METADATA,			/* buffer contains metadata */
	ARC_BUFC_NUMTYPES
} arc_buf_contents_t;

/*
 * The following breakdowns of arc_size exist for kstat only.
 */
typedef enum arc_space_type {
	ARC_SPACE_DATA,
	ARC_SPACE_META,
	ARC_SPACE_HDRS,
	ARC_SPACE_L2HDRS,
	ARC_SPACE_DBUF,
	ARC_SPACE_DNODE,
	ARC_SPACE_BONUS,
	ARC_SPACE_ABD_CHUNK_WASTE,
	ARC_SPACE_NUMTYPES
} arc_space_type_t;

typedef enum arc_state_type {
	ARC_STATE_ANON,
	ARC_STATE_MRU,
	ARC_STATE_MRU_GHOST,
	ARC_STATE_MFU,
	ARC_STATE_MFU_GHOST,
	ARC_STATE_L2C_ONLY,
	ARC_STATE_NUMTYPES
} arc_state_type_t;

typedef struct arc_buf_info {
	arc_state_type_t	abi_state_type;
	arc_buf_contents_t	abi_state_contents;
	uint32_t		abi_flags;
	uint32_t		abi_bufcnt;
	uint64_t		abi_size;
	uint64_t		abi_spa;
	uint64_t		abi_access;
	uint32_t		abi_mru_hits;
	uint32_t		abi_mru_ghost_hits;
	uint32_t		abi_mfu_hits;
	uint32_t		abi_mfu_ghost_hits;
	uint32_t		abi_l2arc_hits;
	uint32_t		abi_holds;
	uint64_t		abi_l2arc_dattr;
	uint64_t		abi_l2arc_asize;
	enum zio_compress	abi_l2arc_compress;
} arc_buf_info_t;

void arc_space_consume(uint64_t space, arc_space_type_t type);
void arc_space_return(uint64_t space, arc_space_type_t type);
boolean_t arc_is_metadata(arc_buf_t *buf);
boolean_t arc_is_encrypted(arc_buf_t *buf);
boolean_t arc_is_unauthenticated(arc_buf_t *buf);
enum zio_compress arc_get_compression(arc_buf_t *buf);
void arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt,
    uint8_t *iv, uint8_t *mac);
int arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
    boolean_t in_place);
void arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder,
    dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv,
    const uint8_t *mac);

arc_buf_t *arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type,
    int32_t size);
arc_buf_t *arc_alloc_compressed_buf(spa_t *spa, void *tag,
    uint64_t psize, uint64_t lsize, enum zio_compress compression_type,
    uint8_t complevel);
arc_buf_t *arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj,
    boolean_t byteorder, const uint8_t *salt, const uint8_t *iv,
    const uint8_t *mac, dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type, uint8_t complevel);
uint8_t arc_get_complevel(arc_buf_t *buf);

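/*
 * Typical allocation lifecycle (a minimal sketch; "tag" is any pointer
 * identifying the owner, e.g. the FTAG convention):
 *
 *	arc_buf_t *buf = arc_alloc_buf(spa, tag, ARC_BUFC_DATA, size);
 *	...fill buf->b_data...
 *	arc_buf_destroy(buf, tag);
 */
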
arc_buf_t *arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size);
arc_buf_t *arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type, uint8_t complevel);
arc_buf_t *arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder,
    const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
    dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type, uint8_t complevel);

void arc_return_buf(arc_buf_t *buf, void *tag);
void arc_loan_inuse_buf(arc_buf_t *buf, void *tag);
void arc_buf_destroy(arc_buf_t *buf, void *tag);
void arc_buf_info(arc_buf_t *buf, arc_buf_info_t *abi, int state_index);
uint64_t arc_buf_size(arc_buf_t *buf);
uint64_t arc_buf_lsize(arc_buf_t *buf);
void arc_buf_access(arc_buf_t *buf);
void arc_release(arc_buf_t *buf, void *tag);
int arc_released(arc_buf_t *buf);
void arc_buf_sigsegv(int sig, siginfo_t *si, void *unused);
void arc_buf_freeze(arc_buf_t *buf);
void arc_buf_thaw(arc_buf_t *buf);
#ifdef ZFS_DEBUG
int arc_referenced(arc_buf_t *buf);
#endif

|
|
|
|
|
2013-07-03 00:26:24 +04:00
|
|
|
int arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp,
|
2020-06-06 22:54:04 +03:00
|
|
|
arc_read_done_func_t *done, void *priv, zio_priority_t priority,
|
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
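A conceptual sketch of that indirection, assuming illustrative sizes
and field names: the user's key seals only the small master key, so
"zfs change-key" re-wraps a few dozen bytes instead of re-encrypting
the data.
```
#include <stdint.h>

/* Sketch of a per-dataset crypto key record; fields are assumed. */
typedef struct dsl_crypto_key_sketch {
	uint8_t wrapped_master_key[32];	/* sealed by the user's key */
	uint8_t iv[12];			/* IV used when wrapping */
	uint8_t mac[16];		/* integrity tag on the wrap */
} dsl_crypto_key_sketch_t;
```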
The second piece of this patch provides the ability to
encrypt, decrypt, and authenticate protected datasets.
Each object set maintains a Merkle tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
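A hedged sketch of the tree idea: each indirect level authenticates
the MACs of its children, so verifying the root transitively verifies
every block below it (mac_fn stands in for the real MAC primitive;
nothing here quotes the actual code).
```
#include <stddef.h>
#include <stdint.h>

#define	MAC_LEN	16

typedef void (*mac_fn_t)(const uint8_t *msg, size_t len,
    uint8_t out[MAC_LEN]);

/* Authenticate an array of child MACs with one parent MAC. */
static void
mac_of_children(mac_fn_t mac_fn,
    const uint8_t (*child_macs)[MAC_LEN], size_t nchildren,
    uint8_t parent_mac[MAC_LEN])
{
	mac_fn((const uint8_t *)child_macs, nchildren * MAC_LEN,
	    parent_mac);
}
```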
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
2017-08-14 20:36:48 +03:00
|
|
|
int flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb);
|
2010-05-29 00:45:14 +04:00
|
|
|
zio_t *arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
|
2016-06-02 07:04:53 +03:00
|
|
|
blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc, const zio_prop_t *zp,
|
2017-08-14 20:36:48 +03:00
|
|
|
arc_write_done_func_t *ready, arc_write_done_func_t *child_ready,
|
|
|
|
arc_write_done_func_t *physdone, arc_write_done_func_t *done,
|
2020-06-06 22:54:04 +03:00
|
|
|
void *priv, zio_priority_t priority, int zio_flags,
|
2016-05-15 18:02:28 +03:00
|
|
|
const zbookmark_phys_t *zb);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2020-06-06 22:54:04 +03:00
|
|
|
arc_prune_t *arc_add_prune_callback(arc_prune_func_t *func, void *priv);
|
2011-12-23 00:20:43 +04:00
|
|
|
void arc_remove_prune_callback(arc_prune_t *p);
|
Illumos #3805 arc shouldn't cache freed blocks
3805 arc shouldn't cache freed blocks
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Richard Elling <richard.elling@dey-sys.com>
Reviewed by: Will Andrews <will@firepipe.net>
Approved by: Dan McDonald <danmcd@nexenta.com>
References:
illumos/illumos-gate@6e6d5868f52089b9026785bd90257a3d3f6e5ee2
https://www.illumos.org/issues/3805
ZFS should proactively evict freed blocks from the cache.
On dcenter, we saw that we were caching ~256GB of metadata, while the
pool only had <4GB of metadata on disk. We were wasting about half the
system's RAM (252GB) on blocks that have been freed.
Even though these freed blocks will never be used again, and thus will
eventually be evicted, this causes us to use memory inefficiently for 2
reasons:
1. A block that is freed has no chance of being accessed again, but will
be kept in memory preferentially to a block that was accessed before it
(and is thus older) but has not been freed and thus has at least some
chance of being accessed again.
2. We partition the ARC into several buckets:
user data that has been accessed only once (MRU)
metadata that has been accessed only once (MRU)
user data that has been accessed more than once (MFU)
metadata that has been accessed more than once (MFU)
The user data vs metadata split is somewhat arbitrary, and the primary
control on how much memory is used to cache data vs metadata is to
simply try to keep the proportion the same as it has been in the past
(each bucket "evicts against" itself). The secondary control is to
evict data before evicting metadata.
Because of this bucketing, we may end up with one bucket mostly
containing freed blocks that are very old, while another bucket has more
recently accessed, still-allocated blocks. Data in the useful bucket
(with still-allocated blocks) may be evicted in preference to data in
the useless bucket (with old, freed blocks).
On dcenter, we saw that the MFU metadata bucket was 230MB, while the MFU
data bucket was 27GB and the MRU metadata bucket was 256GB. However,
the vast majority of data in the MRU metadata bucket (256GB) was freed
blocks, and thus useless. Meanwhile, the MFU metadata bucket (230MB)
was constantly evicting useful blocks that would soon be needed.
The problem of cache segmentation is a larger problem that needs more
investigation. However, if we stop caching freed blocks, it should
reduce the impact of this more fundamental issue.
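A sketch of how a caller in the free path might use the interface this
change adds (arc_freed(), declared below); the call site shown is
illustrative, and spa_t/blkptr_t come from the ZFS headers:
```
/* Proactively drop any cached copy of a block that was freed. */
static void
on_block_free_sketch(spa_t *spa, const blkptr_t *bp)
{
	arc_freed(spa, bp);
}
```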
Ported-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1503
2013-06-07 02:46:55 +04:00
|
|
|
void arc_freed(spa_t *spa, const blkptr_t *bp);
|
2011-12-23 00:20:43 +04:00
|
|
|
|
2015-01-13 06:52:19 +03:00
|
|
|
void arc_flush(spa_t *spa, boolean_t retry);
|
2008-11-20 23:01:55 +03:00
|
|
|
void arc_tempreserve_clear(uint64_t reserve);
|
2017-09-27 04:45:19 +03:00
|
|
|
int arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
Cap metaslab memory usage
On systems with large amounts of storage and high fragmentation, a huge
amount of space can be used by storing metaslab range trees. Since
metaslabs are only unloaded during a txg sync, and only if they have
been inactive for 8 txgs, it is possible to get into a state where all
of the system's memory is consumed by range trees and metaslabs, and
txgs cannot sync. While ZFS knows how to evict ARC data when needed,
it has no such mechanism for range tree data. This can result in boot
hangs for some system configurations.
First, we add the ability to unload metaslabs outside of syncing
context. Second, we store a multilist of all loaded metaslabs, sorted
by their selection txg, so we can quickly identify the oldest
metaslabs. We use a multilist to reduce lock contention during heavy
write workloads. Finally, we add logic that will unload a metaslab
when we're loading a new metaslab, if we're using more than a certain
fraction of the available memory on range trees.
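A hedged sketch of that unload policy, assuming illustrative names and
an arbitrary 25% threshold (the real code keeps a multilist sorted by
selection txg):
```
#include <stdint.h>

typedef struct ms_sketch {
	uint64_t selected_txg;	/* when this metaslab was selected */
	struct ms_sketch *next;	/* list ordered by selected_txg */
} ms_sketch_t;

/* Unload oldest metaslabs until range-tree memory is under cap. */
static void
maybe_unload_oldest(ms_sketch_t **head, uint64_t *rt_mem,
    uint64_t avail_mem, uint64_t (*unload)(ms_sketch_t *))
{
	while (*head != NULL && *rt_mem > avail_mem / 4) {
		ms_sketch_t *oldest = *head;

		*head = oldest->next;
		*rt_mem -= unload(oldest);	/* bytes freed */
	}
}
```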
Reviewed-by: Matt Ahrens <mahrens@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Sebastien Roy <sebastien.roy@delphix.com>
Reviewed-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #9128
2019-08-16 18:08:21 +03:00
|
|
|
uint64_t arc_all_memory(void);
|
2020-03-27 19:14:46 +03:00
|
|
|
uint64_t arc_default_max(uint64_t min, uint64_t allmem);
|
2017-09-30 01:49:19 +03:00
|
|
|
uint64_t arc_target_bytes(void);
|
2020-12-11 01:09:23 +03:00
|
|
|
void arc_set_limits(uint64_t);
|
2008-11-20 23:01:55 +03:00
|
|
|
void arc_init(void);
|
|
|
|
void arc_fini(void);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Level 2 ARC
|
|
|
|
*/
|
|
|
|
|
2009-07-03 02:44:48 +04:00
|
|
|
void l2arc_add_vdev(spa_t *spa, vdev_t *vd);
|
2008-11-20 23:01:55 +03:00
|
|
|
void l2arc_remove_vdev(vdev_t *vd);
|
2008-12-03 23:09:06 +03:00
|
|
|
boolean_t l2arc_vdev_present(vdev_t *vd);
|
2020-04-10 20:33:35 +03:00
|
|
|
void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
|
|
|
|
boolean_t l2arc_range_check_overlap(uint64_t bottom, uint64_t top,
|
|
|
|
uint64_t check);
|
2008-11-20 23:01:55 +03:00
|
|
|
void l2arc_init(void);
|
|
|
|
void l2arc_fini(void);
|
2008-12-03 23:09:06 +03:00
|
|
|
void l2arc_start(void);
|
|
|
|
void l2arc_stop(void);
|
2020-04-10 20:33:35 +03:00
|
|
|
void l2arc_spa_rebuild_start(spa_t *spa);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2013-05-17 01:18:06 +04:00
|
|
|
#ifndef _KERNEL
|
|
|
|
extern boolean_t arc_watch;
|
|
|
|
#endif
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
#ifdef __cplusplus
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#endif /* _SYS_ARC_H */
|