/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2019 by Delphix. All rights reserved.
 * Copyright (c) 2016 Gvozden Nešković. All rights reserved.
 */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/vdev_impl.h>
#include <sys/zio.h>
#include <sys/zio_checksum.h>
#include <sys/abd.h>
#include <sys/fs/zfs.h>
#include <sys/fm/fs/zfs.h>
#include <sys/vdev_raidz.h>
#include <sys/vdev_raidz_impl.h>

#ifdef ZFS_DEBUG
#include <sys/vdev.h>	/* For vdev_xlate() in vdev_raidz_io_verify() */
#endif

/*
 * Virtual device vector for RAID-Z.
 *
 * This vdev supports single, double, and triple parity. For single parity,
 * we use a simple XOR of all the data columns. For double or triple parity,
 * we use a special case of Reed-Solomon coding. This extends the
 * technique described in "The mathematics of RAID-6" by H. Peter Anvin by
 * drawing on the system described in "A Tutorial on Reed-Solomon Coding for
 * Fault-Tolerance in RAID-like Systems" by James S. Plank on which the
 * former is also based. The latter is designed to provide higher performance
 * for writes.
 *
 * Note that the Plank paper claimed to support arbitrary N+M, but was then
 * amended six years later identifying a critical flaw that invalidates its
 * claims. Nevertheless, the technique can be adapted to work for up to
 * triple parity. For additional parity, the amendment "Note: Correction to
 * the 1997 Tutorial on Reed-Solomon Coding" by James S. Plank and Ying Ding
 * is viable, but the additional complexity means that write performance will
 * suffer.
 *
 * All of the methods above operate on a Galois field, defined over the
 * integers mod 2^N. In our case we choose N=8 for GF(2^8) so that all
 * elements can be expressed with a single byte. Briefly, the operations on
 * the field are defined as follows:
 *
 *   o addition (+) is represented by a bitwise XOR
 *   o subtraction (-) is therefore identical to addition: A + B = A - B
 *   o multiplication of A by 2 is defined by the following bitwise expression:
 *
 *	(A * 2)_7 = A_6
 *	(A * 2)_6 = A_5
 *	(A * 2)_5 = A_4
 *	(A * 2)_4 = A_3 + A_7
 *	(A * 2)_3 = A_2 + A_7
 *	(A * 2)_2 = A_1 + A_7
 *	(A * 2)_1 = A_0
 *	(A * 2)_0 = A_7
 *
 * In C, multiplying by 2 is therefore ((a << 1) ^ ((a & 0x80) ? 0x1d : 0)).
 * As an aside, this multiplication is derived from the error correcting
 * primitive polynomial x^8 + x^4 + x^3 + x^2 + 1.
 *
 * Observe that any number in the field (except for 0) can be expressed as a
 * power of 2 -- a generator for the field. We store a table of the powers of
 * 2 and logs base 2 for quick look ups, and exploit the fact that A * B can
 * be rewritten as 2^(log_2(A) + log_2(B)) (where '+' is normal addition rather
 * than field addition). The inverse of a field element A (A^-1) is therefore
 * A ^ (255 - 1) = A^254.
 *
 * The up-to-three parity columns, P, Q, R over several data columns,
 * D_0, ... D_n-1, can be expressed by field operations:
 *
 *	P = D_0 + D_1 + ... + D_n-2 + D_n-1
 *	Q = 2^n-1 * D_0 + 2^n-2 * D_1 + ... + 2^1 * D_n-2 + 2^0 * D_n-1
 *	  = ((...((D_0) * 2 + D_1) * 2 + ...) * 2 + D_n-2) * 2 + D_n-1
 *	R = 4^n-1 * D_0 + 4^n-2 * D_1 + ... + 4^1 * D_n-2 + 4^0 * D_n-1
 *	  = ((...((D_0) * 4 + D_1) * 4 + ...) * 4 + D_n-2) * 4 + D_n-1
 *
 * We chose 1, 2, and 4 as our generators because 1 corresponds to the trivial
 * XOR operation, and 2 and 4 can be computed quickly and generate linearly-
 * independent coefficients. (There are no additional coefficients that have
 * this property which is why the uncorrected Plank method breaks down.)
 *
 * See the reconstruction code below for how P, Q and R can be used
 * individually or in concert to recover missing data columns.
 */
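
/*
 * Illustrative worked example (not used by the code): multiplying the element
 * 0x80 by 2 shifts its high bit out, so the conditional XOR with 0x1d applies
 * and the product is 0x1d. That is just the reduction x^8 = x^4 + x^3 + x^2
 * + 1 modulo the primitive polynomial above, since 0x1d encodes
 * x^4 + x^3 + x^2 + 1. Equivalently, with the log/exp tables, A * B can be
 * computed as 2^((log_2(A) + log_2(B)) mod 255); vdev_raidz_exp2(), used by
 * the reconstruction code below, applies the same identity to multiply by a
 * known power of 2.
 */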

#define	VDEV_RAIDZ_P		0
#define	VDEV_RAIDZ_Q		1
#define	VDEV_RAIDZ_R		2

#define	VDEV_RAIDZ_MUL_2(x)	(((x) << 1) ^ (((x) & 0x80) ? 0x1d : 0))
#define	VDEV_RAIDZ_MUL_4(x)	(VDEV_RAIDZ_MUL_2(VDEV_RAIDZ_MUL_2(x)))

/*
 * We provide a mechanism to perform the field multiplication operation on a
 * 64-bit value all at once rather than a byte at a time. This works by
 * creating a mask from the top bit in each byte and using that to
 * conditionally apply the XOR of 0x1d.
 */
#define	VDEV_RAIDZ_64MUL_2(x, mask) \
{ \
	(mask) = (x) & 0x8080808080808080ULL; \
	(mask) = ((mask) << 1) - ((mask) >> 7); \
	(x) = (((x) << 1) & 0xfefefefefefefefeULL) ^ \
	    ((mask) & 0x1d1d1d1d1d1d1d1dULL); \
}

#define	VDEV_RAIDZ_64MUL_4(x, mask) \
{ \
	VDEV_RAIDZ_64MUL_2((x), mask); \
	VDEV_RAIDZ_64MUL_2((x), mask); \
}
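
/*
 * Worked example of the 64-bit multiply (illustrative only): take
 * x = 0x0000000000008001, i.e. the bytes 0x80 and 0x01. The mask picks up the
 * 0x80 byte's top bit; ((mask) << 1) - ((mask) >> 7) then expands each such
 * byte to 0xff and leaves the others 0x00. Shifting x left by one and masking
 * with 0xfe..fe doubles every byte without letting bits cross byte
 * boundaries, and the final XOR applies 0x1d only to the bytes whose top bit
 * was set. The result is 0x0000000000001d02: 0x01 * 2 = 0x02 and
 * 0x80 * 2 = 0x1d, eight field multiplications for the price of a few 64-bit
 * operations.
 */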

void
vdev_raidz_map_free(raidz_map_t *rm)
{
	int c;

	for (c = 0; c < rm->rm_firstdatacol; c++) {
		abd_free(rm->rm_col[c].rc_abd);

		if (rm->rm_col[c].rc_gdata != NULL)
			abd_free(rm->rm_col[c].rc_gdata);
	}

	for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++)
		abd_put(rm->rm_col[c].rc_abd);

	if (rm->rm_abd_copy != NULL)
		abd_free(rm->rm_abd_copy);

	kmem_free(rm, offsetof(raidz_map_t, rm_col[rm->rm_scols]));
}

static void
vdev_raidz_map_free_vsd(zio_t *zio)
{
	raidz_map_t *rm = zio->io_vsd;

	ASSERT0(rm->rm_freed);
	rm->rm_freed = 1;

	if (rm->rm_reports == 0)
		vdev_raidz_map_free(rm);
}

/*ARGSUSED*/
static void
vdev_raidz_cksum_free(void *arg, size_t ignored)
{
	raidz_map_t *rm = arg;

	ASSERT3U(rm->rm_reports, >, 0);

	if (--rm->rm_reports == 0 && rm->rm_freed != 0)
		vdev_raidz_map_free(rm);
}

static void
vdev_raidz_cksum_finish(zio_cksum_report_t *zcr, const abd_t *good_data)
{
	raidz_map_t *rm = zcr->zcr_cbdata;
	const size_t c = zcr->zcr_cbinfo;
	size_t x, offset;

	const abd_t *good = NULL;
	const abd_t *bad = rm->rm_col[c].rc_abd;

	if (good_data == NULL) {
		zfs_ereport_finish_checksum(zcr, NULL, NULL, B_FALSE);
		return;
	}

	if (c < rm->rm_firstdatacol) {
		/*
		 * The first time through, calculate the parity blocks for
		 * the good data (this relies on the fact that the good
		 * data never changes for a given logical ZIO)
		 */
		if (rm->rm_col[0].rc_gdata == NULL) {
			abd_t *bad_parity[VDEV_RAIDZ_MAXPARITY];

			/*
			 * Set up the rm_col[]s to generate the parity for
			 * good_data, first saving the parity bufs and
			 * replacing them with buffers to hold the result.
			 */
			for (x = 0; x < rm->rm_firstdatacol; x++) {
				bad_parity[x] = rm->rm_col[x].rc_abd;
				rm->rm_col[x].rc_abd =
				    rm->rm_col[x].rc_gdata =
				    abd_alloc_sametype(rm->rm_col[x].rc_abd,
				    rm->rm_col[x].rc_size);
			}

			/* fill in the data columns from good_data */
			offset = 0;
			for (; x < rm->rm_cols; x++) {
				abd_put(rm->rm_col[x].rc_abd);

				rm->rm_col[x].rc_abd =
				    abd_get_offset_size((abd_t *)good_data,
				    offset, rm->rm_col[x].rc_size);
				offset += rm->rm_col[x].rc_size;
			}

			/*
			 * Construct the parity from the good data.
			 */
			vdev_raidz_generate_parity(rm);

			/* restore everything back to its original state */
			for (x = 0; x < rm->rm_firstdatacol; x++)
				rm->rm_col[x].rc_abd = bad_parity[x];

			offset = 0;
			for (x = rm->rm_firstdatacol; x < rm->rm_cols; x++) {
				abd_put(rm->rm_col[x].rc_abd);
				rm->rm_col[x].rc_abd = abd_get_offset_size(
				    rm->rm_abd_copy, offset,
				    rm->rm_col[x].rc_size);
				offset += rm->rm_col[x].rc_size;
			}
		}

		ASSERT3P(rm->rm_col[c].rc_gdata, !=, NULL);
		good = abd_get_offset_size(rm->rm_col[c].rc_gdata, 0,
		    rm->rm_col[c].rc_size);
	} else {
		/* adjust good_data to point at the start of our column */
		offset = 0;
		for (x = rm->rm_firstdatacol; x < c; x++)
			offset += rm->rm_col[x].rc_size;

		good = abd_get_offset_size((abd_t *)good_data, offset,
		    rm->rm_col[c].rc_size);
	}

	/* we drop the ereport if it ends up that the data was good */
	zfs_ereport_finish_checksum(zcr, good, bad, B_TRUE);
	abd_put((abd_t *)good);
}

/*
 * Invoked indirectly by zfs_ereport_start_checksum(), called
 * below when our read operation fails completely. The main point
 * is to keep a copy of everything we read from disk, so that at
 * vdev_raidz_cksum_finish() time we can compare it with the good data.
 */
static void
vdev_raidz_cksum_report(zio_t *zio, zio_cksum_report_t *zcr, void *arg)
{
	size_t c = (size_t)(uintptr_t)arg;
	size_t offset;

	raidz_map_t *rm = zio->io_vsd;
	size_t size;

	/* set up the report and bump the refcount */
	zcr->zcr_cbdata = rm;
	zcr->zcr_cbinfo = c;
	zcr->zcr_finish = vdev_raidz_cksum_finish;
	zcr->zcr_free = vdev_raidz_cksum_free;

	rm->rm_reports++;
	ASSERT3U(rm->rm_reports, >, 0);

	if (rm->rm_abd_copy != NULL)
		return;

	/*
	 * It's the first time we're called for this raidz_map_t, so we need
	 * to copy the data aside; there's no guarantee that our zio's buffer
	 * won't be re-used for something else.
	 *
	 * Our parity data is already in separate buffers, so there's no need
	 * to copy them.
	 */

	size = 0;
	for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++)
		size += rm->rm_col[c].rc_size;

	rm->rm_abd_copy = abd_alloc_for_io(size, B_FALSE);

	for (offset = 0, c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
		raidz_col_t *col = &rm->rm_col[c];
		abd_t *tmp = abd_get_offset_size(rm->rm_abd_copy, offset,
		    col->rc_size);

		abd_copy(tmp, col->rc_abd, col->rc_size);

		abd_put(col->rc_abd);
		col->rc_abd = tmp;

		offset += col->rc_size;
	}
	ASSERT3U(offset, ==, size);
}

static const zio_vsd_ops_t vdev_raidz_vsd_ops = {
	.vsd_free = vdev_raidz_map_free_vsd,
	.vsd_cksum_report = vdev_raidz_cksum_report
};

/*
 * Divides the IO evenly across all child vdevs; usually, dcols is
 * the number of children in the target vdev.
 *
 * Avoid inlining the function to keep vdev_raidz_io_start(), which
 * is this function's only caller, as small as possible on the stack.
 */
noinline raidz_map_t *
vdev_raidz_map_alloc(zio_t *zio, uint64_t ashift, uint64_t dcols,
    uint64_t nparity)
{
	raidz_map_t *rm;
	/* The starting RAIDZ (parent) vdev sector of the block. */
	uint64_t b = zio->io_offset >> ashift;
	/* The zio's size in units of the vdev's minimum sector size. */
	uint64_t s = zio->io_size >> ashift;
	/* The first column for this stripe. */
	uint64_t f = b % dcols;
	/* The starting byte offset on each child vdev. */
	uint64_t o = (b / dcols) << ashift;
	uint64_t q, r, c, bc, col, acols, scols, coff, devidx, asize, tot;
	uint64_t off = 0;

	/*
	 * "Quotient": The number of data sectors for this stripe on all but
	 * the "big column" child vdevs that also contain "remainder" data.
	 */
	q = s / (dcols - nparity);

	/*
	 * "Remainder": The number of partial stripe data sectors in this I/O.
	 * This will add a sector to some, but not all, child vdevs.
	 */
	r = s - q * (dcols - nparity);

	/* The number of "big columns" - those which contain remainder data. */
	bc = (r == 0 ? 0 : r + nparity);

	/*
	 * The total number of data and parity sectors associated with
	 * this I/O.
	 */
	tot = s + nparity * (q + (r == 0 ? 0 : 1));
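
	/*
	 * Worked example (illustrative only, assuming 512-byte sectors):
	 * a 16K write to a raidz2 vdev with dcols = 6 children and ashift = 9
	 * gives s = 32 data sectors. Then q = 32 / (6 - 2) = 8,
	 * r = 32 - 8 * 4 = 0, so there are no big columns (bc = 0) and
	 * tot = 32 + 2 * 8 = 48 sectors of data plus parity are written,
	 * eight per child.
	 */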

	/* acols: The columns that will be accessed. */
	/* scols: The columns that will be accessed or skipped. */
	if (q == 0) {
		/* Our I/O request doesn't span all child vdevs. */
		acols = bc;
		scols = MIN(dcols, roundup(bc, nparity + 1));
	} else {
		acols = dcols;
		scols = dcols;
	}

	ASSERT3U(acols, <=, scols);

	rm = kmem_alloc(offsetof(raidz_map_t, rm_col[scols]), KM_SLEEP);

	rm->rm_cols = acols;
	rm->rm_scols = scols;
	rm->rm_bigcols = bc;
	rm->rm_skipstart = bc;
	rm->rm_missingdata = 0;
	rm->rm_missingparity = 0;
	rm->rm_firstdatacol = nparity;
	rm->rm_abd_copy = NULL;
	rm->rm_reports = 0;
	rm->rm_freed = 0;
	rm->rm_ecksuminjected = 0;

	asize = 0;

	for (c = 0; c < scols; c++) {
		col = f + c;
		coff = o;
		if (col >= dcols) {
			col -= dcols;
			coff += 1ULL << ashift;
		}
		rm->rm_col[c].rc_devidx = col;
		rm->rm_col[c].rc_offset = coff;
		rm->rm_col[c].rc_abd = NULL;
		rm->rm_col[c].rc_gdata = NULL;
		rm->rm_col[c].rc_error = 0;
		rm->rm_col[c].rc_tried = 0;
		rm->rm_col[c].rc_skipped = 0;

		if (c >= acols)
			rm->rm_col[c].rc_size = 0;
		else if (c < bc)
			rm->rm_col[c].rc_size = (q + 1) << ashift;
		else
			rm->rm_col[c].rc_size = q << ashift;

		asize += rm->rm_col[c].rc_size;
	}

	ASSERT3U(asize, ==, tot << ashift);
	rm->rm_asize = roundup(asize, (nparity + 1) << ashift);
	rm->rm_nskip = roundup(tot, nparity + 1) - tot;
	ASSERT3U(rm->rm_asize - asize, ==, rm->rm_nskip << ashift);
	ASSERT3U(rm->rm_nskip, <=, nparity);

	for (c = 0; c < rm->rm_firstdatacol; c++)
		rm->rm_col[c].rc_abd =
		    abd_alloc_linear(rm->rm_col[c].rc_size, B_FALSE);

	rm->rm_col[c].rc_abd = abd_get_offset_size(zio->io_abd, 0,
	    rm->rm_col[c].rc_size);
	off = rm->rm_col[c].rc_size;

	for (c = c + 1; c < acols; c++) {
		rm->rm_col[c].rc_abd = abd_get_offset_size(zio->io_abd, off,
		    rm->rm_col[c].rc_size);
		off += rm->rm_col[c].rc_size;
	}

	/*
	 * If all data stored spans all columns, there's a danger that parity
	 * will always be on the same device and, since parity isn't read
	 * during normal operation, that device's I/O bandwidth won't be
	 * used effectively. We therefore switch the parity every 1MB.
	 *
	 * ... at least that was, ostensibly, the theory. As a practical
	 * matter unless we juggle the parity between all devices evenly, we
	 * won't see any benefit. Further, occasional writes that aren't a
	 * multiple of the LCM of the number of children and the minimum
	 * stripe width are sufficient to avoid pessimal behavior.
	 * Unfortunately, this decision created an implicit on-disk format
	 * requirement that we need to support for all eternity, but only
	 * for single-parity RAID-Z.
	 *
	 * If we intend to skip a sector in the zeroth column for padding
	 * we must make sure to note this swap. We will never intend to
	 * skip the first column since at least one data and one parity
	 * column must appear in each row.
	 */
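	/*
	 * For example (illustrative only): with single parity, a block whose
	 * io_offset lands in an odd-numbered 1MB region -- say at 5MB, where
	 * bit 20 of the offset is set -- has its parity and first data
	 * columns swapped below, while a block at 2MB does not, so which
	 * child holds the parity alternates from one megabyte region to the
	 * next.
	 */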
	ASSERT(rm->rm_cols >= 2);
	ASSERT(rm->rm_col[0].rc_size == rm->rm_col[1].rc_size);

	if (rm->rm_firstdatacol == 1 && (zio->io_offset & (1ULL << 20))) {
		devidx = rm->rm_col[0].rc_devidx;
		o = rm->rm_col[0].rc_offset;
		rm->rm_col[0].rc_devidx = rm->rm_col[1].rc_devidx;
		rm->rm_col[0].rc_offset = rm->rm_col[1].rc_offset;
		rm->rm_col[1].rc_devidx = devidx;
		rm->rm_col[1].rc_offset = o;

		if (rm->rm_skipstart == 0)
			rm->rm_skipstart = 1;
	}

	zio->io_vsd = rm;
	zio->io_vsd_ops = &vdev_raidz_vsd_ops;

	/* init RAIDZ parity ops */
	rm->rm_ops = vdev_raidz_math_get_ops();

	return (rm);
}

struct pqr_struct {
	uint64_t *p;
	uint64_t *q;
	uint64_t *r;
};

static int
vdev_raidz_p_func(void *buf, size_t size, void *private)
{
	struct pqr_struct *pqr = private;
	const uint64_t *src = buf;
	int i, cnt = size / sizeof (src[0]);

	ASSERT(pqr->p && !pqr->q && !pqr->r);

	for (i = 0; i < cnt; i++, src++, pqr->p++)
		*pqr->p ^= *src;

	return (0);
}

static int
vdev_raidz_pq_func(void *buf, size_t size, void *private)
{
	struct pqr_struct *pqr = private;
	const uint64_t *src = buf;
	uint64_t mask;
	int i, cnt = size / sizeof (src[0]);

	ASSERT(pqr->p && pqr->q && !pqr->r);

	for (i = 0; i < cnt; i++, src++, pqr->p++, pqr->q++) {
		*pqr->p ^= *src;
		VDEV_RAIDZ_64MUL_2(*pqr->q, mask);
		*pqr->q ^= *src;
	}

	return (0);
}

static int
vdev_raidz_pqr_func(void *buf, size_t size, void *private)
{
	struct pqr_struct *pqr = private;
	const uint64_t *src = buf;
	uint64_t mask;
	int i, cnt = size / sizeof (src[0]);

	ASSERT(pqr->p && pqr->q && pqr->r);

	for (i = 0; i < cnt; i++, src++, pqr->p++, pqr->q++, pqr->r++) {
		*pqr->p ^= *src;
		VDEV_RAIDZ_64MUL_2(*pqr->q, mask);
		*pqr->q ^= *src;
		VDEV_RAIDZ_64MUL_4(*pqr->r, mask);
		*pqr->r ^= *src;
	}

	return (0);
}
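
/*
 * Illustrative note: each iteration above performs one step of the
 * Horner-style evaluation described in the block comment at the top of this
 * file, eight field elements at a time (one 64-bit word). The first data
 * column is copied directly into P and Q by the callers below; every later
 * column c then updates Q to Q * 2 ^ D_c, so with two data columns Q ends up
 * as 2 * D_0 + D_1, and R is accumulated the same way with a multiplier of 4.
 */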

static void
vdev_raidz_generate_parity_p(raidz_map_t *rm)
{
	uint64_t *p;
	int c;
	abd_t *src;

	for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
		src = rm->rm_col[c].rc_abd;
		p = abd_to_buf(rm->rm_col[VDEV_RAIDZ_P].rc_abd);

		if (c == rm->rm_firstdatacol) {
			abd_copy_to_buf(p, src, rm->rm_col[c].rc_size);
		} else {
			struct pqr_struct pqr = { p, NULL, NULL };
			(void) abd_iterate_func(src, 0, rm->rm_col[c].rc_size,
			    vdev_raidz_p_func, &pqr);
		}
	}
}

static void
vdev_raidz_generate_parity_pq(raidz_map_t *rm)
{
	uint64_t *p, *q, pcnt, ccnt, mask, i;
	int c;
	abd_t *src;

	pcnt = rm->rm_col[VDEV_RAIDZ_P].rc_size / sizeof (p[0]);
	ASSERT(rm->rm_col[VDEV_RAIDZ_P].rc_size ==
	    rm->rm_col[VDEV_RAIDZ_Q].rc_size);

	for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
		src = rm->rm_col[c].rc_abd;
		p = abd_to_buf(rm->rm_col[VDEV_RAIDZ_P].rc_abd);
		q = abd_to_buf(rm->rm_col[VDEV_RAIDZ_Q].rc_abd);

		ccnt = rm->rm_col[c].rc_size / sizeof (p[0]);

		if (c == rm->rm_firstdatacol) {
			ASSERT(ccnt == pcnt || ccnt == 0);
			abd_copy_to_buf(p, src, rm->rm_col[c].rc_size);
			(void) memcpy(q, p, rm->rm_col[c].rc_size);

			for (i = ccnt; i < pcnt; i++) {
				p[i] = 0;
				q[i] = 0;
			}
		} else {
			struct pqr_struct pqr = { p, q, NULL };

			ASSERT(ccnt <= pcnt);
			(void) abd_iterate_func(src, 0, rm->rm_col[c].rc_size,
			    vdev_raidz_pq_func, &pqr);

			/*
			 * Treat short columns as though they are full of 0s.
			 * Note that there's therefore nothing needed for P.
			 */
			for (i = ccnt; i < pcnt; i++) {
				VDEV_RAIDZ_64MUL_2(q[i], mask);
			}
		}
	}
}

static void
vdev_raidz_generate_parity_pqr(raidz_map_t *rm)
{
	uint64_t *p, *q, *r, pcnt, ccnt, mask, i;
	int c;
	abd_t *src;

	pcnt = rm->rm_col[VDEV_RAIDZ_P].rc_size / sizeof (p[0]);
	ASSERT(rm->rm_col[VDEV_RAIDZ_P].rc_size ==
	    rm->rm_col[VDEV_RAIDZ_Q].rc_size);
	ASSERT(rm->rm_col[VDEV_RAIDZ_P].rc_size ==
	    rm->rm_col[VDEV_RAIDZ_R].rc_size);

	for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
		src = rm->rm_col[c].rc_abd;
		p = abd_to_buf(rm->rm_col[VDEV_RAIDZ_P].rc_abd);
		q = abd_to_buf(rm->rm_col[VDEV_RAIDZ_Q].rc_abd);
		r = abd_to_buf(rm->rm_col[VDEV_RAIDZ_R].rc_abd);

		ccnt = rm->rm_col[c].rc_size / sizeof (p[0]);

		if (c == rm->rm_firstdatacol) {
			ASSERT(ccnt == pcnt || ccnt == 0);
			abd_copy_to_buf(p, src, rm->rm_col[c].rc_size);
			(void) memcpy(q, p, rm->rm_col[c].rc_size);
			(void) memcpy(r, p, rm->rm_col[c].rc_size);

			for (i = ccnt; i < pcnt; i++) {
				p[i] = 0;
				q[i] = 0;
				r[i] = 0;
			}
		} else {
			struct pqr_struct pqr = { p, q, r };

			ASSERT(ccnt <= pcnt);
			(void) abd_iterate_func(src, 0, rm->rm_col[c].rc_size,
			    vdev_raidz_pqr_func, &pqr);

			/*
			 * Treat short columns as though they are full of 0s.
			 * Note that there's therefore nothing needed for P.
			 */
			for (i = ccnt; i < pcnt; i++) {
				VDEV_RAIDZ_64MUL_2(q[i], mask);
				VDEV_RAIDZ_64MUL_4(r[i], mask);
			}
		}
	}
}

/*
 * Generate RAID parity in the first virtual columns according to the number of
 * parity columns available.
 */
void
vdev_raidz_generate_parity(raidz_map_t *rm)
{
	/* Generate using the new math implementation */
	if (vdev_raidz_math_generate(rm) != RAIDZ_ORIGINAL_IMPL)
		return;

	switch (rm->rm_firstdatacol) {
	case 1:
		vdev_raidz_generate_parity_p(rm);
		break;
	case 2:
		vdev_raidz_generate_parity_pq(rm);
		break;
	case 3:
		vdev_raidz_generate_parity_pqr(rm);
		break;
	default:
		cmn_err(CE_PANIC, "invalid RAID-Z configuration");
	}
}

/* ARGSUSED */
static int
vdev_raidz_reconst_p_func(void *dbuf, void *sbuf, size_t size, void *private)
{
	uint64_t *dst = dbuf;
	uint64_t *src = sbuf;
	int cnt = size / sizeof (src[0]);

	for (int i = 0; i < cnt; i++) {
		dst[i] ^= src[i];
	}

	return (0);
}

/* ARGSUSED */
static int
vdev_raidz_reconst_q_pre_func(void *dbuf, void *sbuf, size_t size,
    void *private)
{
	uint64_t *dst = dbuf;
	uint64_t *src = sbuf;
	uint64_t mask;
	int cnt = size / sizeof (dst[0]);

	for (int i = 0; i < cnt; i++, dst++, src++) {
		VDEV_RAIDZ_64MUL_2(*dst, mask);
		*dst ^= *src;
	}

	return (0);
}
|
|
|
|
|
|
|
|
/* ARGSUSED */
|
|
|
|
static int
|
|
|
|
vdev_raidz_reconst_q_pre_tail_func(void *buf, size_t size, void *private)
|
|
|
|
{
|
|
|
|
uint64_t *dst = buf;
|
|
|
|
uint64_t mask;
|
|
|
|
int cnt = size / sizeof (dst[0]);
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < cnt; i++, dst++) {
|
2016-07-22 18:52:49 +03:00
|
|
|
/* same operation as vdev_raidz_reconst_q_pre_func() on dst */
|
|
|
|
VDEV_RAIDZ_64MUL_2(*dst, mask);
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
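/*
 * Sketch of the SWAR trick that the VDEV_RAIDZ_64MUL_2() calls above are
 * assumed to perform: treat a 64-bit word as eight packed GF(2^8) bytes
 * and multiply them all by 2 at once.  Each byte is shifted left, and the
 * bytes whose high bit was set get the 0x1d reduction polynomial folded
 * back in.  Hypothetical standalone form:
 */
static inline uint64_t
example_gf64_mul2(uint64_t x)
{
	/* The high bit of every byte marks where a reduction is needed. */
	uint64_t mask = x & 0x8080808080808080ULL;

	mask = (mask >> 7) * 0x1d;
	return (((x << 1) & 0xfefefefefefefefeULL) ^ mask);
}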
|
|
|
|
|
|
|
|
struct reconst_q_struct {
|
|
|
|
uint64_t *q;
|
|
|
|
int exp;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
vdev_raidz_reconst_q_post_func(void *buf, size_t size, void *private)
|
|
|
|
{
|
|
|
|
struct reconst_q_struct *rq = private;
|
|
|
|
uint64_t *dst = buf;
|
|
|
|
int cnt = size / sizeof (dst[0]);
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < cnt; i++, dst++, rq->q++) {
|
2016-07-22 18:52:49 +03:00
|
|
|
int j;
|
|
|
|
uint8_t *b;
|
|
|
|
|
|
|
|
*dst ^= *rq->q;
|
|
|
|
for (j = 0, b = (uint8_t *)dst; j < 8; j++, b++) {
|
|
|
|
*b = vdev_raidz_exp2(*b, rq->exp);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct reconst_pq_struct {
|
|
|
|
uint8_t *p;
|
|
|
|
uint8_t *q;
|
|
|
|
uint8_t *pxy;
|
|
|
|
uint8_t *qxy;
|
|
|
|
int aexp;
|
|
|
|
int bexp;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int
|
|
|
|
vdev_raidz_reconst_pq_func(void *xbuf, void *ybuf, size_t size, void *private)
|
|
|
|
{
|
|
|
|
struct reconst_pq_struct *rpq = private;
|
|
|
|
uint8_t *xd = xbuf;
|
|
|
|
uint8_t *yd = ybuf;
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < size;
|
2016-07-22 18:52:49 +03:00
|
|
|
i++, rpq->p++, rpq->q++, rpq->pxy++, rpq->qxy++, xd++, yd++) {
|
|
|
|
*xd = vdev_raidz_exp2(*rpq->p ^ *rpq->pxy, rpq->aexp) ^
|
|
|
|
vdev_raidz_exp2(*rpq->q ^ *rpq->qxy, rpq->bexp);
|
|
|
|
*yd = *rpq->p ^ *rpq->pxy ^ *xd;
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
vdev_raidz_reconst_pq_tail_func(void *xbuf, size_t size, void *private)
|
|
|
|
{
|
|
|
|
struct reconst_pq_struct *rpq = private;
|
|
|
|
uint8_t *xd = xbuf;
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < size;
|
2016-07-22 18:52:49 +03:00
|
|
|
i++, rpq->p++, rpq->q++, rpq->pxy++, rpq->qxy++, xd++) {
|
|
|
|
/* same operation as vdev_raidz_reconst_pq_func() on xd */
|
|
|
|
*xd = vdev_raidz_exp2(*rpq->p ^ *rpq->pxy, rpq->aexp) ^
|
|
|
|
vdev_raidz_exp2(*rpq->q ^ *rpq->qxy, rpq->bexp);
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
static int
|
|
|
|
vdev_raidz_reconstruct_p(raidz_map_t *rm, int *tgts, int ntgts)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2009-08-18 22:43:27 +04:00
|
|
|
int x = tgts[0];
|
2008-11-20 23:01:55 +03:00
|
|
|
int c;
|
2016-07-22 18:52:49 +03:00
|
|
|
abd_t *dst, *src;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
ASSERT(ntgts == 1);
|
|
|
|
ASSERT(x >= rm->rm_firstdatacol);
|
|
|
|
ASSERT(x < rm->rm_cols);
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
ASSERT(rm->rm_col[x].rc_size <= rm->rm_col[VDEV_RAIDZ_P].rc_size);
|
|
|
|
ASSERT(rm->rm_col[x].rc_size > 0);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
src = rm->rm_col[VDEV_RAIDZ_P].rc_abd;
|
|
|
|
dst = rm->rm_col[x].rc_abd;
|
|
|
|
|
|
|
|
abd_copy_from_buf(dst, abd_to_buf(src), rm->rm_col[x].rc_size);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
|
2016-07-22 18:52:49 +03:00
|
|
|
uint64_t size = MIN(rm->rm_col[x].rc_size,
|
|
|
|
rm->rm_col[c].rc_size);
|
|
|
|
|
|
|
|
src = rm->rm_col[c].rc_abd;
|
|
|
|
dst = rm->rm_col[x].rc_abd;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
if (c == x)
|
|
|
|
continue;
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
(void) abd_iterate_func2(dst, src, 0, 0, size,
|
|
|
|
vdev_raidz_reconst_p_func, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
return (1 << VDEV_RAIDZ_P);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
static int
|
|
|
|
vdev_raidz_reconstruct_q(raidz_map_t *rm, int *tgts, int ntgts)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2009-08-18 22:43:27 +04:00
|
|
|
int x = tgts[0];
|
2016-07-22 18:52:49 +03:00
|
|
|
int c, exp;
|
|
|
|
abd_t *dst, *src;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
ASSERT(ntgts == 1);
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
ASSERT(rm->rm_col[x].rc_size <= rm->rm_col[VDEV_RAIDZ_Q].rc_size);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
|
2016-07-22 18:52:49 +03:00
|
|
|
uint64_t size = (c == x) ? 0 : MIN(rm->rm_col[x].rc_size,
|
|
|
|
rm->rm_col[c].rc_size);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
src = rm->rm_col[c].rc_abd;
|
|
|
|
dst = rm->rm_col[x].rc_abd;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
if (c == rm->rm_firstdatacol) {
|
2016-07-22 18:52:49 +03:00
|
|
|
abd_copy(dst, src, size);
|
|
|
|
if (rm->rm_col[x].rc_size > size)
|
|
|
|
abd_zero_off(dst, size,
|
|
|
|
rm->rm_col[x].rc_size - size);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
} else {
|
2016-07-22 18:52:49 +03:00
|
|
|
ASSERT3U(size, <=, rm->rm_col[x].rc_size);
|
|
|
|
(void) abd_iterate_func2(dst, src, 0, 0, size,
|
|
|
|
vdev_raidz_reconst_q_pre_func, NULL);
|
|
|
|
(void) abd_iterate_func(dst,
|
|
|
|
size, rm->rm_col[x].rc_size - size,
|
|
|
|
vdev_raidz_reconst_q_pre_tail_func, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
src = rm->rm_col[VDEV_RAIDZ_Q].rc_abd;
|
|
|
|
dst = rm->rm_col[x].rc_abd;
|
2008-11-20 23:01:55 +03:00
|
|
|
exp = 255 - (rm->rm_cols - 1 - x);
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
struct reconst_q_struct rq = { abd_to_buf(src), exp };
|
2016-07-22 18:52:49 +03:00
|
|
|
(void) abd_iterate_func(dst, 0, rm->rm_col[x].rc_size,
|
|
|
|
vdev_raidz_reconst_q_post_func, &rq);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
return (1 << VDEV_RAIDZ_Q);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
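/*
 * Sketch of the finite-field identity behind "exp = 255 - (rm->rm_cols -
 * 1 - x)" above: the non-zero elements of GF(2^8) form a cyclic group of
 * order 255, so multiplying by 2^(255 - k) is the same as dividing by
 * 2^k.  A hypothetical helper built on the same log/exp tables used in
 * this file:
 */
static inline uint8_t
example_gf_div_pow2(uint8_t a, int k)
{
	if (a == 0)
		return (0);
	return (vdev_raidz_pow2[(vdev_raidz_log2[a] + 255 - k) % 255]);
}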
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
static int
|
|
|
|
vdev_raidz_reconstruct_pq(raidz_map_t *rm, int *tgts, int ntgts)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2016-07-22 18:52:49 +03:00
|
|
|
uint8_t *p, *q, *pxy, *qxy, tmp, a, b, aexp, bexp;
|
|
|
|
abd_t *pdata, *qdata;
|
|
|
|
uint64_t xsize, ysize;
|
2009-08-18 22:43:27 +04:00
|
|
|
int x = tgts[0];
|
|
|
|
int y = tgts[1];
|
2016-07-22 18:52:49 +03:00
|
|
|
abd_t *xd, *yd;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
ASSERT(ntgts == 2);
|
2008-11-20 23:01:55 +03:00
|
|
|
ASSERT(x < y);
|
|
|
|
ASSERT(x >= rm->rm_firstdatacol);
|
|
|
|
ASSERT(y < rm->rm_cols);
|
|
|
|
|
|
|
|
ASSERT(rm->rm_col[x].rc_size >= rm->rm_col[y].rc_size);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Move the parity data aside -- we're going to compute parity as
|
|
|
|
* though columns x and y were full of zeros -- Pxy and Qxy. We want to
|
|
|
|
* reuse the parity generation mechanism without trashing the actual
|
|
|
|
* parity so we make those columns appear to be full of zeros by
|
|
|
|
* setting their lengths to zero.
|
|
|
|
*/
|
2016-07-22 18:52:49 +03:00
|
|
|
pdata = rm->rm_col[VDEV_RAIDZ_P].rc_abd;
|
|
|
|
qdata = rm->rm_col[VDEV_RAIDZ_Q].rc_abd;
|
2008-11-20 23:01:55 +03:00
|
|
|
xsize = rm->rm_col[x].rc_size;
|
|
|
|
ysize = rm->rm_col[y].rc_size;
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
rm->rm_col[VDEV_RAIDZ_P].rc_abd =
|
|
|
|
abd_alloc_linear(rm->rm_col[VDEV_RAIDZ_P].rc_size, B_TRUE);
|
|
|
|
rm->rm_col[VDEV_RAIDZ_Q].rc_abd =
|
|
|
|
abd_alloc_linear(rm->rm_col[VDEV_RAIDZ_Q].rc_size, B_TRUE);
|
2008-11-20 23:01:55 +03:00
|
|
|
rm->rm_col[x].rc_size = 0;
|
|
|
|
rm->rm_col[y].rc_size = 0;
|
|
|
|
|
|
|
|
vdev_raidz_generate_parity_pq(rm);
|
|
|
|
|
|
|
|
rm->rm_col[x].rc_size = xsize;
|
|
|
|
rm->rm_col[y].rc_size = ysize;
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
p = abd_to_buf(pdata);
|
|
|
|
q = abd_to_buf(qdata);
|
|
|
|
pxy = abd_to_buf(rm->rm_col[VDEV_RAIDZ_P].rc_abd);
|
|
|
|
qxy = abd_to_buf(rm->rm_col[VDEV_RAIDZ_Q].rc_abd);
|
|
|
|
xd = rm->rm_col[x].rc_abd;
|
|
|
|
yd = rm->rm_col[y].rc_abd;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* We now have:
|
|
|
|
* Pxy = P + D_x + D_y
|
|
|
|
* Qxy = Q + 2^(ndevs - 1 - x) * D_x + 2^(ndevs - 1 - y) * D_y
|
|
|
|
*
|
|
|
|
* We can then solve for D_x:
|
|
|
|
* D_x = A * (P + Pxy) + B * (Q + Qxy)
|
|
|
|
* where
|
|
|
|
* A = 2^(x - y) * (2^(x - y) + 1)^-1
|
|
|
|
* B = 2^(ndevs - 1 - x) * (2^(x - y) + 1)^-1
|
|
|
|
*
|
|
|
|
* With D_x in hand, we can easily solve for D_y:
|
|
|
|
* D_y = P + Pxy + D_x
|
|
|
|
*/
|
|
|
|
|
|
|
|
a = vdev_raidz_pow2[255 + x - y];
|
|
|
|
b = vdev_raidz_pow2[255 - (rm->rm_cols - 1 - x)];
|
|
|
|
tmp = 255 - vdev_raidz_log2[a ^ 1];
|
|
|
|
|
|
|
|
aexp = vdev_raidz_log2[vdev_raidz_exp2(a, tmp)];
|
|
|
|
bexp = vdev_raidz_log2[vdev_raidz_exp2(b, tmp)];
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
ASSERT3U(xsize, >=, ysize);
|
2017-11-04 23:25:13 +03:00
|
|
|
struct reconst_pq_struct rpq = { p, q, pxy, qxy, aexp, bexp };
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
(void) abd_iterate_func2(xd, yd, 0, 0, ysize,
|
|
|
|
vdev_raidz_reconst_pq_func, &rpq);
|
|
|
|
(void) abd_iterate_func(xd, ysize, xsize - ysize,
|
|
|
|
vdev_raidz_reconst_pq_tail_func, &rpq);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
abd_free(rm->rm_col[VDEV_RAIDZ_P].rc_abd);
|
|
|
|
abd_free(rm->rm_col[VDEV_RAIDZ_Q].rc_abd);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Restore the saved parity data.
|
|
|
|
*/
|
2016-07-22 18:52:49 +03:00
|
|
|
rm->rm_col[VDEV_RAIDZ_P].rc_abd = pdata;
|
|
|
|
rm->rm_col[VDEV_RAIDZ_Q].rc_abd = qdata;
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
return ((1 << VDEV_RAIDZ_P) | (1 << VDEV_RAIDZ_Q));
|
|
|
|
}
|
|
|
|
|
|
|
|
/* BEGIN CSTYLED */
|
|
|
|
/*
|
|
|
|
* In the general case of reconstruction, we must solve the system of linear
|
2020-06-10 07:24:09 +03:00
|
|
|
* equations defined by the coefficients used to generate parity as well as
|
2009-08-18 22:43:27 +04:00
|
|
|
* the contents of the data and parity disks. This can be expressed with
|
|
|
|
* vectors for the original data (D) and the actual data (d) and parity (p)
|
|
|
|
* and a matrix composed of the identity matrix (I) and a dispersal matrix (V):
|
|
|
|
*
|
|
|
|
* __ __ __ __
|
|
|
|
* | | __ __ | p_0 |
|
|
|
|
* | V | | D_0 | | p_m-1 |
|
|
|
|
* | | x | : | = | d_0 |
|
|
|
|
* | I | | D_n-1 | | : |
|
|
|
|
* | | ~~ ~~ | d_n-1 |
|
|
|
|
* ~~ ~~ ~~ ~~
|
|
|
|
*
|
|
|
|
* I is simply a square identity matrix of size n, and V is a Vandermonde
|
2020-06-10 07:24:09 +03:00
|
|
|
* matrix defined by the coefficients we chose for the various parity columns
|
2009-08-18 22:43:27 +04:00
|
|
|
* (1, 2, 4). Note that these values were chosen for simplicity and speedy
|
|
|
|
* computation, as well as for linear separability.
|
|
|
|
*
|
|
|
|
* __ __ __ __
|
|
|
|
* | 1 .. 1 1 1 | | p_0 |
|
|
|
|
* | 2^n-1 .. 4 2 1 | __ __ | : |
|
|
|
|
* | 4^n-1 .. 16 4 1 | | D_0 | | p_m-1 |
|
|
|
|
* | 1 .. 0 0 0 | | D_1 | | d_0 |
|
|
|
|
* | 0 .. 0 0 0 | x | D_2 | = | d_1 |
|
|
|
|
* | : : : : | | : | | d_2 |
|
|
|
|
* | 0 .. 1 0 0 | | D_n-1 | | : |
|
|
|
|
* | 0 .. 0 1 0 | ~~ ~~ | : |
|
|
|
|
* | 0 .. 0 0 1 | | d_n-1 |
|
|
|
|
* ~~ ~~ ~~ ~~
|
|
|
|
*
|
|
|
|
* Note that I, V, d, and p are known. To compute D, we must invert the
|
|
|
|
* matrix and use the known data and parity values to reconstruct the unknown
|
|
|
|
* data values. We begin by removing the rows in V|I and d|p that correspond
|
|
|
|
* to failed or missing columns; we then make V|I square (n x n) and d|p
|
|
|
|
* sized n by removing rows corresponding to unused parity from the bottom up
|
|
|
|
* to generate (V|I)' and (d|p)'. We can then generate the inverse of (V|I)'
|
|
|
|
* using Gauss-Jordan elimination. In the example below we use m=3 parity
|
|
|
|
* columns, n=8 data columns, with errors in d_1, d_2, and p_1:
|
|
|
|
* __ __
|
|
|
|
* | 1 1 1 1 1 1 1 1 |
|
|
|
|
* | 128 64 32 16 8 4 2 1 | <-----+-+-- missing disks
|
|
|
|
* | 19 205 116 29 64 16 4 1 | / /
|
|
|
|
* | 1 0 0 0 0 0 0 0 | / /
|
|
|
|
* | 0 1 0 0 0 0 0 0 | <--' /
|
|
|
|
* (V|I) = | 0 0 1 0 0 0 0 0 | <---'
|
|
|
|
* | 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 1 1 1 1 1 1 1 |
|
|
|
|
* | 128 64 32 16 8 4 2 1 |
|
|
|
|
* | 19 205 116 29 64 16 4 1 |
|
|
|
|
* | 1 0 0 0 0 0 0 0 |
|
|
|
|
* | 0 1 0 0 0 0 0 0 |
|
|
|
|
* (V|I)' = | 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
*
|
|
|
|
* Here we employ Gauss-Jordan elimination to find the inverse of (V|I)'. We
|
|
|
|
* have carefully chosen the seed values 1, 2, and 4 to ensure that this
|
|
|
|
* matrix is not singular.
|
|
|
|
* __ __
|
|
|
|
* | 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 |
|
|
|
|
* | 19 205 116 29 64 16 4 1 0 1 0 0 0 0 0 0 |
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 |
|
|
|
|
* | 19 205 116 29 64 16 4 1 0 1 0 0 0 0 0 0 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 |
|
|
|
|
* | 0 205 116 0 0 0 0 0 0 1 19 29 64 16 4 1 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 |
|
|
|
|
* | 0 0 185 0 0 0 0 0 205 1 222 208 141 221 201 204 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1 |
|
|
|
|
* | 0 0 1 0 0 0 0 0 166 100 4 40 158 168 216 209 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 0 1 0 0 0 0 0 0 167 100 5 41 159 169 217 208 |
|
|
|
|
* | 0 0 1 0 0 0 0 0 166 100 4 40 158 168 216 209 |
|
|
|
|
* | 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
* __ __
|
|
|
|
* | 0 0 1 0 0 0 0 0 |
|
|
|
|
* | 167 100 5 41 159 169 217 208 |
|
|
|
|
* | 166 100 4 40 158 168 216 209 |
|
|
|
|
* (V|I)'^-1 = | 0 0 0 1 0 0 0 0 |
|
|
|
|
* | 0 0 0 0 1 0 0 0 |
|
|
|
|
* | 0 0 0 0 0 1 0 0 |
|
|
|
|
* | 0 0 0 0 0 0 1 0 |
|
|
|
|
* | 0 0 0 0 0 0 0 1 |
|
|
|
|
* ~~ ~~
|
|
|
|
*
|
|
|
|
* We can then simply compute D = (V|I)'^-1 x (d|p)' to discover the values
|
|
|
|
* of the missing data.
|
|
|
|
*
|
|
|
|
* As is apparent from the example above, the only non-trivial rows in the
|
|
|
|
* inverse matrix correspond to the data disks that we're trying to
|
|
|
|
* reconstruct. Indeed, those are the only rows we need as the others would
|
|
|
|
* only be useful for reconstructing data known or assumed to be valid. For
|
|
|
|
* that reason, we only build the coefficients in the rows that correspond to
|
|
|
|
* targeted columns.
|
|
|
|
*/
|
|
|
|
/* END CSTYLED */
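/*
 * Minimal sketch (hypothetical helper, not part of the reconstruction
 * path) of the row operation that vdev_raidz_matrix_invert() below
 * repeats during Gauss-Jordan elimination: cancel column "col" of row
 * "dst" against pivot row "src", entirely in GF(2^8).  Scaling becomes an
 * exponent offset through vdev_raidz_exp2()/vdev_raidz_log2[], and row
 * "subtraction" is XOR; src[col] is assumed to be non-zero.
 */
static void
example_gf_row_eliminate(uint8_t *dst, const uint8_t *src, int n, int col)
{
	uint8_t flog;
	int j;

	if (dst[col] == 0)
		return;

	/* factor = dst[col] / src[col]; keep its discrete log. */
	flog = vdev_raidz_log2[vdev_raidz_exp2(dst[col],
	    255 - vdev_raidz_log2[src[col]])];

	/* dst += factor * src; afterwards dst[col] is zero. */
	for (j = 0; j < n; j++)
		dst[j] ^= vdev_raidz_exp2(src[j], flog);
}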
|
|
|
|
|
|
|
|
static void
|
|
|
|
vdev_raidz_matrix_init(raidz_map_t *rm, int n, int nmap, int *map,
|
|
|
|
uint8_t **rows)
|
|
|
|
{
|
|
|
|
int i, j;
|
|
|
|
int pow;
|
|
|
|
|
|
|
|
ASSERT(n == rm->rm_cols - rm->rm_firstdatacol);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Fill in the missing rows of interest.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmap; i++) {
|
|
|
|
ASSERT3S(0, <=, map[i]);
|
|
|
|
ASSERT3S(map[i], <=, 2);
|
|
|
|
|
|
|
|
pow = map[i] * n;
|
|
|
|
if (pow > 255)
|
|
|
|
pow -= 255;
|
|
|
|
ASSERT(pow <= 255);
|
|
|
|
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
pow -= map[i];
|
|
|
|
if (pow < 0)
|
|
|
|
pow += 255;
|
|
|
|
rows[i][j] = vdev_raidz_pow2[pow];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
vdev_raidz_matrix_invert(raidz_map_t *rm, int n, int nmissing, int *missing,
|
|
|
|
uint8_t **rows, uint8_t **invrows, const uint8_t *used)
|
|
|
|
{
|
|
|
|
int i, j, ii, jj;
|
|
|
|
uint8_t log;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Assert that the first nmissing entries from the array of used
|
|
|
|
* columns correspond to parity columns and that subsequent entries
|
|
|
|
* correspond to data columns.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
ASSERT3S(used[i], <, rm->rm_firstdatacol);
|
|
|
|
}
|
|
|
|
for (; i < n; i++) {
|
|
|
|
ASSERT3S(used[i], >=, rm->rm_firstdatacol);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* First initialize the storage where we'll compute the inverse rows.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
invrows[i][j] = (i == j) ? 1 : 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Subtract all trivial rows from the rows of consequence.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
for (j = nmissing; j < n; j++) {
|
|
|
|
ASSERT3U(used[j], >=, rm->rm_firstdatacol);
|
|
|
|
jj = used[j] - rm->rm_firstdatacol;
|
|
|
|
ASSERT3S(jj, <, n);
|
|
|
|
invrows[i][j] = rows[i][jj];
|
|
|
|
rows[i][jj] = 0;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For each of the rows of interest, we must normalize it and subtract
|
|
|
|
* a multiple of it from the other rows.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
for (j = 0; j < missing[i]; j++) {
|
2013-05-11 01:17:03 +04:00
|
|
|
ASSERT0(rows[i][j]);
|
2009-08-18 22:43:27 +04:00
|
|
|
}
|
|
|
|
ASSERT3U(rows[i][missing[i]], !=, 0);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Compute the inverse of the first element and multiply each
|
|
|
|
* element in the row by that value.
|
|
|
|
*/
|
|
|
|
log = 255 - vdev_raidz_log2[rows[i][missing[i]]];
|
|
|
|
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
rows[i][j] = vdev_raidz_exp2(rows[i][j], log);
|
|
|
|
invrows[i][j] = vdev_raidz_exp2(invrows[i][j], log);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (ii = 0; ii < nmissing; ii++) {
|
|
|
|
if (i == ii)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
ASSERT3U(rows[ii][missing[i]], !=, 0);
|
|
|
|
|
|
|
|
log = vdev_raidz_log2[rows[ii][missing[i]]];
|
|
|
|
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
rows[ii][j] ^=
|
|
|
|
vdev_raidz_exp2(rows[i][j], log);
|
|
|
|
invrows[ii][j] ^=
|
|
|
|
vdev_raidz_exp2(invrows[i][j], log);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Verify that the data that is left in the rows is properly part of
|
|
|
|
* an identity matrix.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
if (j == missing[i]) {
|
|
|
|
ASSERT3U(rows[i][j], ==, 1);
|
|
|
|
} else {
|
2013-05-11 01:17:03 +04:00
|
|
|
ASSERT0(rows[i][j]);
|
2009-08-18 22:43:27 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
vdev_raidz_matrix_reconstruct(raidz_map_t *rm, int n, int nmissing,
|
|
|
|
int *missing, uint8_t **invrows, const uint8_t *used)
|
|
|
|
{
|
|
|
|
int i, j, x, cc, c;
|
|
|
|
uint8_t *src;
|
|
|
|
uint64_t ccount;
|
2016-07-26 22:08:51 +03:00
|
|
|
uint8_t *dst[VDEV_RAIDZ_MAXPARITY] = { NULL };
|
|
|
|
uint64_t dcount[VDEV_RAIDZ_MAXPARITY] = { 0 };
|
2013-02-11 10:21:05 +04:00
|
|
|
uint8_t log = 0;
|
|
|
|
uint8_t val;
|
2009-08-18 22:43:27 +04:00
|
|
|
int ll;
|
|
|
|
uint8_t *invlog[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
uint8_t *p, *pp;
|
|
|
|
size_t psize;
|
|
|
|
|
|
|
|
psize = sizeof (invlog[0][0]) * n * nmissing;
|
2014-11-21 03:09:39 +03:00
|
|
|
p = kmem_alloc(psize, KM_SLEEP);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
for (pp = p, i = 0; i < nmissing; i++) {
|
|
|
|
invlog[i] = pp;
|
|
|
|
pp += n;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < nmissing; i++) {
|
|
|
|
for (j = 0; j < n; j++) {
|
|
|
|
ASSERT3U(invrows[i][j], !=, 0);
|
|
|
|
invlog[i][j] = vdev_raidz_log2[invrows[i][j]];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < n; i++) {
|
|
|
|
c = used[i];
|
|
|
|
ASSERT3U(c, <, rm->rm_cols);
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
src = abd_to_buf(rm->rm_col[c].rc_abd);
|
2009-08-18 22:43:27 +04:00
|
|
|
ccount = rm->rm_col[c].rc_size;
|
|
|
|
for (j = 0; j < nmissing; j++) {
|
|
|
|
cc = missing[j] + rm->rm_firstdatacol;
|
|
|
|
ASSERT3U(cc, >=, rm->rm_firstdatacol);
|
|
|
|
ASSERT3U(cc, <, rm->rm_cols);
|
|
|
|
ASSERT3U(cc, !=, c);
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
dst[j] = abd_to_buf(rm->rm_col[cc].rc_abd);
|
2009-08-18 22:43:27 +04:00
|
|
|
dcount[j] = rm->rm_col[cc].rc_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT(ccount >= rm->rm_col[missing[0]].rc_size || i > 0);
|
|
|
|
|
|
|
|
for (x = 0; x < ccount; x++, src++) {
|
|
|
|
if (*src != 0)
|
|
|
|
log = vdev_raidz_log2[*src];
|
|
|
|
|
|
|
|
for (cc = 0; cc < nmissing; cc++) {
|
|
|
|
if (x >= dcount[cc])
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (*src == 0) {
|
|
|
|
val = 0;
|
|
|
|
} else {
|
|
|
|
if ((ll = log + invlog[cc][i]) >= 255)
|
|
|
|
ll -= 255;
|
|
|
|
val = vdev_raidz_pow2[ll];
|
|
|
|
}
|
|
|
|
|
|
|
|
if (i == 0)
|
|
|
|
dst[cc][x] = val;
|
|
|
|
else
|
|
|
|
dst[cc][x] ^= val;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
kmem_free(p, psize);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
vdev_raidz_reconstruct_general(raidz_map_t *rm, int *tgts, int ntgts)
|
|
|
|
{
|
|
|
|
int n, i, c, t, tt;
|
|
|
|
int nmissing_rows;
|
|
|
|
int missing_rows[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
int parity_map[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
|
|
|
|
uint8_t *p, *pp;
|
|
|
|
size_t psize;
|
|
|
|
|
|
|
|
uint8_t *rows[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
uint8_t *invrows[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
uint8_t *used;
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
abd_t **bufs = NULL;
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
int code = 0;
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
/*
|
|
|
|
* Matrix reconstruction can't use scatter ABDs yet, so we allocate
|
|
|
|
* temporary linear ABDs.
|
|
|
|
*/
|
|
|
|
if (!abd_is_linear(rm->rm_col[rm->rm_firstdatacol].rc_abd)) {
|
|
|
|
bufs = kmem_alloc(rm->rm_cols * sizeof (abd_t *), KM_PUSHPAGE);
|
|
|
|
|
|
|
|
for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
|
|
|
|
raidz_col_t *col = &rm->rm_col[c];
|
|
|
|
|
|
|
|
bufs[c] = col->rc_abd;
|
|
|
|
col->rc_abd = abd_alloc_linear(col->rc_size, B_TRUE);
|
|
|
|
abd_copy(col->rc_abd, bufs[c], col->rc_size);
|
|
|
|
}
|
|
|
|
}
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
n = rm->rm_cols - rm->rm_firstdatacol;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Figure out which data columns are missing.
|
|
|
|
*/
|
|
|
|
nmissing_rows = 0;
|
|
|
|
for (t = 0; t < ntgts; t++) {
|
|
|
|
if (tgts[t] >= rm->rm_firstdatacol) {
|
|
|
|
missing_rows[nmissing_rows++] =
|
|
|
|
tgts[t] - rm->rm_firstdatacol;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Figure out which parity columns to use to help generate the missing
|
|
|
|
* data columns.
|
|
|
|
*/
|
|
|
|
for (tt = 0, c = 0, i = 0; i < nmissing_rows; c++) {
|
|
|
|
ASSERT(tt < ntgts);
|
|
|
|
ASSERT(c < rm->rm_firstdatacol);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Skip any targeted parity columns.
|
|
|
|
*/
|
|
|
|
if (c == tgts[tt]) {
|
|
|
|
tt++;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
code |= 1 << c;
|
|
|
|
|
|
|
|
parity_map[i] = c;
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT(code != 0);
|
|
|
|
ASSERT3U(code, <, 1 << VDEV_RAIDZ_MAXPARITY);
|
|
|
|
|
|
|
|
psize = (sizeof (rows[0][0]) + sizeof (invrows[0][0])) *
|
|
|
|
nmissing_rows * n + sizeof (used[0]) * n;
|
2014-11-21 03:09:39 +03:00
|
|
|
p = kmem_alloc(psize, KM_SLEEP);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
|
|
|
for (pp = p, i = 0; i < nmissing_rows; i++) {
|
|
|
|
rows[i] = pp;
|
|
|
|
pp += n;
|
|
|
|
invrows[i] = pp;
|
|
|
|
pp += n;
|
|
|
|
}
|
|
|
|
used = pp;
|
|
|
|
|
|
|
|
for (i = 0; i < nmissing_rows; i++) {
|
|
|
|
used[i] = parity_map[i];
|
|
|
|
}
|
|
|
|
|
|
|
|
for (tt = 0, c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
|
|
|
|
if (tt < nmissing_rows &&
|
|
|
|
c == missing_rows[tt] + rm->rm_firstdatacol) {
|
|
|
|
tt++;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT3S(i, <, n);
|
|
|
|
used[i] = c;
|
|
|
|
i++;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize the interesting rows of the matrix.
|
|
|
|
*/
|
|
|
|
vdev_raidz_matrix_init(rm, n, nmissing_rows, parity_map, rows);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Invert the matrix.
|
|
|
|
*/
|
|
|
|
vdev_raidz_matrix_invert(rm, n, nmissing_rows, missing_rows, rows,
|
|
|
|
invrows, used);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Reconstruct the missing data using the generated matrix.
|
|
|
|
*/
|
|
|
|
vdev_raidz_matrix_reconstruct(rm, n, nmissing_rows, missing_rows,
|
|
|
|
invrows, used);
|
|
|
|
|
|
|
|
kmem_free(p, psize);
|
|
|
|
|
2016-07-22 18:52:49 +03:00
|
|
|
/*
|
|
|
|
* Copy back from the temporary linear ABDs and free them.
|
|
|
|
*/
|
|
|
|
if (bufs) {
|
|
|
|
for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
|
|
|
|
raidz_col_t *col = &rm->rm_col[c];
|
|
|
|
|
|
|
|
abd_copy(bufs[c], col->rc_abd, col->rc_size);
|
|
|
|
abd_free(col->rc_abd);
|
|
|
|
col->rc_abd = bufs[c];
|
|
|
|
}
|
|
|
|
kmem_free(bufs, rm->rm_cols * sizeof (abd_t *));
|
|
|
|
}
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
return (code);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
int
|
|
|
|
vdev_raidz_reconstruct(raidz_map_t *rm, const int *t, int nt)
|
2009-08-18 22:43:27 +04:00
|
|
|
{
|
|
|
|
int tgts[VDEV_RAIDZ_MAXPARITY], *dt;
|
|
|
|
int ntgts;
|
2016-07-17 20:41:11 +03:00
|
|
|
int i, c, ret;
|
2009-08-18 22:43:27 +04:00
|
|
|
int code;
|
|
|
|
int nbadparity, nbaddata;
|
|
|
|
int parity_valid[VDEV_RAIDZ_MAXPARITY];
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The tgts list must already be sorted.
|
|
|
|
*/
|
|
|
|
for (i = 1; i < nt; i++) {
|
|
|
|
ASSERT(t[i] > t[i - 1]);
|
|
|
|
}
|
|
|
|
|
|
|
|
nbadparity = rm->rm_firstdatacol;
|
|
|
|
nbaddata = rm->rm_cols - nbadparity;
|
|
|
|
ntgts = 0;
|
|
|
|
for (i = 0, c = 0; c < rm->rm_cols; c++) {
|
|
|
|
if (c < rm->rm_firstdatacol)
|
|
|
|
parity_valid[c] = B_FALSE;
|
|
|
|
|
|
|
|
if (i < nt && c == t[i]) {
|
|
|
|
tgts[ntgts++] = c;
|
|
|
|
i++;
|
|
|
|
} else if (rm->rm_col[c].rc_error != 0) {
|
|
|
|
tgts[ntgts++] = c;
|
|
|
|
} else if (c >= rm->rm_firstdatacol) {
|
|
|
|
nbaddata--;
|
|
|
|
} else {
|
|
|
|
parity_valid[c] = B_TRUE;
|
|
|
|
nbadparity--;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT(ntgts >= nt);
|
|
|
|
ASSERT(nbaddata >= 0);
|
|
|
|
ASSERT(nbaddata + nbadparity == ntgts);
|
|
|
|
|
|
|
|
dt = &tgts[nbadparity];
|
|
|
|
|
2016-07-17 20:41:11 +03:00
|
|
|
/* Reconstruct using the new math implementation */
|
|
|
|
ret = vdev_raidz_math_reconstruct(rm, parity_valid, dt, nbaddata);
|
|
|
|
if (ret != RAIDZ_ORIGINAL_IMPL)
|
|
|
|
return (ret);
|
2016-04-25 11:04:31 +03:00
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
/*
|
|
|
|
* See if we can use any of our optimized reconstruction routines.
|
|
|
|
*/
|
2016-04-25 11:04:31 +03:00
|
|
|
switch (nbaddata) {
|
|
|
|
case 1:
|
|
|
|
if (parity_valid[VDEV_RAIDZ_P])
|
|
|
|
return (vdev_raidz_reconstruct_p(rm, dt, 1));
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
ASSERT(rm->rm_firstdatacol > 1);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
if (parity_valid[VDEV_RAIDZ_Q])
|
|
|
|
return (vdev_raidz_reconstruct_q(rm, dt, 1));
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
ASSERT(rm->rm_firstdatacol > 2);
|
|
|
|
break;
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
case 2:
|
|
|
|
ASSERT(rm->rm_firstdatacol > 1);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
if (parity_valid[VDEV_RAIDZ_P] &&
|
|
|
|
parity_valid[VDEV_RAIDZ_Q])
|
|
|
|
return (vdev_raidz_reconstruct_pq(rm, dt, 2));
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
ASSERT(rm->rm_firstdatacol > 2);
|
2009-08-18 22:43:27 +04:00
|
|
|
|
2016-04-25 11:04:31 +03:00
|
|
|
break;
|
2009-08-18 22:43:27 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
code = vdev_raidz_reconstruct_general(rm, tgts, ntgts);
|
|
|
|
ASSERT(code < (1 << VDEV_RAIDZ_MAXPARITY));
|
|
|
|
ASSERT(code > 0);
|
|
|
|
return (code);
|
|
|
|
}
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
static int
|
2012-01-24 06:43:32 +04:00
|
|
|
vdev_raidz_open(vdev_t *vd, uint64_t *asize, uint64_t *max_asize,
|
2020-08-21 22:53:17 +03:00
|
|
|
uint64_t *logical_ashift, uint64_t *physical_ashift)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
vdev_t *cvd;
|
|
|
|
uint64_t nparity = vd->vdev_nparity;
|
2009-08-18 22:43:27 +04:00
|
|
|
int c;
|
2008-11-20 23:01:55 +03:00
|
|
|
int lasterror = 0;
|
|
|
|
int numerrors = 0;
|
|
|
|
|
|
|
|
ASSERT(nparity > 0);
|
|
|
|
|
|
|
|
if (nparity > VDEV_RAIDZ_MAXPARITY ||
|
|
|
|
vd->vdev_children < nparity + 1) {
|
|
|
|
vd->vdev_stat.vs_aux = VDEV_AUX_BAD_LABEL;
|
2013-03-08 22:41:28 +04:00
|
|
|
return (SET_ERROR(EINVAL));
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
vdev_open_children(vd);
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
for (c = 0; c < vd->vdev_children; c++) {
|
|
|
|
cvd = vd->vdev_child[c];
|
|
|
|
|
2009-08-18 22:43:27 +04:00
|
|
|
if (cvd->vdev_open_error != 0) {
|
|
|
|
lasterror = cvd->vdev_open_error;
|
2008-11-20 23:01:55 +03:00
|
|
|
numerrors++;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
*asize = MIN(*asize - 1, cvd->vdev_asize - 1) + 1;
|
2012-01-24 06:43:32 +04:00
|
|
|
*max_asize = MIN(*max_asize - 1, cvd->vdev_max_asize - 1) + 1;
|
2020-08-21 22:53:17 +03:00
|
|
|
*logical_ashift = MAX(*logical_ashift, cvd->vdev_ashift);
|
|
|
|
*physical_ashift = MAX(*physical_ashift,
|
|
|
|
cvd->vdev_physical_ashift);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
*asize *= vd->vdev_children;
|
2012-01-24 06:43:32 +04:00
|
|
|
*max_asize *= vd->vdev_children;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
if (numerrors > nparity) {
|
|
|
|
vd->vdev_stat.vs_aux = VDEV_AUX_NO_REPLICAS;
|
|
|
|
return (lasterror);
|
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
vdev_raidz_close(vdev_t *vd)
|
|
|
|
{
|
|
|
|
int c;
|
|
|
|
|
|
|
|
for (c = 0; c < vd->vdev_children; c++)
|
|
|
|
vdev_close(vd->vdev_child[c]);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint64_t
|
|
|
|
vdev_raidz_asize(vdev_t *vd, uint64_t psize)
|
|
|
|
{
|
|
|
|
uint64_t asize;
|
|
|
|
uint64_t ashift = vd->vdev_top->vdev_ashift;
|
|
|
|
uint64_t cols = vd->vdev_children;
|
|
|
|
uint64_t nparity = vd->vdev_nparity;
|
|
|
|
|
|
|
|
asize = ((psize - 1) >> ashift) + 1;
|
|
|
|
asize += nparity * ((asize + cols - nparity - 1) / (cols - nparity));
|
|
|
|
asize = roundup(asize, nparity + 1) << ashift;
|
|
|
|
|
|
|
|
return (asize);
|
|
|
|
}
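/*
 * Worked example of the computation above (illustrative numbers only):
 * with ashift = 9 (512-byte sectors), 6 children, and nparity = 2
 * (raidz2), a 32 KiB psize is 64 data sectors; those need
 * 2 * ceil(64 / 4) = 32 parity sectors, and the 96-sector total is
 * already a multiple of nparity + 1, so the function returns
 * 96 << 9 = 48 KiB.
 */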
|
|
|
|
|
|
|
|
static void
|
|
|
|
vdev_raidz_child_done(zio_t *zio)
|
|
|
|
{
|
|
|
|
raidz_col_t *rc = zio->io_private;
|
|
|
|
|
|
|
|
rc->rc_error = zio->io_error;
|
|
|
|
rc->rc_tried = 1;
|
|
|
|
rc->rc_skipped = 0;
|
|
|
|
}
|
|
|
|
|
2018-12-19 17:54:59 +03:00
|
|
|
static void
|
|
|
|
vdev_raidz_io_verify(zio_t *zio, raidz_map_t *rm, int col)
|
|
|
|
{
|
|
|
|
#ifdef ZFS_DEBUG
|
|
|
|
vdev_t *vd = zio->io_vd;
|
|
|
|
vdev_t *tvd = vd->vdev_top;
|
|
|
|
|
Reduce loaded range tree memory usage
This patch implements a new tree structure for ZFS, and uses it to
store range trees more efficiently.
The new structure is approximately a B-tree, though there are some
small differences from the usual characterizations. The tree has core
nodes and leaf nodes; each contain data elements, which the elements
in the core nodes acting as separators between its children. The
difference between core and leaf nodes is that the core nodes have an
array of children, while leaf nodes don't. Every node in the tree may
be only partially full; in most cases, they are all at least 50% full
(in terms of element count) except for the root node, which can be
less full. Underfull nodes will steal from their neighbors or merge to
remain full enough, while overfull nodes will split in two. The data
elements are contained in tree-controlled buffers; they are copied
into these on insertion, and overwritten on deletion. This means that
the elements are not independently allocated, which reduces overhead,
but also means they can't be shared between trees (and also that
pointers to them are only valid until a side-effectful tree operation
occurs). The overhead varies based on how dense the tree is, but is
usually on the order of about 50% of the element size; the per-node
overheads are very small, and so don't make a significant difference.
The trees can accept arbitrary records; they accept a size and a
comparator to allow them to be used for a variety of purposes.
The new trees replace the AVL trees used in the range trees today.
Currently, the range_seg_t structure contains three 8 byte integers
of payload and two 24 byte avl_tree_node_ts to handle its storage in
both an offset-sorted tree and a size-sorted tree (total size: 64
bytes). In the new model, the range seg structures are usually two 4
byte integers, but a separate one needs to exist for the size-sorted
and offset-sorted tree. Between the raw size, the 50% overhead, and
the double storage, the new btrees are expected to use 8*1.5*2 = 24
bytes per record, or 33.3% as much memory as the AVL trees (this is
for the purposes of storing metaslab range trees; for other purposes,
like scrubs, they use ~50% as much memory).
We reduced the size of the payload in the range segments by teaching
range trees about starting offsets and shifts; since metaslabs have a
fixed starting offset, and they all operate in terms of disk sectors,
we can store the ranges using 4-byte integers as long as the size of
the metaslab divided by the sector size is less than 2^32. For 512-byte
sectors, this is a 2^41 (or 2TB) metaslab, which with the default
settings corresponds to a 256PB disk. 4k sector disks can handle
metaslabs up to 2^46 bytes, or 2^63 byte disks. Since we do not
anticipate disks of this size in the near future, there should be
almost no cases where metaslabs need 64-byte integers to store their
ranges. We do still have the capability to store 64-byte integer ranges
to account for cases where we are storing per-vdev (or per-dnode) trees,
which could reasonably go above the limits discussed. We also do not
store fill information in the compact version of the node, since it
is only used for sorted scrub.
We also optimized the metaslab loading process in various other ways
to offset some inefficiencies in the btree model. While individual
operations (find, insert, remove_from) are faster for the btree than
they are for the avl tree, remove usually requires a find operation,
while in the AVL tree model the element itself suffices. Some clever
changes actually caused an overall speedup in metaslab loading; we use
approximately 40% less cpu to load metaslabs in our tests on Illumos.
Another memory and performance optimization was achieved by changing
what is stored in the size-sorted trees. When a disk is heavily
fragmented, the df algorithm used by default in ZFS will almost always
find a number of small regions in its initial cursor-based search; it
will usually only fall back to the size-sorted tree to find larger
regions. If we increase the size of the cursor-based search slightly,
and don't store segments that are smaller than a tunable size floor
in the size-sorted tree, we can further cut memory usage down to
below 20% of what the AVL trees store. This also results in further
reductions in CPU time spent loading metaslabs.
The 16KiB size floor was chosen because it results in substantial memory
usage reduction while not usually resulting in situations where we can't
find an appropriate chunk with the cursor and are forced to use an
oversized chunk from the size-sorted tree. In addition, even if we do
have to use an oversized chunk from the size-sorted tree, the chunk
would be too small to use for ZIL allocations, so it isn't as big of a
loss as it might otherwise be. And often, more small allocations will
follow the initial one, and the cursor search will now find the
remainder of the chunk we didn't use all of and use it for subsequent
allocations. Practical testing has shown little or no change in
fragmentation as a result of this change.
If the size-sorted tree becomes empty while the offset sorted one still
has entries, it will load all the entries from the offset sorted tree
and disregard the size floor until it is unloaded again. This operation
occurs rarely with the default setting, only on incredibly thoroughly
fragmented pools.
There are some other small changes to zdb to teach it to handle btrees,
but nothing major.
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Sebastien Roy <seb@delphix.com>
Reviewed-by: Igor Kozhukhov <igor@dilos.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #9181
    range_seg64_t logical_rs, physical_rs;
OpenZFS 9102 - zfs should be able to initialize storage devices
PROBLEM
========
The first access to a block incurs a performance penalty on some platforms
(e.g. AWS's EBS, VMware VMDKs). Therefore we recommend that volumes are
"thick provisioned", where supported by the platform (VMware). This can
create a large delay in getting a new virtual machines up and running (or
adding storage to an existing Engine). If the thick provision step is
omitted, write performance will be suboptimal until all blocks on the LUN
have been written.
SOLUTION
=========
This feature introduces a way to 'initialize' the disks at install or in the
background to make sure we don't incur this first read penalty.
When an entire LUN is added to ZFS, we make all space available immediately,
and allow ZFS to find unallocated space and zero it out. This works with
concurrent writes to arbitrary offsets, ensuring that we don't zero out
something that has been (or is in the middle of being) written. This scheme
can also be applied to existing pools (affecting only free regions on the
vdev). Detailed design:
- new subcommand:zpool initialize [-cs] <pool> [<vdev> ...]
- start, suspend, or cancel initialization
- Creates new open-context thread for each vdev
- Thread iterates through all metaslabs in this vdev
- Each metaslab:
- select a metaslab
- load the metaslab
- mark the metaslab as being zeroed
- walk all free ranges within that metaslab and translate
them to ranges on the leaf vdev
- issue a "zeroing" I/O on the leaf vdev that corresponds to
a free range on the metaslab we're working on
- continue until all free ranges for this metaslab have been
"zeroed"
- reset/unmark the metaslab being zeroed
- if more metaslabs exist, then repeat above tasks.
- if no more metaslabs, then we're done.
- progress for the initialization is stored on-disk in the vdev’s
leaf zap object. The following information is stored:
- the last offset that has been initialized
- the state of the initialization process (i.e. active,
suspended, or canceled)
- the start time for the initialization
- progress is reported via the zpool status command and shows
information for each of the vdevs that are initializing
Porting notes:
- Added zfs_initialize_value module parameter to set the pattern
written by "zpool initialize".
- Added zfs_vdev_{initializing,removal}_{min,max}_active module options.
Authored by: George Wilson <george.wilson@delphix.com>
Reviewed by: John Wren Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: loli10K <ezomori.nozomu@gmail.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Richard Lowe <richlowe@richlowe.net>
Signed-off-by: Tim Chase <tim@chase2k.com>
Ported-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/9102
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c3963210eb
Closes #8230
    logical_rs.rs_start = zio->io_offset;
    logical_rs.rs_end = logical_rs.rs_start +
        vdev_raidz_asize(zio->io_vd, zio->io_size);

    raidz_col_t *rc = &rm->rm_col[col];
    vdev_t *cvd = vd->vdev_child[rc->rc_devidx];

    vdev_xlate(cvd, &logical_rs, &physical_rs);

    ASSERT3U(rc->rc_offset, ==, physical_rs.rs_start);
    ASSERT3U(rc->rc_offset, <, physical_rs.rs_end);
    /*
     * It would be nice to assert that rs_end is equal
     * to rc_offset + rc_size but there might be an
     * optional I/O at the end that is not accounted in
     * rc_size.
     */
    if (physical_rs.rs_end > rc->rc_offset + rc->rc_size) {
        ASSERT3U(physical_rs.rs_end, ==, rc->rc_offset +
            rc->rc_size + (1 << tvd->vdev_ashift));
    } else {
        ASSERT3U(physical_rs.rs_end, ==, rc->rc_offset + rc->rc_size);
    }
#endif
}
/*
 * Start an IO operation on a RAIDZ VDev
 *
 * Outline:
 * - For write operations:
 *   1. Generate the parity data
 *   2. Create child zio write operations to each column's vdev, for both
 *      data and parity.
 *   3. If the column skips any sectors for padding, create optional dummy
 *      write zio children for those areas to improve aggregation continuity.
 * - For read operations:
 *   1. Create child zio read operations to each data column's vdev to read
 *      the range of data required for zio.
 *   2. If this is a scrub or resilver operation, or if any of the data
 *      vdevs have had errors, then create zio read operations to the parity
 *      columns' VDevs as well.
 */
static void
vdev_raidz_io_start(zio_t *zio)
{
    vdev_t *vd = zio->io_vd;
    vdev_t *tvd = vd->vdev_top;
    vdev_t *cvd;
    raidz_map_t *rm;
    raidz_col_t *rc;
    int c, i;

    rm = vdev_raidz_map_alloc(zio, tvd->vdev_ashift, vd->vdev_children,
        vd->vdev_nparity);

    ASSERT3U(rm->rm_asize, ==, vdev_psize_to_asize(vd, zio->io_size));

    if (zio->io_type == ZIO_TYPE_WRITE) {
        vdev_raidz_generate_parity(rm);

        for (c = 0; c < rm->rm_cols; c++) {
            rc = &rm->rm_col[c];
            cvd = vd->vdev_child[rc->rc_devidx];
            /*
             * Verify physical to logical translation.
             */
            vdev_raidz_io_verify(zio, rm, c);

            zio_nowait(zio_vdev_child_io(zio, NULL, cvd,
                rc->rc_offset, rc->rc_abd, rc->rc_size,
                zio->io_type, zio->io_priority, 0,
                vdev_raidz_child_done, rc));
        }

        /*
         * Generate optional I/Os for any skipped sectors to improve
         * aggregation contiguity.
         */
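        /*
         * The zios issued below are ZIO_FLAG_OPTIONAL | ZIO_FLAG_NODATA:
         * they carry no payload of their own and exist only so the I/O
         * scheduler can aggregate the surrounding real writes across the
         * skipped sectors; the scheduler may drop them if they do not end
         * up enabling such aggregation.
         */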
        for (c = rm->rm_skipstart, i = 0; i < rm->rm_nskip; c++, i++) {
            ASSERT(c <= rm->rm_scols);
            if (c == rm->rm_scols)
                c = 0;
            rc = &rm->rm_col[c];
            cvd = vd->vdev_child[rc->rc_devidx];
            zio_nowait(zio_vdev_child_io(zio, NULL, cvd,
                rc->rc_offset + rc->rc_size, NULL,
                1 << tvd->vdev_ashift,
                zio->io_type, zio->io_priority,
                ZIO_FLAG_NODATA | ZIO_FLAG_OPTIONAL, NULL, NULL));
        }

        zio_execute(zio);
        return;
    }
    ASSERT(zio->io_type == ZIO_TYPE_READ);

    /*
     * Iterate over the columns in reverse order so that we hit the parity
     * last -- any errors along the way will force us to read the parity.
     */
    for (c = rm->rm_cols - 1; c >= 0; c--) {
        rc = &rm->rm_col[c];
        cvd = vd->vdev_child[rc->rc_devidx];
        if (!vdev_readable(cvd)) {
            if (c >= rm->rm_firstdatacol)
                rm->rm_missingdata++;
            else
                rm->rm_missingparity++;
            rc->rc_error = SET_ERROR(ENXIO);
            rc->rc_tried = 1;   /* don't even try */
            rc->rc_skipped = 1;
            continue;
        }
        if (vdev_dtl_contains(cvd, DTL_MISSING, zio->io_txg, 1)) {
            if (c >= rm->rm_firstdatacol)
                rm->rm_missingdata++;
            else
                rm->rm_missingparity++;
            rc->rc_error = SET_ERROR(ESTALE);
            rc->rc_skipped = 1;
            continue;
        }
        if (c >= rm->rm_firstdatacol || rm->rm_missingdata > 0 ||
            (zio->io_flags & (ZIO_FLAG_SCRUB | ZIO_FLAG_RESILVER))) {
            zio_nowait(zio_vdev_child_io(zio, NULL, cvd,
                rc->rc_offset, rc->rc_abd, rc->rc_size,
                zio->io_type, zio->io_priority, 0,
                vdev_raidz_child_done, rc));
        }
    }

    zio_execute(zio);
}
/*
 * Report a checksum error for a child of a RAID-Z device.
 */
static void
raidz_checksum_error(zio_t *zio, raidz_col_t *rc, abd_t *bad_data)
{
    vdev_t *vd = zio->io_vd->vdev_child[rc->rc_devidx];

    if (!(zio->io_flags & ZIO_FLAG_SPECULATIVE)) {
        zio_bad_cksum_t zbc;
        raidz_map_t *rm = zio->io_vsd;

        mutex_enter(&vd->vdev_stat_lock);
        vd->vdev_stat.vs_checksum_errors++;
        mutex_exit(&vd->vdev_stat_lock);

        zbc.zbc_has_cksum = 0;
        zbc.zbc_injected = rm->rm_ecksuminjected;

        (void) zfs_ereport_post_checksum(zio->io_spa, vd,
Native Encryption for ZFS on Linux
This change incorporates three major pieces:
The first change is a keystore that manages wrapping
and encryption keys for encrypted datasets. These
commands mostly involve manipulating the new
DSL Crypto Key ZAP Objects that live in the MOS. Each
encrypted dataset has its own DSL Crypto Key that is
protected with a user's key. This level of indirection
allows users to change their keys without re-encrypting
their entire datasets. The change implements the new
subcommands "zfs load-key", "zfs unload-key" and
"zfs change-key" which allow the user to manage their
encryption keys and settings. In addition, several new
flags and properties have been added to allow dataset
creation and to make mounting and unmounting more
convenient.
The second piece of this patch provides the ability to
encrypt, decyrpt, and authenticate protected datasets.
Each object set maintains a Merkel tree of Message
Authentication Codes that protect the lower layers,
similarly to how checksums are maintained. This part
impacts the zio layer, which handles the actual
encryption and generation of MACs, as well as the ARC
and DMU, which need to be able to handle encrypted
buffers and protected data.
The last addition is the ability to do raw, encrypted
sends and receives. The idea here is to send raw
encrypted and compressed data and receive it exactly
as is on a backup system. This means that the dataset
on the receiving system is protected using the same
user key that is in use on the sending side. By doing
so, datasets can be efficiently backed up to an
untrusted system without fear of data being
compromised.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes #494
Closes #5769
            &zio->io_bookmark, zio, rc->rc_offset, rc->rc_size,
            rc->rc_abd, bad_data, &zbc);
    }
}
/*
 * We keep track of whether or not there were any injected errors, so that
 * any ereports we generate can note it.
 */
static int
raidz_checksum_verify(zio_t *zio)
{
    zio_bad_cksum_t zbc;
    raidz_map_t *rm = zio->io_vsd;

    bzero(&zbc, sizeof (zio_bad_cksum_t));

    int ret = zio_checksum_error(zio, &zbc);
    if (ret != 0 && zbc.zbc_injected != 0)
        rm->rm_ecksuminjected = 1;

    return (ret);
}
/*
 * Generate the parity from the data columns. If we tried and were able to
 * read the parity without error, verify that the generated parity matches the
 * data we read. If it doesn't, we fire off a checksum error. Return the
 * number of such failures.
 */
static int
raidz_parity_verify(zio_t *zio, raidz_map_t *rm)
{
    abd_t *orig[VDEV_RAIDZ_MAXPARITY];
    int c, ret = 0;
    raidz_col_t *rc;

    blkptr_t *bp = zio->io_bp;
    enum zio_checksum checksum = (bp == NULL ? zio->io_prop.zp_checksum :
        (BP_IS_GANG(bp) ? ZIO_CHECKSUM_GANG_HEADER : BP_GET_CHECKSUM(bp)));

    if (checksum == ZIO_CHECKSUM_NOPARITY)
        return (ret);

    for (c = 0; c < rm->rm_firstdatacol; c++) {
        rc = &rm->rm_col[c];
        if (!rc->rc_tried || rc->rc_error != 0)
            continue;

        orig[c] = abd_alloc_sametype(rc->rc_abd, rc->rc_size);
        abd_copy(orig[c], rc->rc_abd, rc->rc_size);
    }

    vdev_raidz_generate_parity(rm);

    for (c = 0; c < rm->rm_firstdatacol; c++) {
        rc = &rm->rm_col[c];
        if (!rc->rc_tried || rc->rc_error != 0)
            continue;
        if (abd_cmp(orig[c], rc->rc_abd) != 0) {
            raidz_checksum_error(zio, rc, orig[c]);
            rc->rc_error = SET_ERROR(ECKSUM);
            ret++;
        }
        abd_free(orig[c]);
    }

    return (ret);
}
static int
vdev_raidz_worst_error(raidz_map_t *rm)
{
    int error = 0;

    for (int c = 0; c < rm->rm_cols; c++)
        error = zio_worst_error(error, rm->rm_col[c].rc_error);

    return (error);
}
/*
 * Iterate over all combinations of bad data and attempt a reconstruction.
 * Note that the algorithm below is non-optimal because it doesn't take into
 * account how reconstruction is actually performed. For example, with
 * triple-parity RAID-Z the reconstruction procedure is the same if column 4
 * is targeted as invalid as if columns 1 and 4 are targeted since in both
 * cases we'd only use parity information in column 0.
 */
static int
vdev_raidz_combrec(zio_t *zio, int total_errors, int data_errors)
{
    raidz_map_t *rm = zio->io_vsd;
    raidz_col_t *rc;
    abd_t *orig[VDEV_RAIDZ_MAXPARITY];
    int tstore[VDEV_RAIDZ_MAXPARITY + 2];
    int *tgts = &tstore[1];
    int curr, next, i, c, n;
    int code, ret = 0;

    ASSERT(total_errors < rm->rm_firstdatacol);

    /*
     * This simplifies one edge condition.
     */
    tgts[-1] = -1;
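    /*
     * tgts[-1] (i.e. tstore[0]) is set to -1 above, and tgts[n] is set to
     * rm_cols below, so the combination-advance loop can reference
     * tgts[curr - 1] and tgts[curr + 1] without extra bounds checks.
     */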
    for (n = 1; n <= rm->rm_firstdatacol - total_errors; n++) {
        /*
         * Initialize the targets array by finding the first n columns
         * that contain no error.
         *
         * If there were no data errors, we need to ensure that we're
         * always explicitly attempting to reconstruct at least one
         * data column. To do this, we simply push the highest target
         * up into the data columns.
         */
        for (c = 0, i = 0; i < n; i++) {
            if (i == n - 1 && data_errors == 0 &&
                c < rm->rm_firstdatacol) {
                c = rm->rm_firstdatacol;
            }

            while (rm->rm_col[c].rc_error != 0) {
                c++;
                ASSERT3S(c, <, rm->rm_cols);
            }

            tgts[i] = c++;
        }

        /*
         * Setting tgts[n] simplifies the other edge condition.
         */
        tgts[n] = rm->rm_cols;

        /*
         * These buffers were allocated in previous iterations.
         */
        for (i = 0; i < n - 1; i++) {
            ASSERT(orig[i] != NULL);
        }

        orig[n - 1] = abd_alloc_sametype(rm->rm_col[0].rc_abd,
            rm->rm_col[0].rc_size);

        curr = 0;
        next = tgts[curr];

        while (curr != n) {
            tgts[curr] = next;
            curr = 0;

            /*
             * Save off the original data that we're going to
             * attempt to reconstruct.
             */
            for (i = 0; i < n; i++) {
                ASSERT(orig[i] != NULL);
                c = tgts[i];
                ASSERT3S(c, >=, 0);
                ASSERT3S(c, <, rm->rm_cols);
                rc = &rm->rm_col[c];
                abd_copy(orig[i], rc->rc_abd, rc->rc_size);
            }

            /*
             * Attempt a reconstruction and exit the outer loop on
             * success.
             */
            code = vdev_raidz_reconstruct(rm, tgts, n);
            if (raidz_checksum_verify(zio) == 0) {

                for (i = 0; i < n; i++) {
                    c = tgts[i];
                    rc = &rm->rm_col[c];
                    ASSERT(rc->rc_error == 0);
                    if (rc->rc_tried)
                        raidz_checksum_error(zio, rc,
                            orig[i]);
                    rc->rc_error = SET_ERROR(ECKSUM);
                }

                ret = code;
                goto done;
            }

            /*
             * Restore the original data.
             */
            for (i = 0; i < n; i++) {
                c = tgts[i];
                rc = &rm->rm_col[c];
                abd_copy(rc->rc_abd, orig[i], rc->rc_size);
            }

            do {
                /*
                 * Find the next valid column after the current
                 * position.
                 */
                for (next = tgts[curr] + 1;
                    next < rm->rm_cols &&
                    rm->rm_col[next].rc_error != 0; next++)
                    continue;

                ASSERT(next <= tgts[curr + 1]);

                /*
                 * If that spot is available, we're done here.
                 */
                if (next != tgts[curr + 1])
                    break;

                /*
                 * Otherwise, find the next valid column after
                 * the previous position.
                 */
                for (c = tgts[curr - 1] + 1;
                    rm->rm_col[c].rc_error != 0; c++)
                    continue;

                tgts[curr] = c;
                curr++;

            } while (curr != n);
        }
    }
    n--;
done:
    for (i = 0; i < n; i++)
        abd_free(orig[i]);

    return (ret);
}
/*
 * Complete an IO operation on a RAIDZ VDev
 *
 * Outline:
 * - For write operations:
 *   1. Check for errors on the child IOs.
 *   2. Return, setting an error code if too few child VDevs were written
 *      to reconstruct the data later. Note that partial writes are
 *      considered successful if they can be reconstructed at all.
 * - For read operations:
 *   1. Check for errors on the child IOs.
 *   2. If data errors occurred:
 *      a. Try to reassemble the data from the parity available.
 *      b. If we haven't yet read the parity drives, read them now.
 *      c. If all parity drives have been read but the data still doesn't
 *         reassemble with a correct checksum, then try combinatorial
 *         reconstruction.
 *      d. If that doesn't work, return an error.
 *   3. If there were unexpected errors or this is a resilver operation,
 *      rewrite the vdevs that had errors.
 */
static void
vdev_raidz_io_done(zio_t *zio)
{
    vdev_t *vd = zio->io_vd;
    vdev_t *cvd;
    raidz_map_t *rm = zio->io_vsd;
    raidz_col_t *rc = NULL;
    int unexpected_errors = 0;
    int parity_errors = 0;
    int parity_untried = 0;
    int data_errors = 0;
    int total_errors = 0;
    int n, c;
    int tgts[VDEV_RAIDZ_MAXPARITY];
    int code;

    ASSERT(zio->io_bp != NULL);  /* XXX need to add code to enforce this */

    ASSERT(rm->rm_missingparity <= rm->rm_firstdatacol);
    ASSERT(rm->rm_missingdata <= rm->rm_cols - rm->rm_firstdatacol);

    for (c = 0; c < rm->rm_cols; c++) {
        rc = &rm->rm_col[c];

        if (rc->rc_error) {
            ASSERT(rc->rc_error != ECKSUM); /* child has no bp */

            if (c < rm->rm_firstdatacol)
                parity_errors++;
            else
                data_errors++;

            if (!rc->rc_skipped)
                unexpected_errors++;

            total_errors++;
        } else if (c < rm->rm_firstdatacol && !rc->rc_tried) {
            parity_untried++;
        }
    }
    if (zio->io_type == ZIO_TYPE_WRITE) {
        /*
         * XXX -- for now, treat partial writes as a success.
         * (If we couldn't write enough columns to reconstruct
         * the data, the I/O failed. Otherwise, good enough.)
         *
         * Now that we support write reallocation, it would be better
         * to treat partial failure as real failure unless there are
         * no non-degraded top-level vdevs left, and not update DTLs
         * if we intend to reallocate.
         */
        /* XXPOLICY */
        if (total_errors > rm->rm_firstdatacol)
            zio->io_error = vdev_raidz_worst_error(rm);

        return;
    }
    ASSERT(zio->io_type == ZIO_TYPE_READ);
    /*
     * There are three potential phases for a read:
     *   1. produce valid data from the columns read
     *   2. read all disks and try again
     *   3. perform combinatorial reconstruction
     *
     * Each phase is progressively both more expensive and less likely to
     * occur. If we encounter more errors than we can repair or all phases
     * fail, we have no choice but to return an error.
     */

    /*
     * If the number of errors we saw was correctable -- less than or equal
     * to the number of parity disks read -- attempt to produce data that
     * has a valid checksum. Naturally, this case applies in the absence of
     * any errors.
     */
    if (total_errors <= rm->rm_firstdatacol - parity_untried) {
        if (data_errors == 0) {
            if (raidz_checksum_verify(zio) == 0) {
                /*
                 * If we read parity information (unnecessarily
                 * as it happens since no reconstruction was
                 * needed) regenerate and verify the parity.
                 * We also regenerate parity when resilvering
                 * so we can write it out to the failed device
                 * later.
                 */
                if (parity_errors + parity_untried <
                    rm->rm_firstdatacol ||
                    (zio->io_flags & ZIO_FLAG_RESILVER)) {
                    n = raidz_parity_verify(zio, rm);
                    unexpected_errors += n;
                    ASSERT(parity_errors + n <=
                        rm->rm_firstdatacol);
                }
                goto done;
            }
        } else {
            /*
             * We either attempt to read all the parity columns or
             * none of them. If we didn't try to read parity, we
             * wouldn't be here in the correctable case. There must
             * also have been fewer parity errors than parity
             * columns or, again, we wouldn't be in this code path.
             */
            ASSERT(parity_untried == 0);
            ASSERT(parity_errors < rm->rm_firstdatacol);

            /*
             * Identify the data columns that reported an error.
             */
            n = 0;
            for (c = rm->rm_firstdatacol; c < rm->rm_cols; c++) {
                rc = &rm->rm_col[c];
                if (rc->rc_error != 0) {
                    ASSERT(n < VDEV_RAIDZ_MAXPARITY);
                    tgts[n++] = c;
                }
            }

            ASSERT(rm->rm_firstdatacol >= n);

            code = vdev_raidz_reconstruct(rm, tgts, n);

            if (raidz_checksum_verify(zio) == 0) {
                /*
                 * If we read more parity disks than were used
                 * for reconstruction, confirm that the other
                 * parity disks produced correct data. This
                 * routine is suboptimal in that it regenerates
                 * the parity that we already used in addition
                 * to the parity that we're attempting to
                 * verify, but this should be a relatively
                 * uncommon case, and can be optimized if it
                 * becomes a problem. Note that we regenerate
                 * parity when resilvering so we can write it
                 * out to failed devices later.
                 */
                if (parity_errors < rm->rm_firstdatacol - n ||
                    (zio->io_flags & ZIO_FLAG_RESILVER)) {
                    n = raidz_parity_verify(zio, rm);
                    unexpected_errors += n;
                    ASSERT(parity_errors + n <=
                        rm->rm_firstdatacol);
                }

                goto done;
            }
        }
    }
    /*
     * This isn't a typical situation -- either we got a read error or
     * a child silently returned bad data. Read every block so we can
     * try again with as much data and parity as we can track down. If
     * we've already been through once before, all children will be marked
     * as tried so we'll proceed to combinatorial reconstruction.
     */
    unexpected_errors = 1;
    rm->rm_missingdata = 0;
    rm->rm_missingparity = 0;

    for (c = 0; c < rm->rm_cols; c++) {
        if (rm->rm_col[c].rc_tried)
            continue;

        zio_vdev_io_redone(zio);
        do {
            rc = &rm->rm_col[c];
            if (rc->rc_tried)
                continue;
            zio_nowait(zio_vdev_child_io(zio, NULL,
                vd->vdev_child[rc->rc_devidx],
                rc->rc_offset, rc->rc_abd, rc->rc_size,
                zio->io_type, zio->io_priority, 0,
                vdev_raidz_child_done, rc));
        } while (++c < rm->rm_cols);

        return;
    }
    /*
     * At this point we've attempted to reconstruct the data given the
     * errors we detected, and we've attempted to read all columns. There
     * must, therefore, be one or more additional problems -- silent errors
     * resulting in invalid data rather than explicit I/O errors resulting
     * in absent data. We check if there is enough additional data to
     * possibly reconstruct the data and then perform combinatorial
     * reconstruction over all possible combinations. If that fails,
     * we're cooked.
     */
    if (total_errors > rm->rm_firstdatacol) {
        zio->io_error = vdev_raidz_worst_error(rm);

    } else if (total_errors < rm->rm_firstdatacol &&
        (code = vdev_raidz_combrec(zio, total_errors, data_errors)) != 0) {
        /*
         * If we didn't use all the available parity for the
         * combinatorial reconstruction, verify that the remaining
         * parity is correct.
         */
        if (code != (1 << rm->rm_firstdatacol) - 1)
            (void) raidz_parity_verify(zio, rm);
    } else {
        /*
         * We're here because either:
         *
         * total_errors == rm_firstdatacol, or
         * vdev_raidz_combrec() failed
         *
         * In either case, there is enough bad data to prevent
         * reconstruction.
         *
         * Start checksum ereports for all children which haven't
         * failed, and the IO wasn't speculative.
         */
        zio->io_error = SET_ERROR(ECKSUM);

        if (!(zio->io_flags & ZIO_FLAG_SPECULATIVE)) {
            for (c = 0; c < rm->rm_cols; c++) {
                vdev_t *cvd;
                rc = &rm->rm_col[c];
                cvd = vd->vdev_child[rc->rc_devidx];
                if (rc->rc_error == 0) {
                    zio_bad_cksum_t zbc;
                    zbc.zbc_has_cksum = 0;
                    zbc.zbc_injected =
                        rm->rm_ecksuminjected;

                    mutex_enter(&cvd->vdev_stat_lock);
                    cvd->vdev_stat.vs_checksum_errors++;
                    mutex_exit(&cvd->vdev_stat_lock);

                    zfs_ereport_start_checksum(
                        zio->io_spa, cvd,
                        &zio->io_bookmark, zio,
                        rc->rc_offset, rc->rc_size,
                        (void *)(uintptr_t)c, &zbc);
                }
            }
        }
    }

done:
    zio_checksum_verified(zio);

    if (zio->io_error == 0 && spa_writeable(zio->io_spa) &&
        (unexpected_errors || (zio->io_flags & ZIO_FLAG_RESILVER))) {
        /*
         * Use the good data we have in hand to repair damaged children.
         */
        for (c = 0; c < rm->rm_cols; c++) {
            rc = &rm->rm_col[c];
            cvd = vd->vdev_child[rc->rc_devidx];

            if (rc->rc_error == 0)
                continue;

            zio_nowait(zio_vdev_child_io(zio, NULL, cvd,
                rc->rc_offset, rc->rc_abd, rc->rc_size,
Illumos #4045 write throttle & i/o scheduler performance work
4045 zfs write throttle & i/o scheduler performance work
1. The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync
read, sync write, async read, async write, and scrub/resilver. The scheduler
issues a number of concurrent i/os from each class to the device. Once a class
has been selected, an i/o is selected from this class using either an elevator
algorithem (async, scrub classes) or FIFO (sync classes). The number of
concurrent async write i/os is tuned dynamically based on i/o load, to achieve
good sync i/o latency when there is not a high load of writes, and good write
throughput when there is. See the block comment in vdev_queue.c (reproduced
below) for more details.
2. The write throttle (dsl_pool_tempreserve_space() and
txg_constrain_throughput()) is rewritten to produce much more consistent delays
when under constant load. The new write throttle is based on the amount of
dirty data, rather than guesses about future performance of the system. When
there is a lot of dirty data, each transaction (e.g. write() syscall) will be
delayed by the same small amount. This eliminates the "brick wall of wait"
that the old write throttle could hit, causing all transactions to wait several
seconds until the next txg opens. One of the keys to the new write throttle is
decrementing the amount of dirty data as i/o completes, rather than at the end
of spa_sync(). Note that the write throttle is only applied once the i/o
scheduler is issuing the maximum number of outstanding async writes. See the
block comments in dsl_pool.c and above dmu_tx_delay() (reproduced below) for
more details.
This diff has several other effects, including:
* the commonly-tuned global variable zfs_vdev_max_pending has been removed;
use per-class zfs_vdev_*_max_active values or zfs_vdev_max_active instead.
* the size of each txg (meaning the amount of dirty data written, and thus the
time it takes to write out) is now controlled differently. There is no longer
an explicit time goal; the primary determinant is amount of dirty data.
Systems that are under light or medium load will now often see that a txg is
always syncing, but the impact to performance (e.g. read latency) is minimal.
Tune zfs_dirty_data_max and zfs_dirty_data_sync to control this.
* zio_taskq_batch_pct = 75 -- Only use 75% of all CPUs for compression,
checksum, etc. This improves latency by not allowing these CPU-intensive tasks
to consume all CPU (on machines with at least 4 CPU's; the percentage is
rounded up).
--matt
APPENDIX: problems with the current i/o scheduler
The current ZFS i/o scheduler (vdev_queue.c) is deadline based. The problem
with this is that if there are always i/os pending, then certain classes of
i/os can see very long delays.
For example, if there are always synchronous reads outstanding, then no async
writes will be serviced until they become "past due". One symptom of this
situation is that each pass of the txg sync takes at least several seconds
(typically 3 seconds).
If many i/os become "past due" (their deadline is in the past), then we must
service all of these overdue i/os before any new i/os. This happens when we
enqueue a batch of async writes for the txg sync, with deadlines 2.5 seconds in
the future. If we can't complete all the i/os in 2.5 seconds (e.g. because
there were always reads pending), then these i/os will become past due. Now we
must service all the "async" writes (which could be hundreds of megabytes)
before we service any reads, introducing considerable latency to synchronous
i/os (reads or ZIL writes).
Notes on porting to ZFS on Linux:
- zio_t gained new members io_physdone and io_phys_children. Because
object caches in the Linux port call the constructor only once at
allocation time, objects may contain residual data when retrieved
from the cache. Therefore zio_create() was updated to zero out the two
new fields.
- vdev_mirror_pending() relied on the depth of the per-vdev pending queue
(vq->vq_pending_tree) to select the least-busy leaf vdev to read from.
This tree has been replaced by vq->vq_active_tree which is now used
for the same purpose.
- vdev_queue_init() used the value of zfs_vdev_max_pending to determine
the number of vdev I/O buffers to pre-allocate. That global no longer
exists, so we instead use the sum of the *_max_active values for each of
the five I/O classes described above.
- The Illumos implementation of dmu_tx_delay() delays a transaction by
sleeping in condition variable embedded in the thread
(curthread->t_delay_cv). We do not have an equivalent CV to use in
Linux, so this change replaced the delay logic with a wrapper called
zfs_sleep_until(). This wrapper could be adopted upstream and in other
downstream ports to abstract away operating system-specific delay logic.
- These tunables are added as module parameters, and descriptions added
to the zfs-module-parameters.5 man page.
spa_asize_inflation
zfs_deadman_synctime_ms
zfs_vdev_max_active
zfs_vdev_async_write_active_min_dirty_percent
zfs_vdev_async_write_active_max_dirty_percent
zfs_vdev_async_read_max_active
zfs_vdev_async_read_min_active
zfs_vdev_async_write_max_active
zfs_vdev_async_write_min_active
zfs_vdev_scrub_max_active
zfs_vdev_scrub_min_active
zfs_vdev_sync_read_max_active
zfs_vdev_sync_read_min_active
zfs_vdev_sync_write_max_active
zfs_vdev_sync_write_min_active
zfs_dirty_data_max_percent
zfs_delay_min_dirty_percent
zfs_dirty_data_max_max_percent
zfs_dirty_data_max
zfs_dirty_data_max_max
zfs_dirty_data_sync
zfs_delay_scale
The latter four have type unsigned long, whereas they are uint64_t in
Illumos. This accommodates Linux's module_param() supported types, but
means they may overflow on 32-bit architectures.
The values zfs_dirty_data_max and zfs_dirty_data_max_max are the most
likely to overflow on 32-bit systems, since they express physical RAM
sizes in bytes. In fact, Illumos initializes zfs_dirty_data_max_max to
2^32 which does overflow. To resolve that, this port instead initializes
it in arc_init() to 25% of physical RAM, and adds the tunable
zfs_dirty_data_max_max_percent to override that percentage. While this
solution doesn't completely avoid the overflow issue, it should be a
reasonable default for most systems, and the minority of affected
systems can work around the issue by overriding the defaults.
- Fixed reversed logic in comment above zfs_delay_scale declaration.
- Clarified comments in vdev_queue.c regarding when per-queue minimums take
effect.
- Replaced dmu_tx_write_limit in the dmu_tx kstat file
with dmu_tx_dirty_delay and dmu_tx_dirty_over_max. The first counts
how many times a transaction has been delayed because the pool dirty
data has exceeded zfs_delay_min_dirty_percent. The latter counts how
many times the pool dirty data has exceeded zfs_dirty_data_max (which
we expect to never happen).
- The original patch would have regressed the bug fixed in
zfsonlinux/zfs@c418410, which prevented users from setting the
zfs_vdev_aggregation_limit tuning larger than SPA_MAXBLOCKSIZE.
A similar fix is added to vdev_queue_aggregate().
- In vdev_queue_io_to_issue(), dynamically allocate 'zio_t search' on the
heap instead of the stack. In Linux we can't afford such large
structures on the stack.
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Ned Bass <bass6@llnl.gov>
Reviewed by: Brendan Gregg <brendan.gregg@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
References:
http://www.illumos.org/issues/4045
illumos/illumos-gate@69962b5647e4a8b9b14998733b765925381b727e
Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1913
                ZIO_TYPE_WRITE, ZIO_PRIORITY_ASYNC_WRITE,
                ZIO_FLAG_IO_REPAIR | (unexpected_errors ?
                ZIO_FLAG_SELF_HEAL : 0), NULL, NULL));
        }
    }
}
static void
vdev_raidz_state_change(vdev_t *vd, int faulted, int degraded)
{
    if (faulted > vd->vdev_nparity)
        vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
            VDEV_AUX_NO_REPLICAS);
    else if (degraded + faulted != 0)
        vdev_set_state(vd, B_FALSE, VDEV_STATE_DEGRADED, VDEV_AUX_NONE);
    else
        vdev_set_state(vd, B_FALSE, VDEV_STATE_HEALTHY, VDEV_AUX_NONE);
}
/*
 * Determine if any portion of the provided block resides on a child vdev
 * with a dirty DTL and therefore needs to be resilvered. The function
 * assumes that at least one DTL is dirty which implies that full stripe
 * width blocks must be resilvered.
 */
static boolean_t
vdev_raidz_need_resilver(vdev_t *vd, uint64_t offset, size_t psize)
{
    uint64_t dcols = vd->vdev_children;
    uint64_t nparity = vd->vdev_nparity;
    uint64_t ashift = vd->vdev_top->vdev_ashift;
    /* The starting RAIDZ (parent) vdev sector of the block. */
    uint64_t b = offset >> ashift;
    /* The zio's size in units of the vdev's minimum sector size. */
    uint64_t s = ((psize - 1) >> ashift) + 1;
    /* The first column for this stripe. */
    uint64_t f = b % dcols;
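
    /*
     * Illustrative example: with dcols = 6, nparity = 1, and a block that
     * starts at parent sector b = 13 and spans s = 3 sectors, the first
     * column is f = 13 % 6 = 1 and the loop below checks the DTLs of
     * child vdevs (1 + c) % 6 for c = 0..3, i.e. children 1, 2, 3 and 4.
     */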
    if (s + nparity >= dcols)
        return (B_TRUE);

    for (uint64_t c = 0; c < s + nparity; c++) {
        uint64_t devidx = (f + c) % dcols;
        vdev_t *cvd = vd->vdev_child[devidx];

        /*
         * dsl_scan_need_resilver() already checked vd with
         * vdev_dtl_contains(). So here just check cvd with
         * vdev_dtl_empty(), cheaper and a good approximation.
         */
        if (!vdev_dtl_empty(cvd, DTL_PARTIAL))
            return (B_TRUE);
    }

    return (B_FALSE);
}
static void
vdev_raidz_xlate(vdev_t *cvd, const range_seg64_t *in, range_seg64_t *res)
|
OpenZFS 9102 - zfs should be able to initialize storage devices
PROBLEM
========
The first access to a block incurs a performance penalty on some platforms
(e.g. AWS's EBS, VMware VMDKs). Therefore we recommend that volumes are
"thick provisioned", where supported by the platform (VMware). This can
create a large delay in getting a new virtual machines up and running (or
adding storage to an existing Engine). If the thick provision step is
omitted, write performance will be suboptimal until all blocks on the LUN
have been written.
SOLUTION
=========
This feature introduces a way to 'initialize' the disks at install or in the
background to make sure we don't incur this first read penalty.
When an entire LUN is added to ZFS, we make all space available immediately,
and allow ZFS to find unallocated space and zero it out. This works with
concurrent writes to arbitrary offsets, ensuring that we don't zero out
something that has been (or is in the middle of being) written. This scheme
can also be applied to existing pools (affecting only free regions on the
vdev). Detailed design:
- new subcommand: zpool initialize [-cs] <pool> [<vdev> ...]
- start, suspend, or cancel initialization
- Creates new open-context thread for each vdev
- Thread iterates through all metaslabs in this vdev
- Each metaslab:
- select a metaslab
- load the metaslab
- mark the metaslab as being zeroed
- walk all free ranges within that metaslab and translate
them to ranges on the leaf vdev
- issue a "zeroing" I/O on the leaf vdev that corresponds to
a free range on the metaslab we're working on
- continue until all free ranges for this metaslab have been
"zeroed"
- reset/unmark the metaslab being zeroed
- if more metaslabs exist, then repeat above tasks.
- if no more metaslabs, then we're done.
- progress for the initialization is stored on-disk in the vdev’s
leaf zap object. The following information is stored:
- the last offset that has been initialized
- the state of the initialization process (i.e. active,
suspended, or canceled)
- the start time for the initialization
- progress is reported via the zpool status command and shows
information for each of the vdevs that are initializing
Porting notes:
- Added zfs_initialize_value module parameter to set the pattern
written by "zpool initialize".
- Added zfs_vdev_{initializing,removal}_{min,max}_active module options.
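For reference, typical invocations of the subcommand described above look like this (the pool and device names are examples only):

    zpool initialize tank              # start initializing every vdev in the pool
    zpool initialize tank sdb sdc      # or only the listed vdevs
    zpool initialize -s tank           # suspend; run "zpool initialize tank" again to resume
    zpool initialize -c tank           # cancel
    zpool status tank                  # progress is reported per initializing vdev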
Authored by: George Wilson <george.wilson@delphix.com>
Reviewed by: John Wren Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: loli10K <ezomori.nozomu@gmail.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Richard Lowe <richlowe@richlowe.net>
Signed-off-by: Tim Chase <tim@chase2k.com>
Ported-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/9102
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c3963210eb
Closes #8230
2018-12-19 17:54:59 +03:00
{
        vdev_t *raidvd = cvd->vdev_parent;
        ASSERT(raidvd->vdev_ops == &vdev_raidz_ops);

        uint64_t width = raidvd->vdev_children;
        uint64_t tgt_col = cvd->vdev_id;
        uint64_t ashift = raidvd->vdev_top->vdev_ashift;

        /* make sure the offsets are block-aligned */
        ASSERT0(in->rs_start % (1 << ashift));
        ASSERT0(in->rs_end % (1 << ashift));
        uint64_t b_start = in->rs_start >> ashift;
        uint64_t b_end = in->rs_end >> ashift;

        uint64_t start_row = 0;
        if (b_start > tgt_col)  /* avoid underflow */
                start_row = ((b_start - tgt_col - 1) / width) + 1;

        uint64_t end_row = 0;
        if (b_end > tgt_col)
                end_row = ((b_end - tgt_col - 1) / width) + 1;

        res->rs_start = start_row << ashift;
        res->rs_end = end_row << ashift;

        ASSERT3U(res->rs_start, <=, in->rs_start);
        ASSERT3U(res->rs_end - res->rs_start, <=, in->rs_end - in->rs_start);
}
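As a worked example of the row arithmetic above: ashift-sized blocks on a raidz top-level vdev are dealt out round-robin across its children, so a parent-relative block range translates to the count of blocks in the target column that precede each endpoint. The following small, self-contained program (not part of the ZFS source; all names in it are made up) reproduces the start_row/end_row expression for one child column.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * How many parent-relative block indexes smaller than b fall on child
 * column col when blocks are dealt out round-robin across `width`
 * children.  This mirrors the start_row/end_row expression in
 * vdev_raidz_xlate() above.
 */
static uint64_t
rows_before(uint64_t b, uint64_t col, uint64_t width)
{
        if (b <= col)                   /* avoid underflow */
                return (0);
        return (((b - col - 1) / width) + 1);
}

int
main(void)
{
        uint64_t width = 4;             /* raidz vdev with 4 children */
        uint64_t col = 1;               /* translating for child #1 */

        uint64_t start_row = rows_before(10, col, width);
        uint64_t end_row = rows_before(20, col, width);

        /* parent blocks 1, 5, 9 precede 10; 1, 5, 9, 13, 17 precede 20 */
        assert(start_row == 3 && end_row == 5);
        printf("child rows [%llu, %llu)\n",
            (unsigned long long)start_row, (unsigned long long)end_row);
        return (0);
}

For width 4 and column 1, parent blocks 13 and 17 are the only ones in [10, 20) that land on that child, which is exactly the two child rows [3, 5) the expression produces.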
2008-11-20 23:01:55 +03:00
vdev_ops_t vdev_raidz_ops = {
2019-06-21 04:29:02 +03:00
        .vdev_op_open = vdev_raidz_open,
        .vdev_op_close = vdev_raidz_close,
        .vdev_op_asize = vdev_raidz_asize,
        .vdev_op_io_start = vdev_raidz_io_start,
        .vdev_op_io_done = vdev_raidz_io_done,
        .vdev_op_state_change = vdev_raidz_state_change,
        .vdev_op_need_resilver = vdev_raidz_need_resilver,
        .vdev_op_hold = NULL,
        .vdev_op_rele = NULL,
        .vdev_op_remap = NULL,
        .vdev_op_xlate = vdev_raidz_xlate,
        .vdev_op_type = VDEV_TYPE_RAIDZ,        /* name of this vdev type */
        .vdev_op_leaf = B_FALSE                 /* not a leaf vdev */
2008-11-20 23:01:55 +03:00
};