/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
 * Copyright 2013 Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017 Datto Inc.
 * Copyright (c) 2017, Intel Corporation.
 */

#include <sys/zfs_context.h>
#include <sys/spa_impl.h>
#include <sys/zio.h>
#include <sys/zio_checksum.h>
#include <sys/zio_compress.h>
#include <sys/dmu.h>
#include <sys/dmu_tx.h>
#include <sys/zap.h>
#include <sys/zil.h>
#include <sys/vdev_impl.h>
#include <sys/vdev_initialize.h>
#include <sys/vdev_file.h>
#include <sys/vdev_raidz.h>
#include <sys/metaslab.h>
#include <sys/uberblock_impl.h>
#include <sys/txg.h>
#include <sys/avl.h>
#include <sys/unique.h>
#include <sys/dsl_pool.h>
#include <sys/dsl_dir.h>
#include <sys/dsl_prop.h>
#include <sys/fm/util.h>
#include <sys/dsl_scan.h>
#include <sys/fs/zfs.h>
#include <sys/metaslab_impl.h>
#include <sys/arc.h>
#include <sys/ddt.h>
#include <sys/kstat.h>
#include "zfs_prop.h"
#include <sys/zfeature.h>
#include "qat.h"

/*
 * SPA locking
 *
 * There are four basic locks for managing spa_t structures:
 *
 * spa_namespace_lock (global mutex)
 *
 *    This lock must be acquired to do any of the following:
 *
 *        - Lookup a spa_t by name
 *        - Add or remove a spa_t from the namespace
 *        - Increase spa_refcount from non-zero
 *        - Check if spa_refcount is zero
 *        - Rename a spa_t
 *        - add/remove/attach/detach devices
 *        - Held for the duration of create/destroy/import/export
 *
 *    It does not need to handle recursion. A create or destroy may
 *    reference objects (files or zvols) in other pools, but by
 *    definition they must have an existing reference, and will never need
 *    to lookup a spa_t by name.
 *
 * spa_refcount (per-spa zfs_refcount_t protected by mutex)
 *
 *    This reference count keeps track of any active users of the spa_t. The
 *    spa_t cannot be destroyed or freed while this is non-zero. Internally,
 *    the refcount is never really 'zero' - opening a pool implicitly keeps
 *    some references in the DMU. Internally we check against spa_minref, but
 *    present the image of a zero/non-zero value to consumers.
 *
 * spa_config_lock[] (per-spa array of rwlocks)
 *
 *    This protects the spa_t from config changes, and must be held in
 *    the following circumstances:
 *
 *        - RW_READER to perform I/O to the spa
 *        - RW_WRITER to change the vdev config
 *
 * The locking order is fairly straightforward:
 *
 *    spa_namespace_lock -> spa_refcount
 *
 *    The namespace lock must be acquired to increase the refcount from 0
 *    or to check if it is zero.
 *
 *    spa_refcount -> spa_config_lock[]
 *
 *    There must be at least one valid reference on the spa_t to acquire
 *    the config lock.
 *
 *    spa_namespace_lock -> spa_config_lock[]
 *
 *    The namespace lock must always be taken before the config lock.
 *
 *
 * The spa_namespace_lock can be acquired directly and is globally visible.
 *
 * The namespace is manipulated using the following functions, all of which
 * require the spa_namespace_lock to be held.
 *
 * spa_lookup()            Lookup a spa_t by name.
 *
 * spa_add()               Create a new spa_t in the namespace.
 *
 * spa_remove()            Remove a spa_t from the namespace. This also
 *                         frees up any memory associated with the spa_t.
 *
 * spa_next()              Returns the next spa_t in the system, or the
 *                         first if NULL is passed.
 *
 * spa_evict_all()         Shutdown and remove all spa_t structures in
 *                         the system.
 *
 * spa_guid_exists()       Determine whether a pool/device guid exists.
 *
 * The spa_refcount is manipulated using the following functions:
 *
 * spa_open_ref()          Adds a reference to the given spa_t. Must be
 *                         called with spa_namespace_lock held if the
 *                         refcount is currently zero.
 *
 * spa_close()             Remove a reference from the spa_t. This will
 *                         not free the spa_t or remove it from the
 *                         namespace. No locking is required.
 *
 * spa_refcount_zero()     Returns true if the refcount is currently
 *                         zero. Must be called with spa_namespace_lock
 *                         held.
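 *
 *    For example, a consumer that needs to hold a pool open might take and
 *    drop a reference as follows (an illustrative sketch only; spa_lookup()
 *    can return NULL and error handling is omitted):
 *
 *        mutex_enter(&spa_namespace_lock);
 *        spa = spa_lookup(name);
 *        spa_open_ref(spa, FTAG);
 *        mutex_exit(&spa_namespace_lock);
 *        ...
 *        spa_close(spa, FTAG);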
 *
 * The spa_config_lock[] is an array of rwlocks, ordered as follows:
 * SCL_CONFIG > SCL_STATE > SCL_ALLOC > SCL_ZIO > SCL_FREE > SCL_VDEV.
 * spa_config_lock[] is manipulated with spa_config_{enter,exit,held}().
 *
 * To read the configuration, it suffices to hold one of these locks as reader.
 * To modify the configuration, you must hold all locks as writer. To modify
 * vdev state without altering the vdev tree's topology (e.g. online/offline),
 * you must hold SCL_STATE and SCL_ZIO as writer.
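 *
 *    For example, a reader that only needs the vdev tree to remain stable
 *    while it inspects it might do (an illustrative sketch only):
 *
 *        spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
 *        ... walk or inspect the vdev tree ...
 *        spa_config_exit(spa, SCL_VDEV, FTAG);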
 *
 * We use these distinct config locks to avoid recursive lock entry.
 * For example, spa_sync() (which holds SCL_CONFIG as reader) induces
 * block allocations (SCL_ALLOC), which may require reading space maps
 * from disk (dmu_read() -> zio_read() -> SCL_ZIO).
 *
 * The spa config locks cannot be normal rwlocks because we need the
 * ability to hand off ownership. For example, SCL_ZIO is acquired
 * by the issuing thread and later released by an interrupt thread.
 * They do, however, obey the usual write-wanted semantics to prevent
 * writer (i.e. system administrator) starvation.
 *
 * The lock acquisition rules are as follows:
 *
 * SCL_CONFIG
 *    Protects changes to the vdev tree topology, such as vdev
 *    add/remove/attach/detach. Protects the dirty config list
 *    (spa_config_dirty_list) and the set of spares and l2arc devices.
 *
 * SCL_STATE
 *    Protects changes to pool state and vdev state, such as vdev
 *    online/offline/fault/degrade/clear. Protects the dirty state list
 *    (spa_state_dirty_list) and global pool state (spa_state).
 *
 * SCL_ALLOC
 *    Protects changes to metaslab groups and classes.
 *    Held as reader by metaslab_alloc() and metaslab_claim().
 *
 * SCL_ZIO
 *    Held by bp-level zios (those which have no io_vd upon entry)
 *    to prevent changes to the vdev tree. The bp-level zio implicitly
 *    protects all of its vdev child zios, which do not hold SCL_ZIO.
 *
 * SCL_FREE
 *    Protects changes to metaslab groups and classes.
 *    Held as reader by metaslab_free(). SCL_FREE is distinct from
 *    SCL_ALLOC, and lower than SCL_ZIO, so that we can safely free
 *    blocks in zio_done() while another i/o that holds either
 *    SCL_ALLOC or SCL_ZIO is waiting for this i/o to complete.
 *
 * SCL_VDEV
 *    Held as reader to prevent changes to the vdev tree during trivial
 *    inquiries such as bp_get_dsize(). SCL_VDEV is distinct from the
 *    other locks, and lower than all of them, to ensure that it's safe
 *    to acquire regardless of caller context.
 *
 * In addition, the following rules apply:
 *
 * (a) spa_props_lock protects pool properties, spa_config and spa_config_list.
 *     The lock ordering is SCL_CONFIG > spa_props_lock.
 *
 * (b) I/O operations on leaf vdevs. For any zio operation that takes
 *     an explicit vdev_t argument -- such as zio_ioctl(), zio_read_phys(),
 *     or zio_write_phys() -- the caller must ensure that the config
 *     cannot change in the interim, and that the vdev cannot be reopened.
 *     SCL_STATE as reader suffices for both.
 *
 * The vdev configuration is protected by spa_vdev_enter() / spa_vdev_exit().
 *
 * spa_vdev_enter()        Acquire the namespace lock and the config lock
 *                         for writing.
 *
 * spa_vdev_exit()         Release the config lock, wait for all I/O
 *                         to complete, sync the updated configs to the
 *                         cache, and release the namespace lock.
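 *
 *    The usual calling pattern looks like this (an illustrative sketch only;
 *    'vd' and 'error' stand in for whatever the caller produced):
 *
 *        txg = spa_vdev_enter(spa);
 *        ... modify the vdev configuration ...
 *        return (spa_vdev_exit(spa, vd, txg, error));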
 *
 * vdev state is protected by spa_vdev_state_enter() / spa_vdev_state_exit().
 * Like spa_vdev_enter/exit, these are convenience wrappers -- the actual
 * locking is, always, based on spa_namespace_lock and spa_config_lock[].
 */

static avl_tree_t spa_namespace_avl;
kmutex_t spa_namespace_lock;
static kcondvar_t spa_namespace_cv;
int spa_max_replication_override = SPA_DVAS_PER_BP;

static kmutex_t spa_spare_lock;
static avl_tree_t spa_spare_avl;
static kmutex_t spa_l2cache_lock;
static avl_tree_t spa_l2cache_avl;

kmem_cache_t *spa_buffer_pool;
int spa_mode_global;

#ifdef ZFS_DEBUG
/*
 * Everything except dprintf, set_error, spa, and indirect_remap is on
 * by default in debug builds.
 */
int zfs_flags = ~(ZFS_DEBUG_DPRINTF | ZFS_DEBUG_SET_ERROR |
    ZFS_DEBUG_INDIRECT_REMAP);
#else
int zfs_flags = 0;
#endif

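/*
 * Note (illustrative): zfs_flags is also exposed as a writable module
 * parameter, so debug classes can be toggled on a running system, e.g.:
 *
 *    echo <flag mask> > /sys/module/zfs/parameters/zfs_flags
 */
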
/*
 * zfs_recover can be set to nonzero to attempt to recover from
 * otherwise-fatal errors, typically caused by on-disk corruption. When
 * set, calls to zfs_panic_recover() will turn into warning messages.
 * This should only be used as a last resort, as it typically results
 * in leaked space, or worse.
 */
int zfs_recover = B_FALSE;

/*
 * If destroy encounters an EIO while reading metadata (e.g. indirect
 * blocks), space referenced by the missing metadata can not be freed.
 * Normally this causes the background destroy to become "stalled", as
 * it is unable to make forward progress. While in this stalled state,
 * all remaining space to free from the error-encountering filesystem is
 * "temporarily leaked". Set this flag to cause it to ignore the EIO,
 * permanently leak the space from indirect blocks that can not be read,
 * and continue to free everything else that it can.
 *
 * The default, "stalling" behavior is useful if the storage partially
 * fails (i.e. some but not all i/os fail), and then later recovers. In
 * this case, we will be able to continue pool operations while it is
 * partially failed, and when it recovers, we can continue to free the
 * space, with no leaks. However, note that this case is actually
 * fairly rare.
 *
 * Typically pools either (a) fail completely (but perhaps temporarily,
 * e.g. a top-level vdev going offline), or (b) have localized,
 * permanent errors (e.g. disk returns the wrong data due to bit flip or
 * firmware bug). In case (a), this setting does not matter because the
 * pool will be suspended and the sync thread will not be able to make
 * forward progress regardless. In case (b), because the error is
 * permanent, the best we can do is leak the minimum amount of space,
 * which is what setting this flag will do. Therefore, it is reasonable
 * for this flag to normally be set, but we chose the more conservative
 * approach of not setting it, so that there is no possibility of
 * leaking space in the "partial temporary" failure case.
 */
int zfs_free_leak_on_eio = B_FALSE;

/*
 * Expiration time in milliseconds. This value has two meanings. First it is
 * used to determine when the spa_deadman() logic should fire. By default the
 * spa_deadman() will fire if spa_sync() has not completed in 600 seconds.
Illumos #4045 write throttle & i/o scheduler performance work
4045 zfs write throttle & i/o scheduler performance work
1. The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync
read, sync write, async read, async write, and scrub/resilver. The scheduler
issues a number of concurrent i/os from each class to the device. Once a class
has been selected, an i/o is selected from this class using either an elevator
algorithem (async, scrub classes) or FIFO (sync classes). The number of
concurrent async write i/os is tuned dynamically based on i/o load, to achieve
good sync i/o latency when there is not a high load of writes, and good write
throughput when there is. See the block comment in vdev_queue.c (reproduced
below) for more details.
2. The write throttle (dsl_pool_tempreserve_space() and
txg_constrain_throughput()) is rewritten to produce much more consistent delays
when under constant load. The new write throttle is based on the amount of
dirty data, rather than guesses about future performance of the system. When
there is a lot of dirty data, each transaction (e.g. write() syscall) will be
delayed by the same small amount. This eliminates the "brick wall of wait"
that the old write throttle could hit, causing all transactions to wait several
seconds until the next txg opens. One of the keys to the new write throttle is
decrementing the amount of dirty data as i/o completes, rather than at the end
of spa_sync(). Note that the write throttle is only applied once the i/o
scheduler is issuing the maximum number of outstanding async writes. See the
block comments in dsl_pool.c and above dmu_tx_delay() (reproduced below) for
more details.
This diff has several other effects, including:
* the commonly-tuned global variable zfs_vdev_max_pending has been removed;
use per-class zfs_vdev_*_max_active values or zfs_vdev_max_active instead.
* the size of each txg (meaning the amount of dirty data written, and thus the
time it takes to write out) is now controlled differently. There is no longer
an explicit time goal; the primary determinant is amount of dirty data.
Systems that are under light or medium load will now often see that a txg is
always syncing, but the impact to performance (e.g. read latency) is minimal.
Tune zfs_dirty_data_max and zfs_dirty_data_sync to control this.
* zio_taskq_batch_pct = 75 -- Only use 75% of all CPUs for compression,
checksum, etc. This improves latency by not allowing these CPU-intensive tasks
to consume all CPU (on machines with at least 4 CPU's; the percentage is
rounded up).
--matt
APPENDIX: problems with the current i/o scheduler
The current ZFS i/o scheduler (vdev_queue.c) is deadline based. The problem
with this is that if there are always i/os pending, then certain classes of
i/os can see very long delays.
For example, if there are always synchronous reads outstanding, then no async
writes will be serviced until they become "past due". One symptom of this
situation is that each pass of the txg sync takes at least several seconds
(typically 3 seconds).
If many i/os become "past due" (their deadline is in the past), then we must
service all of these overdue i/os before any new i/os. This happens when we
enqueue a batch of async writes for the txg sync, with deadlines 2.5 seconds in
the future. If we can't complete all the i/os in 2.5 seconds (e.g. because
there were always reads pending), then these i/os will become past due. Now we
must service all the "async" writes (which could be hundreds of megabytes)
before we service any reads, introducing considerable latency to synchronous
i/os (reads or ZIL writes).
Notes on porting to ZFS on Linux:
- zio_t gained new members io_physdone and io_phys_children. Because
object caches in the Linux port call the constructor only once at
allocation time, objects may contain residual data when retrieved
from the cache. Therefore zio_create() was updated to zero out the two
new fields.
- vdev_mirror_pending() relied on the depth of the per-vdev pending queue
(vq->vq_pending_tree) to select the least-busy leaf vdev to read from.
This tree has been replaced by vq->vq_active_tree which is now used
for the same purpose.
- vdev_queue_init() used the value of zfs_vdev_max_pending to determine
the number of vdev I/O buffers to pre-allocate. That global no longer
exists, so we instead use the sum of the *_max_active values for each of
the five I/O classes described above.
- The Illumos implementation of dmu_tx_delay() delays a transaction by
sleeping in condition variable embedded in the thread
(curthread->t_delay_cv). We do not have an equivalent CV to use in
Linux, so this change replaced the delay logic with a wrapper called
zfs_sleep_until(). This wrapper could be adopted upstream and in other
downstream ports to abstract away operating system-specific delay logic.
- These tunables are added as module parameters, and descriptions added
to the zfs-module-parameters.5 man page.
spa_asize_inflation
zfs_deadman_synctime_ms
zfs_vdev_max_active
zfs_vdev_async_write_active_min_dirty_percent
zfs_vdev_async_write_active_max_dirty_percent
zfs_vdev_async_read_max_active
zfs_vdev_async_read_min_active
zfs_vdev_async_write_max_active
zfs_vdev_async_write_min_active
zfs_vdev_scrub_max_active
zfs_vdev_scrub_min_active
zfs_vdev_sync_read_max_active
zfs_vdev_sync_read_min_active
zfs_vdev_sync_write_max_active
zfs_vdev_sync_write_min_active
zfs_dirty_data_max_percent
zfs_delay_min_dirty_percent
zfs_dirty_data_max_max_percent
zfs_dirty_data_max
zfs_dirty_data_max_max
zfs_dirty_data_sync
zfs_delay_scale
The latter four have type unsigned long, whereas they are uint64_t in
Illumos. This accommodates Linux's module_param() supported types, but
means they may overflow on 32-bit architectures.
The values zfs_dirty_data_max and zfs_dirty_data_max_max are the most
likely to overflow on 32-bit systems, since they express physical RAM
sizes in bytes. In fact, Illumos initializes zfs_dirty_data_max_max to
2^32 which does overflow. To resolve that, this port instead initializes
it in arc_init() to 25% of physical RAM, and adds the tunable
zfs_dirty_data_max_max_percent to override that percentage. While this
solution doesn't completely avoid the overflow issue, it should be a
reasonable default for most systems, and the minority of affected
systems can work around the issue by overriding the defaults.
- Fixed reversed logic in comment above zfs_delay_scale declaration.
- Clarified comments in vdev_queue.c regarding when per-queue minimums take
effect.
- Replaced dmu_tx_write_limit in the dmu_tx kstat file
with dmu_tx_dirty_delay and dmu_tx_dirty_over_max. The first counts
how many times a transaction has been delayed because the pool dirty
data has exceeded zfs_delay_min_dirty_percent. The latter counts how
many times the pool dirty data has exceeded zfs_dirty_data_max (which
we expect to never happen).
- The original patch would have regressed the bug fixed in
zfsonlinux/zfs@c418410, which prevented users from setting the
zfs_vdev_aggregation_limit tuning larger than SPA_MAXBLOCKSIZE.
A similar fix is added to vdev_queue_aggregate().
- In vdev_queue_io_to_issue(), dynamically allocate 'zio_t search' on the
heap instead of the stack. In Linux we can't afford such large
structures on the stack.
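As an illustration of the module_param() point above, the declaration of one of these tunables might look roughly like the following sketch (the permission bits and description string are illustrative assumptions, not the exact ZoL macro usage):
#include <linux/module.h>

unsigned long zfs_dirty_data_max = 0;	/* finalized later, e.g. in arc_init() */

/* 'ulong' is a type module_param() supports; 'uint64_t' is not, which is
 * why these tunables are declared unsigned long on Linux. */
module_param(zfs_dirty_data_max, ulong, 0644);
MODULE_PARM_DESC(zfs_dirty_data_max, "Dirty data limit in bytes");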
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Ned Bass <bass6@llnl.gov>
Reviewed by: Brendan Gregg <brendan.gregg@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
References:
http://www.illumos.org/issues/4045
illumos/illumos-gate@69962b5647e4a8b9b14998733b765925381b727e
Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1913
2013-08-29 07:01:20 +04:00
|
|
|
* Secondly, the value determines if an I/O is considered "hung". Any I/O that
|
|
|
|
* has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
|
2017-12-19 01:06:07 +03:00
|
|
|
* in one of three behaviors controlled by zfs_deadman_failmode.
|
2013-04-30 02:49:23 +04:00
|
|
|
*/
|
2017-12-19 01:06:07 +03:00
|
|
|
unsigned long zfs_deadman_synctime_ms = 600000ULL;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This value controls the maximum amount of time zio_wait() will block for an
|
|
|
|
* outstanding IO. By default this is 300 seconds at which point the "hung"
|
|
|
|
* behavior will be applied as described for zfs_deadman_synctime_ms.
|
|
|
|
*/
|
|
|
|
unsigned long zfs_deadman_ziotime_ms = 300000ULL;
|
2013-04-30 02:49:23 +04:00
|
|
|
|
2017-02-01 01:19:08 +03:00
|
|
|
/*
|
|
|
|
* Check time in milliseconds. This defines the frequency at which we check
|
|
|
|
* for hung I/O.
|
|
|
|
*/
|
2018-10-10 23:48:33 +03:00
|
|
|
unsigned long zfs_deadman_checktime_ms = 60000ULL;
|
2017-02-01 01:19:08 +03:00
|
|
|
|
2013-04-30 02:49:23 +04:00
|
|
|
/*
|
|
|
|
* By default the deadman is enabled.
|
|
|
|
*/
|
|
|
|
int zfs_deadman_enabled = 1;
|
|
|
|
|
2017-12-19 01:06:07 +03:00
|
|
|
/*
|
|
|
|
* Controls the behavior of the deadman when it detects a "hung" I/O.
|
|
|
|
* Valid values are zfs_deadman_failmode=<wait|continue|panic>.
|
|
|
|
*
|
|
|
|
* wait - Wait for the "hung" I/O (default)
|
|
|
|
* continue - Attempt to recover from a "hung" I/O
|
|
|
|
* panic - Panic the system
|
|
|
|
*/
|
|
|
|
char *zfs_deadman_failmode = "wait";
|
|
|
|
|
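/*
 * Illustrative sketch only (not the actual vdev_deadman() logic): tying the
 * tunables above together, an I/O that zio_wait() has been blocking on for
 * longer than zfs_deadman_ziotime_ms has the configured zfs_deadman_failmode
 * applied to it.  The helper name is hypothetical.
 */
static void
example_deadman_io_check(hrtime_t io_start)
{
	if (gethrtime() - io_start < MSEC2NSEC(zfs_deadman_ziotime_ms))
		return;				/* not considered "hung" yet */

	if (strcmp(zfs_deadman_failmode, "panic") == 0) {
		panic("deadman: hung I/O detected");
	} else if (strcmp(zfs_deadman_failmode, "continue") == 0) {
		/* attempt recovery, e.g. by re-dispatching the I/O */
	}
	/* "wait" (the default): keep waiting; the deadman only logs */
}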
Illumos #4045 write throttle & i/o scheduler performance work
4045 zfs write throttle & i/o scheduler performance work
1. The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync
read, sync write, async read, async write, and scrub/resilver. The scheduler
issues a number of concurrent i/os from each class to the device. Once a class
has been selected, an i/o is selected from this class using either an elevator
algorithm (async, scrub classes) or FIFO (sync classes). The number of
concurrent async write i/os is tuned dynamically based on i/o load, to achieve
good sync i/o latency when there is not a high load of writes, and good write
throughput when there is. See the block comment in vdev_queue.c (reproduced
below) for more details.
2. The write throttle (dsl_pool_tempreserve_space() and
txg_constrain_throughput()) is rewritten to produce much more consistent delays
when under constant load. The new write throttle is based on the amount of
dirty data, rather than guesses about future performance of the system. When
there is a lot of dirty data, each transaction (e.g. write() syscall) will be
delayed by the same small amount. This eliminates the "brick wall of wait"
that the old write throttle could hit, causing all transactions to wait several
seconds until the next txg opens. One of the keys to the new write throttle is
decrementing the amount of dirty data as i/o completes, rather than at the end
of spa_sync(). Note that the write throttle is only applied once the i/o
scheduler is issuing the maximum number of outstanding async writes. See the
block comments in dsl_pool.c and above dmu_tx_delay() (reproduced below) for
more details.
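The shape of that per-transaction delay can be sketched as follows (a simplification of the behavior documented above dmu_tx_delay(); the helper and its parameters are illustrative):
/* Delay is zero until dirty data exceeds min_pct of the limit, then grows
 * smoothly and approaches infinity as dirty data approaches the limit. */
static hrtime_t
example_tx_delay(uint64_t dirty, uint64_t dirty_max, uint64_t min_pct,
    uint64_t delay_scale)
{
	uint64_t delay_min = dirty_max * min_pct / 100;

	if (dirty <= delay_min)
		return (0);
	return (delay_scale * (dirty - delay_min) / (dirty_max - dirty));
}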
This diff has several other effects, including:
* the commonly-tuned global variable zfs_vdev_max_pending has been removed;
use per-class zfs_vdev_*_max_active values or zfs_vdev_max_active instead.
* the size of each txg (meaning the amount of dirty data written, and thus the
time it takes to write out) is now controlled differently. There is no longer
an explicit time goal; the primary determinant is the amount of dirty data.
Systems that are under light or medium load will now often see that a txg is
always syncing, but the impact to performance (e.g. read latency) is minimal.
Tune zfs_dirty_data_max and zfs_dirty_data_sync to control this.
* zio_taskq_batch_pct = 75 -- Only use 75% of all CPUs for compression,
checksum, etc. This improves latency by not allowing these CPU-intensive tasks
to consume all CPU (on machines with at least 4 CPUs; the percentage is
rounded up).
--matt
APPENDIX: problems with the current i/o scheduler
The current ZFS i/o scheduler (vdev_queue.c) is deadline based. The problem
with this is that if there are always i/os pending, then certain classes of
i/os can see very long delays.
For example, if there are always synchronous reads outstanding, then no async
writes will be serviced until they become "past due". One symptom of this
situation is that each pass of the txg sync takes at least several seconds
(typically 3 seconds).
If many i/os become "past due" (their deadline is in the past), then we must
service all of these overdue i/os before any new i/os. This happens when we
enqueue a batch of async writes for the txg sync, with deadlines 2.5 seconds in
the future. If we can't complete all the i/os in 2.5 seconds (e.g. because
there were always reads pending), then these i/os will become past due. Now we
must service all the "async" writes (which could be hundreds of megabytes)
before we service any reads, introducing considerable latency to synchronous
i/os (reads or ZIL writes).
Notes on porting to ZFS on Linux:
- zio_t gained new members io_physdone and io_phys_children. Because
object caches in the Linux port call the constructor only once at
allocation time, objects may contain residual data when retrieved
from the cache. Therefore zio_create() was updated to zero out the two
new fields.
- vdev_mirror_pending() relied on the depth of the per-vdev pending queue
(vq->vq_pending_tree) to select the least-busy leaf vdev to read from.
This tree has been replaced by vq->vq_active_tree which is now used
for the same purpose.
- vdev_queue_init() used the value of zfs_vdev_max_pending to determine
the number of vdev I/O buffers to pre-allocate. That global no longer
exists, so we instead use the sum of the *_max_active values for each of
the five I/O classes described above.
- The Illumos implementation of dmu_tx_delay() delays a transaction by
sleeping on a condition variable embedded in the thread
(curthread->t_delay_cv). We do not have an equivalent CV to use in
Linux, so this change replaced the delay logic with a wrapper called
zfs_sleep_until(). This wrapper could be adopted upstream and in other
downstream ports to abstract away operating system-specific delay logic.
- These tunables are added as module parameters, and descriptions added
to the zfs-module-parameters.5 man page.
spa_asize_inflation
zfs_deadman_synctime_ms
zfs_vdev_max_active
zfs_vdev_async_write_active_min_dirty_percent
zfs_vdev_async_write_active_max_dirty_percent
zfs_vdev_async_read_max_active
zfs_vdev_async_read_min_active
zfs_vdev_async_write_max_active
zfs_vdev_async_write_min_active
zfs_vdev_scrub_max_active
zfs_vdev_scrub_min_active
zfs_vdev_sync_read_max_active
zfs_vdev_sync_read_min_active
zfs_vdev_sync_write_max_active
zfs_vdev_sync_write_min_active
zfs_dirty_data_max_percent
zfs_delay_min_dirty_percent
zfs_dirty_data_max_max_percent
zfs_dirty_data_max
zfs_dirty_data_max_max
zfs_dirty_data_sync
zfs_delay_scale
The latter four have type unsigned long, whereas they are uint64_t in
Illumos. This accommodates Linux's module_param() supported types, but
means they may overflow on 32-bit architectures.
The values zfs_dirty_data_max and zfs_dirty_data_max_max are the most
likely to overflow on 32-bit systems, since they express physical RAM
sizes in bytes. In fact, Illumos initializes zfs_dirty_data_max_max to
2^32 which does overflow. To resolve that, this port instead initializes
it in arc_init() to 25% of physical RAM, and adds the tunable
zfs_dirty_data_max_max_percent to override that percentage. While this
solution doesn't completely avoid the overflow issue, it should be a
reasonable default for most systems, and the minority of affected
systems can work around the issue by overriding the defaults.
- Fixed reversed logic in comment above zfs_delay_scale declaration.
- Clarified comments in vdev_queue.c regarding when per-queue minimums take
effect.
- Replaced dmu_tx_write_limit in the dmu_tx kstat file
with dmu_tx_dirty_delay and dmu_tx_dirty_over_max. The first counts
how many times a transaction has been delayed because the pool dirty
data has exceeded zfs_delay_min_dirty_percent. The latter counts how
many times the pool dirty data has exceeded zfs_dirty_data_max (which
we expect to never happen).
- The original patch would have regressed the bug fixed in
zfsonlinux/zfs@c418410, which prevented users from setting the
zfs_vdev_aggregation_limit tuning larger than SPA_MAXBLOCKSIZE.
A similar fix is added to vdev_queue_aggregate().
- In vdev_queue_io_to_issue(), dynamically allocate 'zio_t search' on the
heap instead of the stack. In Linux we can't afford such large
structures on the stack.
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Ned Bass <bass6@llnl.gov>
Reviewed by: Brendan Gregg <brendan.gregg@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
References:
http://www.illumos.org/issues/4045
illumos/illumos-gate@69962b5647e4a8b9b14998733b765925381b727e
Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1913
2013-08-29 07:01:20 +04:00
|
|
|
/*
|
|
|
|
* The worst case is single-sector max-parity RAID-Z blocks, in which
|
|
|
|
* case the space requirement is exactly (VDEV_RAIDZ_MAXPARITY + 1)
|
|
|
|
* times the size; so just assume that. Add to this the fact that
|
|
|
|
* we can have up to 3 DVAs per bp, and one more factor of 2 because
|
|
|
|
* the block may be dittoed with up to 3 DVAs by ddt_sync(). All together,
|
|
|
|
* the worst case is:
|
|
|
|
* (VDEV_RAIDZ_MAXPARITY + 1) * SPA_DVAS_PER_BP * 2 == 24
|
|
|
|
*/
|
|
|
|
int spa_asize_inflation = 24;
|
|
|
|
|
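/*
 * Illustrative sketch of how the inflation factor above is typically applied
 * when reserving space for a write whose physical size is not yet known
 * (cf. spa_get_worst_case_asize()).  The helper name is hypothetical.
 */
static uint64_t
example_worst_case_asize(uint64_t lsize)
{
	return (lsize == 0 ? 0 : lsize * spa_asize_inflation);
}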
2014-11-03 23:28:43 +03:00
|
|
|
/*
|
|
|
|
* Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space in
|
|
|
|
* the pool to be consumed. This ensures that we don't run the pool
|
|
|
|
* completely out of space, due to unaccounted changes (e.g. to the MOS).
|
|
|
|
* It also limits the worst-case time to allocate space. If we have
|
|
|
|
* less than this amount of free space, most ZPL operations (e.g. write,
|
|
|
|
* create) will return ENOSPC.
|
|
|
|
*
|
|
|
|
* Certain operations (e.g. file removal, most administrative actions) can
|
|
|
|
* use half the slop space. They will only return ENOSPC if less than half
|
|
|
|
* the slop space is free. Typically, once the pool has less than the slop
|
|
|
|
* space free, the user will use these operations to free up space in the pool.
|
|
|
|
* These are the operations that call dsl_pool_adjustedsize() with the netfree
|
|
|
|
* argument set to TRUE.
|
|
|
|
*
|
2016-12-17 01:11:29 +03:00
|
|
|
* Operations that are almost guaranteed to free up space in the absence of
|
|
|
|
* a pool checkpoint can use up to three quarters of the slop space
|
|
|
|
* (e.g. zfs destroy).
|
|
|
|
*
|
2014-11-03 23:28:43 +03:00
|
|
|
* A very restricted set of operations are always permitted, regardless of
|
|
|
|
* the amount of free space. These are the operations that call
|
2016-12-17 01:11:29 +03:00
|
|
|
* dsl_sync_task(ZFS_SPACE_CHECK_NONE). If these operations result in a net
|
|
|
|
* increase in the amount of space used, it is possible to run the pool
|
|
|
|
* completely out of space, causing it to be permanently read-only.
|
2014-11-03 23:28:43 +03:00
|
|
|
*
|
2016-07-14 02:48:01 +03:00
|
|
|
* Note that on very small pools, the slop space will be larger than
|
|
|
|
* 3.2%, in an effort to have it be at least spa_min_slop (128MB),
|
|
|
|
* but we never allow it to be more than half the pool size.
|
|
|
|
*
|
2014-11-03 23:28:43 +03:00
|
|
|
* See also the comments in zfs_space_check_t.
|
|
|
|
*/
|
|
|
|
int spa_slop_shift = 5;
|
2016-07-14 02:48:01 +03:00
|
|
|
uint64_t spa_min_slop = 128 * 1024 * 1024;
|
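/*
 * Illustrative sketch of the slop rule described above: reserve
 * 1/2^spa_slop_shift of the pool, but at least spa_min_slop, and never more
 * than half the pool (cf. spa_get_slop_space()).  Helper name hypothetical.
 */
static uint64_t
example_slop_space(uint64_t pool_size)
{
	uint64_t slop = pool_size >> spa_slop_shift;

	return (MAX(slop, MIN(spa_min_slop, pool_size >> 1)));
}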
OpenZFS 9112 - Improve allocation performance on high-end systems
Overview
========
We parallelize the allocation process by creating the concept of
"allocators". There are a certain number of allocators per metaslab
group, defined by the value of a tunable at pool open time. Each
allocator for a given metaslab group has up to 2 active metaslabs; one
"primary", and one "secondary". The primary and secondary weight mean
the same thing they did in the pre-allocator world; primary metaslabs
are used for most allocations, secondary metaslabs are used for ditto
blocks being allocated in the same metaslab group. There is also the
CLAIM weight, which has been separated out from the other weights, but
that is less important to understanding the patch. The active metaslabs
for each allocator are moved from their normal place in the metaslab
tree for the group to the back of the tree. This way, they will not be
selected for use by other allocators searching for new metaslabs unless
all the passive metaslabs are unsuitable for allocations. If that does
happen, the allocators will "steal" from each other to ensure that IOs
don't fail until there is truly no space left to perform allocations.
In addition, the alloc queue for each metaslab group has been broken
into a separate queue for each allocator. We don't want to dramatically
increase the number of inflight IOs on low-end systems, because it can
significantly increase txg times. On the other hand, we want to ensure
that there are enough IOs for each allocator to allow for good
coalescing before sending the IOs to the disk. As a result, we take a
compromise path; each allocator's alloc queue max depth starts at a
certain value for every txg. Every time an IO completes, we increase the
max depth. This should hopefully provide a good balance between the two
failure modes, while not dramatically increasing complexity.
We also parallelize the spa_alloc_tree and spa_alloc_lock, which cause
very similar contention when selecting IOs to allocate. This
parallelization uses the same allocator scheme as metaslab selection.
Performance Results
===================
Performance improvements from this change can vary significantly based
on the number of CPUs in the system, whether or not the system has a
NUMA architecture, the speed of the drives, the values for the various
tunables, and the workload being performed. For an fio async sequential
write workload on a 24 core NUMA system with 256 GB of RAM and 8 128 GB
SSDs, there is a roughly 25% performance improvement.
Future Work
===========
Analysis of the performance of the system with this patch applied shows
that a significant new bottleneck is the vdev disk queues, which also
need to be parallelized. Prototyping of this change has occurred, and
there was a performance improvement, but more work needs to be done
before its stability has been verified and it is ready to be upstreamed.
Authored by: Paul Dagnelie <pcd@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Serapheim Dimitropoulos <serapheim.dimitro@delphix.com>
Reviewed by: Alexander Motin <mav@FreeBSD.org>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Gordon Ross <gwr@nexenta.com>
Ported-by: Paul Dagnelie <pcd@delphix.com>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Porting Notes:
* Fix reservation test failures by increasing tolerance.
OpenZFS-issue: https://illumos.org/issues/9112
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/3f3cc3c3
Closes #7682
2018-02-12 23:56:06 +03:00
|
|
|
int spa_allocators = 4;
|
|
|
|
|
2014-11-03 23:28:43 +03:00
|
|
|
|
2016-03-10 18:16:02 +03:00
|
|
|
/*PRINTFLIKE2*/
|
|
|
|
void
|
|
|
|
spa_load_failed(spa_t *spa, const char *fmt, ...)
|
|
|
|
{
|
|
|
|
va_list adx;
|
|
|
|
char buf[256];
|
|
|
|
|
|
|
|
va_start(adx, fmt);
|
|
|
|
(void) vsnprintf(buf, sizeof (buf), fmt, adx);
|
|
|
|
va_end(adx);
|
|
|
|
|
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debuggability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
To iterate on top of that, there are a few changes that were made to make the
import process more resilient and crash-free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared together and if there was
a mismatch then the load process was aborted and an error was returned.
The latter was a good way to ensure a pool does not get corrupted, however it
made the pool load process needlessly fragile in cases where the vdev
configuration changed or the userland configuration was outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail we simply avoid issuing reads to the invalid DVAs.
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now since the pool will import even if the config is outdated and didn't,
for instance, register a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced in order that the zpool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data in order that device evacuation take longer in
a couple tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
2016-07-22 17:39:36 +03:00
|
|
|
zfs_dbgmsg("spa_load(%s, config %s): FAILED: %s", spa->spa_name,
|
|
|
|
spa->spa_trust_config ? "trusted" : "untrusted", buf);
|
2016-03-10 18:16:02 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*PRINTFLIKE2*/
|
|
|
|
void
|
|
|
|
spa_load_note(spa_t *spa, const char *fmt, ...)
|
|
|
|
{
|
|
|
|
va_list adx;
|
|
|
|
char buf[256];
|
|
|
|
|
|
|
|
va_start(adx, fmt);
|
|
|
|
(void) vsnprintf(buf, sizeof (buf), fmt, adx);
|
|
|
|
va_end(adx);
|
|
|
|
|
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debuggability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
To iterate on top of that, there are a few changes that were made to make the
import process more resilient and crash-free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared together and if there was
a mismatch then the load process was aborted and an error was returned.
The latter was a good way to ensure a pool does not get corrupted, however it
made the pool load process needlessly fragile in cases where the vdev
configuration changed or the userland configuration was outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail we simply avoid issuing reads to the invalid DVAs.
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now since the pool will import even if the config is outdated and didn't,
for instance, register a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced in order that the zpool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data in order that device evacuation take longer in
a couple tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
2016-07-22 17:39:36 +03:00
|
|
|
zfs_dbgmsg("spa_load(%s, config %s): %s", spa->spa_name,
|
|
|
|
spa->spa_trust_config ? "trusted" : "untrusted", buf);
|
2016-03-10 18:16:02 +03:00
|
|
|
}
|
|
|
|
|
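/*
 * Usage sketch (illustrative): both helpers above take printf-style
 * arguments and route the message to zfs_dbgmsg(), tagged with the pool
 * name and whether the config is trusted.  The surrounding function and
 * error value are hypothetical.
 */
static int
example_load_phase(spa_t *spa, int error)
{
	spa_load_note(spa, "LOADING");
	if (error != 0)
		spa_load_failed(spa, "unable to open vdev tree [error=%d]",
		    error);
	return (error);
}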
2018-09-06 04:33:36 +03:00
|
|
|
/*
|
|
|
|
* By default dedup and user data indirects land in the special class
|
|
|
|
*/
|
|
|
|
int zfs_ddt_data_is_special = B_TRUE;
|
|
|
|
int zfs_user_indirect_is_special = B_TRUE;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The percentage of special class final space reserved for metadata only.
|
|
|
|
* Once we allocate 100 - zfs_special_class_metadata_reserve_pct we only
|
|
|
|
* let metadata into the class.
|
|
|
|
*/
|
|
|
|
int zfs_special_class_metadata_reserve_pct = 25;
|
|
|
|
|
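/*
 * Illustrative sketch of the reservation rule above (helper name and
 * arguments hypothetical): once more than
 * (100 - zfs_special_class_metadata_reserve_pct) percent of the special
 * class is allocated, only metadata is admitted.
 */
static boolean_t
example_special_class_admit(uint64_t alloc, uint64_t space,
    boolean_t is_metadata)
{
	uint64_t limit =
	    space * (100 - zfs_special_class_metadata_reserve_pct) / 100;

	return (is_metadata || alloc < limit);
}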
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* SPA config locking
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
static void
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_init(spa_t *spa)
|
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SCL_LOCKS; i++) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
|
|
|
mutex_init(&scl->scl_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
cv_init(&scl->scl_cv, NULL, CV_DEFAULT, NULL);
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_create_untracked(&scl->scl_count);
|
2008-12-03 23:09:06 +03:00
|
|
|
scl->scl_writer = NULL;
|
|
|
|
scl->scl_write_wanted = 0;
|
|
|
|
}
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_destroy(spa_t *spa)
|
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SCL_LOCKS; i++) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
|
|
|
mutex_destroy(&scl->scl_lock);
|
|
|
|
cv_destroy(&scl->scl_cv);
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_destroy(&scl->scl_count);
|
2008-12-03 23:09:06 +03:00
|
|
|
ASSERT(scl->scl_writer == NULL);
|
|
|
|
ASSERT(scl->scl_write_wanted == 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
spa_config_tryenter(spa_t *spa, int locks, void *tag, krw_t rw)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SCL_LOCKS; i++) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
|
|
|
if (!(locks & (1 << i)))
|
|
|
|
continue;
|
|
|
|
mutex_enter(&scl->scl_lock);
|
|
|
|
if (rw == RW_READER) {
|
|
|
|
if (scl->scl_writer || scl->scl_write_wanted) {
|
|
|
|
mutex_exit(&scl->scl_lock);
|
2015-12-23 23:02:43 +03:00
|
|
|
spa_config_exit(spa, locks & ((1 << i) - 1),
|
|
|
|
tag);
|
2008-12-03 23:09:06 +03:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
ASSERT(scl->scl_writer != curthread);
|
2018-10-01 20:42:05 +03:00
|
|
|
if (!zfs_refcount_is_zero(&scl->scl_count)) {
|
2008-12-03 23:09:06 +03:00
|
|
|
mutex_exit(&scl->scl_lock);
|
2015-12-23 23:02:43 +03:00
|
|
|
spa_config_exit(spa, locks & ((1 << i) - 1),
|
|
|
|
tag);
|
2008-12-03 23:09:06 +03:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
scl->scl_writer = curthread;
|
|
|
|
}
|
2018-09-26 20:29:26 +03:00
|
|
|
(void) zfs_refcount_add(&scl->scl_count, tag);
|
2008-12-03 23:09:06 +03:00
|
|
|
mutex_exit(&scl->scl_lock);
|
|
|
|
}
|
|
|
|
return (1);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_enter(spa_t *spa, int locks, void *tag, krw_t rw)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2009-08-18 22:43:27 +04:00
|
|
|
int wlocks_held = 0;
|
|
|
|
|
2013-09-04 16:00:57 +04:00
|
|
|
ASSERT3U(SCL_LOCKS, <, sizeof (wlocks_held) * NBBY);
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SCL_LOCKS; i++) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
2009-08-18 22:43:27 +04:00
|
|
|
if (scl->scl_writer == curthread)
|
|
|
|
wlocks_held |= (1 << i);
|
2008-12-03 23:09:06 +03:00
|
|
|
if (!(locks & (1 << i)))
|
|
|
|
continue;
|
|
|
|
mutex_enter(&scl->scl_lock);
|
|
|
|
if (rw == RW_READER) {
|
|
|
|
while (scl->scl_writer || scl->scl_write_wanted) {
|
|
|
|
cv_wait(&scl->scl_cv, &scl->scl_lock);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
ASSERT(scl->scl_writer != curthread);
|
2018-10-01 20:42:05 +03:00
|
|
|
while (!zfs_refcount_is_zero(&scl->scl_count)) {
|
2008-12-03 23:09:06 +03:00
|
|
|
scl->scl_write_wanted++;
|
|
|
|
cv_wait(&scl->scl_cv, &scl->scl_lock);
|
|
|
|
scl->scl_write_wanted--;
|
|
|
|
}
|
|
|
|
scl->scl_writer = curthread;
|
|
|
|
}
|
2018-09-26 20:29:26 +03:00
|
|
|
(void) zfs_refcount_add(&scl->scl_count, tag);
|
2008-12-03 23:09:06 +03:00
|
|
|
mutex_exit(&scl->scl_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
OpenZFS 7614, 9064 - zfs device evacuation/removal
OpenZFS 7614 - zfs device evacuation/removal
OpenZFS 9064 - remove_mirror should wait for device removal to complete
This project allows top-level vdevs to be removed from the storage pool
with "zpool remove", reducing the total amount of storage in the pool.
This operation copies all allocated regions of the device to be removed
onto other devices, recording the mapping from old to new location.
After the removal is complete, read and free operations to the removed
(now "indirect") vdev must be remapped and performed at the new location
on disk. The indirect mapping table is kept in memory whenever the pool
is loaded, so there is minimal performance overhead when doing operations
on the indirect vdev.
The size of the in-memory mapping table will be reduced when its entries
become "obsolete" because they are no longer used by any block pointers
in the pool. An entry becomes obsolete when all the blocks that use
it are freed. An entry can also become obsolete when all the snapshots
that reference it are deleted, and the block pointers that reference it
have been "remapped" in all filesystems/zvols (and clones). Whenever an
indirect block is written, all the block pointers in it will be "remapped"
to their new (concrete) locations if possible. This process can be
accelerated by using the "zfs remap" command to proactively rewrite all
indirect blocks that reference indirect (removed) vdevs.
Note that when a device is removed, we do not verify the checksum of
the data that is copied. This makes the process much faster, but if it
were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be
possible to copy the wrong data, when we have the correct data on e.g.
the other side of the mirror.
At the moment, only mirrors and simple top-level vdevs can be removed
and no removal is allowed if any of the top-level vdevs are raidz.
Porting Notes:
* Avoid zero-sized kmem_alloc() in vdev_compact_children().
The device evacuation code adds a dependency that
vdev_compact_children() be able to properly empty the vdev_child
array by setting it to NULL and zeroing vdev_children. Under Linux,
kmem_alloc() and related functions return a sentinel pointer rather
than NULL for zero-sized allocations.
* Remove comment regarding "mpt" driver where zfs_remove_max_segment
is initialized to SPA_MAXBLOCKSIZE.
* Change zfs_condense_indirect_commit_entry_delay_ticks to
zfs_condense_indirect_commit_entry_delay_ms for consistency with
most other tunables in which delays are specified in ms.
* ZTS changes:
Use set_tunable rather than mdb
Use zpool sync as appropriate
Use sync_pool instead of sync
Kill jobs during test_removal_with_operation to allow unmount/export
Don't add non-disk names such as "mirror" or "raidz" to $DISKS
Use $TEST_BASE_DIR instead of /tmp
Increase HZ from 100 to 1000 which is more common on Linux
removal_multiple_indirection.ksh
Reduce iterations in order to not time out on the code
coverage builders.
removal_resume_export:
Functionally, the test case is correct but there exists a race
where the kernel thread hasn't been fully started yet and is
not visible. Wait for up to 1 second for the removal thread
to be started before giving up on it. Also, increase the
amount of data copied in order that the removal not finish
before the export has a chance to fail.
* MMP compatibility, the concept of concrete versus non-concrete devices
has slightly changed the semantics of vdev_writeable(). Update
mmp_random_leaf_impl() accordingly.
* Updated dbuf_remap() to handle the org.zfsonlinux:large_dnode pool
feature which is not supported by OpenZFS.
* Added support for new vdev removal tracepoints.
* Test cases removal_with_zdb and removal_condense_export have been
intentionally disabled. When run manually they pass as intended,
but when running in the automated test environment they produce
unreliable results on the latest Fedora release.
They may work better once the upstream pool import refactoring is
merged into ZoL at which point they will be re-enabled.
Authored by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Alex Reece <alex@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Reviewed by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/7614
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/f539f1eb
Closes #6900
2016-09-22 19:30:13 +03:00
|
|
|
ASSERT3U(wlocks_held, <=, locks);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_exit(spa_t *spa, int locks, void *tag)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = SCL_LOCKS - 1; i >= 0; i--) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
|
|
|
if (!(locks & (1 << i)))
|
|
|
|
continue;
|
|
|
|
mutex_enter(&scl->scl_lock);
|
2018-10-01 20:42:05 +03:00
|
|
|
ASSERT(!zfs_refcount_is_zero(&scl->scl_count));
|
|
|
|
if (zfs_refcount_remove(&scl->scl_count, tag) == 0) {
|
2008-12-03 23:09:06 +03:00
|
|
|
ASSERT(scl->scl_writer == NULL ||
|
|
|
|
scl->scl_writer == curthread);
|
|
|
|
scl->scl_writer = NULL; /* OK in either case */
|
|
|
|
cv_broadcast(&scl->scl_cv);
|
|
|
|
}
|
|
|
|
mutex_exit(&scl->scl_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
int
|
|
|
|
spa_config_held(spa_t *spa, int locks, krw_t rw)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
int locks_held = 0;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SCL_LOCKS; i++) {
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_t *scl = &spa->spa_config_lock[i];
|
|
|
|
if (!(locks & (1 << i)))
|
|
|
|
continue;
|
2018-10-01 20:42:05 +03:00
|
|
|
if ((rw == RW_READER &&
|
|
|
|
!zfs_refcount_is_zero(&scl->scl_count)) ||
|
2008-12-03 23:09:06 +03:00
|
|
|
(rw == RW_WRITER && scl->scl_writer == curthread))
|
|
|
|
locks_held |= 1 << i;
|
|
|
|
}
|
|
|
|
|
|
|
|
return (locks_held);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
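/*
 * Usage sketch (illustrative) for the config locks above: take the relevant
 * SCL_* locks as reader around code that needs a stable view of the
 * configuration, and release them with the same tag.  The function name is
 * hypothetical.
 */
static void
example_with_config_lock(spa_t *spa)
{
	spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
	/* the vdev tree cannot change while SCL_VDEV is held as reader */
	spa_config_exit(spa, SCL_VDEV, FTAG);
}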
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* SPA namespace functions
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Lookup the named spa_t in the AVL tree. The spa_namespace_lock must be held.
|
|
|
|
* Returns NULL if no matching spa_t is found.
|
|
|
|
*/
|
|
|
|
spa_t *
|
|
|
|
spa_lookup(const char *name)
|
|
|
|
{
|
2008-12-03 23:09:06 +03:00
|
|
|
static spa_t search; /* spa_t is large; don't allocate on stack */
|
|
|
|
spa_t *spa;
|
2008-11-20 23:01:55 +03:00
|
|
|
avl_index_t where;
|
|
|
|
char *cp;
|
|
|
|
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
2013-09-04 16:00:57 +04:00
|
|
|
(void) strlcpy(search.spa_name, name, sizeof (search.spa_name));
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* If it's a full dataset name, figure out the pool name and
|
|
|
|
* just use that.
|
|
|
|
*/
|
2013-12-12 02:33:41 +04:00
|
|
|
cp = strpbrk(search.spa_name, "/@#");
|
2013-09-04 16:00:57 +04:00
|
|
|
if (cp != NULL)
|
2008-11-20 23:01:55 +03:00
|
|
|
*cp = '\0';
|
|
|
|
|
|
|
|
spa = avl_find(&spa_namespace_avl, &search, &where);
|
|
|
|
|
|
|
|
return (spa);
|
|
|
|
}
|
|
|
|
|
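/*
 * Usage sketch (illustrative): spa_lookup() accepts a full dataset name, so
 * "tank/fs@snap" resolves to the spa_t for the pool "tank".  The namespace
 * lock must be held by the caller.  The helper name is hypothetical.
 */
static boolean_t
example_pool_exists(const char *anyname)
{
	spa_t *spa;

	mutex_enter(&spa_namespace_lock);
	spa = spa_lookup(anyname);
	mutex_exit(&spa_namespace_lock);

	return (spa != NULL);
}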
2013-04-30 02:49:23 +04:00
|
|
|
/*
|
|
|
|
* Fires when spa_sync has not completed within zfs_deadman_synctime_ms.
|
|
|
|
* If the zfs_deadman_enabled flag is set then it inspects all vdev queues
|
|
|
|
* looking for potentially hung I/Os.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_deadman(void *arg)
|
|
|
|
{
|
|
|
|
spa_t *spa = arg;
|
|
|
|
|
2017-02-01 01:19:08 +03:00
|
|
|
/* Disable the deadman if the pool is suspended. */
|
|
|
|
if (spa_suspended(spa))
|
|
|
|
return;
|
|
|
|
|
2013-04-30 02:49:23 +04:00
|
|
|
zfs_dbgmsg("slow spa_sync: started %llu seconds ago, calls %llu",
|
|
|
|
(gethrtime() - spa->spa_sync_starttime) / NANOSEC,
|
|
|
|
++spa->spa_deadman_calls);
|
|
|
|
if (zfs_deadman_enabled)
|
2017-12-19 01:06:07 +03:00
|
|
|
vdev_deadman(spa->spa_root_vdev, FTAG);
|
2013-04-30 02:49:23 +04:00
|
|
|
|
2016-12-01 00:56:50 +03:00
|
|
|
spa->spa_deadman_tqid = taskq_dispatch_delay(system_delay_taskq,
|
2016-03-07 16:35:29 +03:00
|
|
|
spa_deadman, spa, TQ_SLEEP, ddi_get_lbolt() +
|
2017-02-01 01:19:08 +03:00
|
|
|
MSEC_TO_TICK(zfs_deadman_checktime_ms));
|
2013-04-30 02:49:23 +04:00
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* Create an uninitialized spa_t with the given name. Requires
|
|
|
|
* spa_namespace_lock. The caller must ensure that the spa_t doesn't already
|
|
|
|
* exist by calling spa_lookup() first.
|
|
|
|
*/
|
|
|
|
spa_t *
|
2010-05-29 00:45:14 +04:00
|
|
|
spa_add(const char *name, nvlist_t *config, const char *altroot)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
spa_t *spa;
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_dirent_t *dp;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
2014-11-21 03:09:39 +03:00
|
|
|
spa = kmem_zalloc(sizeof (spa_t), KM_SLEEP);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
mutex_init(&spa->spa_async_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
mutex_init(&spa->spa_errlist_lock, NULL, MUTEX_DEFAULT, NULL);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_init(&spa->spa_errlog_lock, NULL, MUTEX_DEFAULT, NULL);
|
2015-04-02 06:44:32 +03:00
|
|
|
mutex_init(&spa->spa_evicting_os_lock, NULL, MUTEX_DEFAULT, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_init(&spa->spa_history_lock, NULL, MUTEX_DEFAULT, NULL);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_init(&spa->spa_proc_lock, NULL, MUTEX_DEFAULT, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_init(&spa->spa_props_lock, NULL, MUTEX_DEFAULT, NULL);
|
2016-06-16 01:47:05 +03:00
|
|
|
mutex_init(&spa->spa_cksum_tmpls_lock, NULL, MUTEX_DEFAULT, NULL);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_init(&spa->spa_scrub_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
mutex_init(&spa->spa_suspend_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
mutex_init(&spa->spa_vdev_top_lock, NULL, MUTEX_DEFAULT, NULL);
|
2015-04-23 22:32:59 +03:00
|
|
|
mutex_init(&spa->spa_feat_stats_lock, NULL, MUTEX_DEFAULT, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
cv_init(&spa->spa_async_cv, NULL, CV_DEFAULT, NULL);
|
2015-04-02 06:44:32 +03:00
|
|
|
cv_init(&spa->spa_evicting_os_cv, NULL, CV_DEFAULT, NULL);
|
2010-05-29 00:45:14 +04:00
|
|
|
cv_init(&spa->spa_proc_cv, NULL, CV_DEFAULT, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
cv_init(&spa->spa_scrub_io_cv, NULL, CV_DEFAULT, NULL);
|
2008-12-03 23:09:06 +03:00
|
|
|
cv_init(&spa->spa_suspend_cv, NULL, CV_DEFAULT, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int t = 0; t < TXG_SIZE; t++)
|
2010-05-29 00:45:14 +04:00
|
|
|
bplist_create(&spa->spa_free_bplist[t]);
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
(void) strlcpy(spa->spa_name, name, sizeof (spa->spa_name));
|
2008-11-20 23:01:55 +03:00
|
|
|
spa->spa_state = POOL_STATE_UNINITIALIZED;
|
|
|
|
spa->spa_freeze_txg = UINT64_MAX;
|
|
|
|
spa->spa_final_txg = UINT64_MAX;
|
2010-05-29 00:45:14 +04:00
|
|
|
spa->spa_load_max_txg = UINT64_MAX;
|
|
|
|
spa->spa_proc = &p0;
|
|
|
|
spa->spa_proc_state = SPA_PROC_NONE;
|
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debuggability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
To iterate on top of that, there are a few changes that were made to make the
import process more resilient and crash-free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared together and if there was
a mismatch then the load process was aborted and an error was returned.
The latter was a good way to ensure a pool does not get corrupted, however it
made the pool load process needlessly fragile in cases where the vdev
configuration changed or the userland configuration was outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail we simply avoid issuing reads to the invalid DVAs.
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now since the pool will import even if the config is outdated and didn't,
for instance, register a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced in order that the zpool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data in order that device evacuation take longer in
a couple tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
2016-07-22 17:39:36 +03:00
|
|
|
spa->spa_trust_config = B_TRUE;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
Illumos #4045 write throttle & i/o scheduler performance work
4045 zfs write throttle & i/o scheduler performance work
1. The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync
read, sync write, async read, async write, and scrub/resilver. The scheduler
issues a number of concurrent i/os from each class to the device. Once a class
has been selected, an i/o is selected from this class using either an elevator
algorithm (async, scrub classes) or FIFO (sync classes). The number of
concurrent async write i/os is tuned dynamically based on i/o load, to achieve
good sync i/o latency when there is not a high load of writes, and good write
throughput when there is. See the block comment in vdev_queue.c (reproduced
below) for more details.
2. The write throttle (dsl_pool_tempreserve_space() and
txg_constrain_throughput()) is rewritten to produce much more consistent delays
when under constant load. The new write throttle is based on the amount of
dirty data, rather than guesses about future performance of the system. When
there is a lot of dirty data, each transaction (e.g. write() syscall) will be
delayed by the same small amount. This eliminates the "brick wall of wait"
that the old write throttle could hit, causing all transactions to wait several
seconds until the next txg opens. One of the keys to the new write throttle is
decrementing the amount of dirty data as i/o completes, rather than at the end
of spa_sync(). Note that the write throttle is only applied once the i/o
scheduler is issuing the maximum number of outstanding async writes. See the
block comments in dsl_pool.c and above dmu_tx_delay() (reproduced below) for
more details.
This diff has several other effects, including:
* the commonly-tuned global variable zfs_vdev_max_pending has been removed;
use per-class zfs_vdev_*_max_active values or zfs_vdev_max_active instead.
* the size of each txg (meaning the amount of dirty data written, and thus the
time it takes to write out) is now controlled differently. There is no longer
an explicit time goal; the primary determinant is the amount of dirty data.
Systems that are under light or medium load will now often see that a txg is
always syncing, but the impact to performance (e.g. read latency) is minimal.
Tune zfs_dirty_data_max and zfs_dirty_data_sync to control this.
* zio_taskq_batch_pct = 75 -- Only use 75% of all CPUs for compression,
checksum, etc. This improves latency by not allowing these CPU-intensive tasks
to consume all CPU (on machines with at least 4 CPUs; the percentage is
rounded up).
--matt
APPENDIX: problems with the current i/o scheduler
The current ZFS i/o scheduler (vdev_queue.c) is deadline based. The problem
with this is that if there are always i/os pending, then certain classes of
i/os can see very long delays.
For example, if there are always synchronous reads outstanding, then no async
writes will be serviced until they become "past due". One symptom of this
situation is that each pass of the txg sync takes at least several seconds
(typically 3 seconds).
If many i/os become "past due" (their deadline is in the past), then we must
service all of these overdue i/os before any new i/os. This happens when we
enqueue a batch of async writes for the txg sync, with deadlines 2.5 seconds in
the future. If we can't complete all the i/os in 2.5 seconds (e.g. because
there were always reads pending), then these i/os will become past due. Now we
must service all the "async" writes (which could be hundreds of megabytes)
before we service any reads, introducing considerable latency to synchronous
i/os (reads or ZIL writes).
Notes on porting to ZFS on Linux:
- zio_t gained new members io_physdone and io_phys_children. Because
object caches in the Linux port call the constructor only once at
allocation time, objects may contain residual data when retrieved
from the cache. Therefore zio_create() was updated to zero out the two
new fields.
- vdev_mirror_pending() relied on the depth of the per-vdev pending queue
(vq->vq_pending_tree) to select the least-busy leaf vdev to read from.
This tree has been replaced by vq->vq_active_tree which is now used
for the same purpose.
- vdev_queue_init() used the value of zfs_vdev_max_pending to determine
the number of vdev I/O buffers to pre-allocate. That global no longer
exists, so we instead use the sum of the *_max_active values for each of
the five I/O classes described above.
- The Illumos implementation of dmu_tx_delay() delays a transaction by
sleeping on a condition variable embedded in the thread
(curthread->t_delay_cv). We do not have an equivalent CV to use in
Linux, so this change replaced the delay logic with a wrapper called
zfs_sleep_until(). This wrapper could be adopted upstream and in other
downstream ports to abstract away operating system-specific delay logic.
- These tunables are added as module parameters, and descriptions added
to the zfs-module-parameters.5 man page.
spa_asize_inflation
zfs_deadman_synctime_ms
zfs_vdev_max_active
zfs_vdev_async_write_active_min_dirty_percent
zfs_vdev_async_write_active_max_dirty_percent
zfs_vdev_async_read_max_active
zfs_vdev_async_read_min_active
zfs_vdev_async_write_max_active
zfs_vdev_async_write_min_active
zfs_vdev_scrub_max_active
zfs_vdev_scrub_min_active
zfs_vdev_sync_read_max_active
zfs_vdev_sync_read_min_active
zfs_vdev_sync_write_max_active
zfs_vdev_sync_write_min_active
zfs_dirty_data_max_percent
zfs_delay_min_dirty_percent
zfs_dirty_data_max_max_percent
zfs_dirty_data_max
zfs_dirty_data_max_max
zfs_dirty_data_sync
zfs_delay_scale
The latter four have type unsigned long, whereas they are uint64_t in
Illumos. This accommodates Linux's module_param() supported types, but
means they may overflow on 32-bit architectures.
The values zfs_dirty_data_max and zfs_dirty_data_max_max are the most
likely to overflow on 32-bit systems, since they express physical RAM
sizes in bytes. In fact, Illumos initializes zfs_dirty_data_max_max to
2^32 which does overflow. To resolve that, this port instead initializes
it in arc_init() to 25% of physical RAM, and adds the tunable
zfs_dirty_data_max_max_percent to override that percentage. While this
solution doesn't completely avoid the overflow issue, it should be a
reasonable default for most systems, and the minority of affected
systems can work around the issue by overriding the defaults.
- Fixed reversed logic in comment above zfs_delay_scale declaration.
- Clarified comments in vdev_queue.c regarding when per-queue minimums take
effect.
- Replaced dmu_tx_write_limit in the dmu_tx kstat file
with dmu_tx_dirty_delay and dmu_tx_dirty_over_max. The first counts
how many times a transaction has been delayed because the pool dirty
data has exceeded zfs_delay_min_dirty_percent. The latter counts how
many times the pool dirty data has exceeded zfs_dirty_data_max (which
we expect to never happen).
- The original patch would have regressed the bug fixed in
zfsonlinux/zfs@c418410, which prevented users from setting the
zfs_vdev_aggregation_limit tuning larger than SPA_MAXBLOCKSIZE.
A similar fix is added to vdev_queue_aggregate().
- In vdev_queue_io_to_issue(), dynamically allocate 'zio_t search' on the
heap instead of the stack. In Linux we can't afford such large
structures on the stack.
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Ned Bass <bass6@llnl.gov>
Reviewed by: Brendan Gregg <brendan.gregg@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
References:
http://www.illumos.org/issues/4045
illumos/illumos-gate@69962b5647e4a8b9b14998733b765925381b727e
Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1913
2013-08-29 07:01:20 +04:00
|
|
|
spa->spa_deadman_synctime = MSEC2NSEC(zfs_deadman_synctime_ms);
|
2017-12-19 01:06:07 +03:00
|
|
|
spa->spa_deadman_ziotime = MSEC2NSEC(zfs_deadman_ziotime_ms);
|
|
|
|
spa_set_deadman_failmode(spa, zfs_deadman_failmode);
|
2013-04-30 02:49:23 +04:00
|
|
|
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_create(&spa->spa_refcount);
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_init(spa);
|
Add visibility in to arc_read
This change is an attempt to add visibility into the arc_read calls
occurring on a system, in real time. To do this, a list was added to the
in memory SPA data structure for a pool, with each element on the list
corresponding to a call to arc_read. These entries are then exported
through the kstat interface, which can then be interpreted in userspace.
For each arc_read call, the following information is exported:
* A unique identifier (uint64_t)
* The time the entry was added to the list (hrtime_t)
(*not* wall clock time; relative to the other entries on the list)
* The objset ID (uint64_t)
* The object number (uint64_t)
* The indirection level (uint64_t)
* The block ID (uint64_t)
* The name of the function originating the arc_read call (char[24])
* The arc_flags from the arc_read call (uint32_t)
* The PID of the reading thread (pid_t)
* The command or name of thread originating read (char[16])
From this exported information one can see, in real time, exactly what
is being read, what function is generating the read, and whether or not
the read was found to be already cached.
There is still some work to be done, but this should serve as a good
starting point.
Specifically, dbuf_read's are not accounted for in the currently
exported information. Thus, a follow up patch should probably be added
to export these calls that never call into arc_read (they only hit the
dbuf hash table). In addition, it might be nice to create a utility
similar to "arcstat.py" to digest the exported information and display
it in a more readable format. Or perhaps, log the information and allow
for it to be "replayed" at a later time.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
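To make the exported record concrete, a structure carrying the fields
listed above might look like the sketch below; the type and member
names are illustrative, since the actual kstat plumbing is not part of
this excerpt.

typedef struct spa_read_history {
	uint64_t	uid;		/* unique identifier */
	hrtime_t	start;		/* time added to the list */
	uint64_t	objset;		/* objset ID */
	uint64_t	object;		/* object number */
	uint64_t	level;		/* indirection level */
	uint64_t	blkid;		/* block ID */
	char		origin[24];	/* originating function */
	uint32_t	aflags;		/* arc_flags from the call */
	pid_t		pid;		/* PID of the reading thread */
	char		comm[16];	/* command/name of the thread */
	list_node_t	srh_link;	/* linkage on the per-pool list */
} spa_read_history_t;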
2013-09-07 03:09:05 +04:00
|
|
|
spa_stats_init(spa);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
avl_add(&spa_namespace_avl, spa);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set the alternate root, if there is one.
|
|
|
|
*/
|
2015-04-26 07:25:45 +03:00
|
|
|
if (altroot)
|
2008-11-20 23:01:55 +03:00
|
|
|
spa->spa_root = spa_strdup(altroot);
|
|
|
|
|
2018-02-12 23:56:06 +03:00
|
|
|
spa->spa_alloc_count = spa_allocators;
|
|
|
|
spa->spa_alloc_locks = kmem_zalloc(spa->spa_alloc_count *
|
|
|
|
sizeof (kmutex_t), KM_SLEEP);
|
|
|
|
spa->spa_alloc_trees = kmem_zalloc(spa->spa_alloc_count *
|
|
|
|
sizeof (avl_tree_t), KM_SLEEP);
|
|
|
|
for (int i = 0; i < spa->spa_alloc_count; i++) {
|
|
|
|
mutex_init(&spa->spa_alloc_locks[i], NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
avl_create(&spa->spa_alloc_trees[i], zio_bookmark_compare,
|
|
|
|
sizeof (zio_t), offsetof(zio_t, io_alloc_node));
|
|
|
|
}
|
2016-10-14 03:59:18 +03:00
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
/*
|
|
|
|
* Every pool starts with the default cachefile
|
|
|
|
*/
|
|
|
|
list_create(&spa->spa_config_list, sizeof (spa_config_dirent_t),
|
|
|
|
offsetof(spa_config_dirent_t, scd_link));
|
|
|
|
|
2014-11-21 03:09:39 +03:00
|
|
|
dp = kmem_zalloc(sizeof (spa_config_dirent_t), KM_SLEEP);
|
2010-05-29 00:45:14 +04:00
|
|
|
dp->scd_path = altroot ? NULL : spa_strdup(spa_config_path);
|
2008-12-03 23:09:06 +03:00
|
|
|
list_insert_head(&spa->spa_config_list, dp);
|
|
|
|
|
2010-08-27 01:24:34 +04:00
|
|
|
VERIFY(nvlist_alloc(&spa->spa_load_info, NV_UNIQUE_NAME,
|
2014-11-21 03:09:39 +03:00
|
|
|
KM_SLEEP) == 0);
|
2010-08-27 01:24:34 +04:00
|
|
|
|
2012-12-14 03:24:15 +04:00
|
|
|
if (config != NULL) {
|
|
|
|
nvlist_t *features;
|
|
|
|
|
|
|
|
if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_FEATURES_FOR_READ,
|
|
|
|
&features) == 0) {
|
|
|
|
VERIFY(nvlist_dup(features, &spa->spa_label_features,
|
|
|
|
0) == 0);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
VERIFY(nvlist_dup(config, &spa->spa_config, 0) == 0);
|
2012-12-14 03:24:15 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
if (spa->spa_label_features == NULL) {
|
|
|
|
VERIFY(nvlist_alloc(&spa->spa_label_features, NV_UNIQUE_NAME,
|
2014-11-21 03:09:39 +03:00
|
|
|
KM_SLEEP) == 0);
|
2012-12-14 03:24:15 +04:00
|
|
|
}
|
2010-05-29 00:45:14 +04:00
|
|
|
|
2015-05-20 07:14:01 +03:00
|
|
|
spa->spa_min_ashift = INT_MAX;
|
|
|
|
spa->spa_max_ashift = 0;
|
|
|
|
|
2016-12-03 02:59:35 +03:00
|
|
|
/* Reset cached value */
|
|
|
|
spa->spa_dedup_dspace = ~0ULL;
|
|
|
|
|
2013-12-09 22:37:51 +04:00
|
|
|
/*
|
|
|
|
* As a pool is being created, treat all features as disabled by
|
|
|
|
* setting SPA_FEATURE_DISABLED for all entries in the feature
|
|
|
|
* refcount cache.
|
|
|
|
*/
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int i = 0; i < SPA_FEATURES; i++) {
|
2013-12-09 22:37:51 +04:00
|
|
|
spa->spa_feat_refcount_cache[i] = SPA_FEATURE_DISABLED;
|
|
|
|
}
|
|
|
|
|
2019-03-12 20:37:06 +03:00
|
|
|
list_create(&spa->spa_leaf_list, sizeof (vdev_t),
|
|
|
|
offsetof(vdev_t, vdev_leaf_node));
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
return (spa);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Removes a spa_t from the namespace, freeing up any memory used. Requires
|
|
|
|
* spa_namespace_lock. This is called only after the spa_t has been closed and
|
|
|
|
* deactivated.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_remove(spa_t *spa)
|
|
|
|
{
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_dirent_t *dp;
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
ASSERT(spa->spa_state == POOL_STATE_UNINITIALIZED);
|
2018-10-01 20:42:05 +03:00
|
|
|
ASSERT3U(zfs_refcount_count(&spa->spa_refcount), ==, 0);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
nvlist_free(spa->spa_config_splitting);
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
avl_remove(&spa_namespace_avl, spa);
|
|
|
|
cv_broadcast(&spa_namespace_cv);
|
|
|
|
|
2015-04-26 07:25:45 +03:00
|
|
|
if (spa->spa_root)
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_strfree(spa->spa_root);
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
while ((dp = list_head(&spa->spa_config_list)) != NULL) {
|
|
|
|
list_remove(&spa->spa_config_list, dp);
|
|
|
|
if (dp->scd_path != NULL)
|
|
|
|
spa_strfree(dp->scd_path);
|
|
|
|
kmem_free(dp, sizeof (spa_config_dirent_t));
|
|
|
|
}
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2018-02-12 23:56:06 +03:00
|
|
|
for (int i = 0; i < spa->spa_alloc_count; i++) {
|
|
|
|
avl_destroy(&spa->spa_alloc_trees[i]);
|
|
|
|
mutex_destroy(&spa->spa_alloc_locks[i]);
|
|
|
|
}
|
|
|
|
kmem_free(spa->spa_alloc_locks, spa->spa_alloc_count *
|
|
|
|
sizeof (kmutex_t));
|
|
|
|
kmem_free(spa->spa_alloc_trees, spa->spa_alloc_count *
|
|
|
|
sizeof (avl_tree_t));
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
list_destroy(&spa->spa_config_list);
|
2019-03-12 20:37:06 +03:00
|
|
|
list_destroy(&spa->spa_leaf_list);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2012-12-14 03:24:15 +04:00
|
|
|
nvlist_free(spa->spa_label_features);
|
2010-08-27 01:24:34 +04:00
|
|
|
nvlist_free(spa->spa_load_info);
|
2015-02-26 23:24:11 +03:00
|
|
|
nvlist_free(spa->spa_feat_stats);
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_config_set(spa, NULL);
|
|
|
|
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_destroy(&spa->spa_refcount);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2013-09-07 03:09:05 +04:00
|
|
|
spa_stats_destroy(spa);
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_lock_destroy(spa);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int t = 0; t < TXG_SIZE; t++)
|
2010-05-29 00:45:14 +04:00
|
|
|
bplist_destroy(&spa->spa_free_bplist[t]);
|
|
|
|
|
2016-06-16 01:47:05 +03:00
|
|
|
zio_checksum_templates_free(spa);
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
cv_destroy(&spa->spa_async_cv);
|
2015-04-02 06:44:32 +03:00
|
|
|
cv_destroy(&spa->spa_evicting_os_cv);
|
2010-05-29 00:45:14 +04:00
|
|
|
cv_destroy(&spa->spa_proc_cv);
|
2008-11-20 23:01:55 +03:00
|
|
|
cv_destroy(&spa->spa_scrub_io_cv);
|
2008-12-03 23:09:06 +03:00
|
|
|
cv_destroy(&spa->spa_suspend_cv);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
mutex_destroy(&spa->spa_async_lock);
|
|
|
|
mutex_destroy(&spa->spa_errlist_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_destroy(&spa->spa_errlog_lock);
|
2015-04-02 06:44:32 +03:00
|
|
|
mutex_destroy(&spa->spa_evicting_os_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_destroy(&spa->spa_history_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_destroy(&spa->spa_proc_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_destroy(&spa->spa_props_lock);
|
2016-06-16 01:47:05 +03:00
|
|
|
mutex_destroy(&spa->spa_cksum_tmpls_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_destroy(&spa->spa_scrub_lock);
|
2008-12-03 23:09:06 +03:00
|
|
|
mutex_destroy(&spa->spa_suspend_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_destroy(&spa->spa_vdev_top_lock);
|
2015-04-23 22:32:59 +03:00
|
|
|
mutex_destroy(&spa->spa_feat_stats_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
kmem_free(spa, sizeof (spa_t));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Given a pool, return the next pool in the namespace, or NULL if there is
|
|
|
|
* none. If 'prev' is NULL, return the first pool.
|
|
|
|
*/
|
|
|
|
spa_t *
|
|
|
|
spa_next(spa_t *prev)
|
|
|
|
{
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
|
|
|
if (prev)
|
|
|
|
return (AVL_NEXT(&spa_namespace_avl, prev));
|
|
|
|
else
|
|
|
|
return (avl_first(&spa_namespace_avl));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* SPA refcount functions
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Add a reference to the given spa_t. Must have at least one reference, or
|
|
|
|
* have the namespace lock held.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_open_ref(spa_t *spa, void *tag)
|
|
|
|
{
|
2018-10-01 20:42:05 +03:00
|
|
|
ASSERT(zfs_refcount_count(&spa->spa_refcount) >= spa->spa_minref ||
|
2008-11-20 23:01:55 +03:00
|
|
|
MUTEX_HELD(&spa_namespace_lock));
|
2018-09-26 20:29:26 +03:00
|
|
|
(void) zfs_refcount_add(&spa->spa_refcount, tag);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Remove a reference to the given spa_t. Must have at least one reference, or
|
|
|
|
* have the namespace lock held.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_close(spa_t *spa, void *tag)
|
|
|
|
{
|
2018-10-01 20:42:05 +03:00
|
|
|
ASSERT(zfs_refcount_count(&spa->spa_refcount) > spa->spa_minref ||
|
2008-11-20 23:01:55 +03:00
|
|
|
MUTEX_HELD(&spa_namespace_lock));
|
2018-10-01 20:42:05 +03:00
|
|
|
(void) zfs_refcount_remove(&spa->spa_refcount, tag);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2015-04-02 06:44:32 +03:00
|
|
|
/*
|
|
|
|
* Remove a reference to the given spa_t held by a dsl dir that is
|
|
|
|
* being asynchronously released. Async releases occur from a taskq
|
|
|
|
* performing eviction of dsl datasets and dirs. The namespace lock
|
|
|
|
* isn't held and the hold by the object being evicted may contribute to
|
|
|
|
* spa_minref (e.g. dataset or directory released during pool export),
|
|
|
|
* so the asserts in spa_close() do not apply.
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_async_close(spa_t *spa, void *tag)
|
|
|
|
{
|
2018-10-01 20:42:05 +03:00
|
|
|
(void) zfs_refcount_remove(&spa->spa_refcount, tag);
|
2015-04-02 06:44:32 +03:00
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* Check to see if the spa refcount is zero. Must be called with
|
2008-12-03 23:09:06 +03:00
|
|
|
* spa_namespace_lock held. We really compare against spa_minref, which is the
|
2008-11-20 23:01:55 +03:00
|
|
|
* number of references acquired when opening a pool.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
spa_refcount_zero(spa_t *spa)
|
|
|
|
{
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
2018-10-01 20:42:05 +03:00
|
|
|
return (zfs_refcount_count(&spa->spa_refcount) == spa->spa_minref);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* SPA spare and l2cache tracking
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Hot spares and cache devices are tracked using the same code below,
|
|
|
|
* for 'auxiliary' devices.
|
|
|
|
*/
|
|
|
|
|
|
|
|
typedef struct spa_aux {
|
|
|
|
uint64_t aux_guid;
|
|
|
|
uint64_t aux_pool;
|
|
|
|
avl_node_t aux_avl;
|
|
|
|
int aux_count;
|
|
|
|
} spa_aux_t;
|
|
|
|
|
2016-08-27 21:12:53 +03:00
|
|
|
static inline int
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_aux_compare(const void *a, const void *b)
|
|
|
|
{
|
2016-08-27 21:12:53 +03:00
|
|
|
const spa_aux_t *sa = (const spa_aux_t *)a;
|
|
|
|
const spa_aux_t *sb = (const spa_aux_t *)b;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2016-08-27 21:12:53 +03:00
|
|
|
return (AVL_CMP(sa->aux_guid, sb->aux_guid));
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_aux_add(vdev_t *vd, avl_tree_t *avl)
|
|
|
|
{
|
|
|
|
avl_index_t where;
|
|
|
|
spa_aux_t search;
|
|
|
|
spa_aux_t *aux;
|
|
|
|
|
|
|
|
search.aux_guid = vd->vdev_guid;
|
|
|
|
if ((aux = avl_find(avl, &search, &where)) != NULL) {
|
|
|
|
aux->aux_count++;
|
|
|
|
} else {
|
2014-11-21 03:09:39 +03:00
|
|
|
aux = kmem_zalloc(sizeof (spa_aux_t), KM_SLEEP);
|
2008-11-20 23:01:55 +03:00
|
|
|
aux->aux_guid = vd->vdev_guid;
|
|
|
|
aux->aux_count = 1;
|
|
|
|
avl_insert(avl, aux, where);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_aux_remove(vdev_t *vd, avl_tree_t *avl)
|
|
|
|
{
|
|
|
|
spa_aux_t search;
|
|
|
|
spa_aux_t *aux;
|
|
|
|
avl_index_t where;
|
|
|
|
|
|
|
|
search.aux_guid = vd->vdev_guid;
|
|
|
|
aux = avl_find(avl, &search, &where);
|
|
|
|
|
|
|
|
ASSERT(aux != NULL);
|
|
|
|
|
|
|
|
if (--aux->aux_count == 0) {
|
|
|
|
avl_remove(avl, aux);
|
|
|
|
kmem_free(aux, sizeof (spa_aux_t));
|
|
|
|
} else if (aux->aux_pool == spa_guid(vd->vdev_spa)) {
|
|
|
|
aux->aux_pool = 0ULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_aux_exists(uint64_t guid, uint64_t *pool, int *refcnt, avl_tree_t *avl)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
spa_aux_t search, *found;
|
|
|
|
|
|
|
|
search.aux_guid = guid;
|
2008-12-03 23:09:06 +03:00
|
|
|
found = avl_find(avl, &search, NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
if (pool) {
|
|
|
|
if (found)
|
|
|
|
*pool = found->aux_pool;
|
|
|
|
else
|
|
|
|
*pool = 0ULL;
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
if (refcnt) {
|
|
|
|
if (found)
|
|
|
|
*refcnt = found->aux_count;
|
|
|
|
else
|
|
|
|
*refcnt = 0;
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
return (found != NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_aux_activate(vdev_t *vd, avl_tree_t *avl)
|
|
|
|
{
|
|
|
|
spa_aux_t search, *found;
|
|
|
|
avl_index_t where;
|
|
|
|
|
|
|
|
search.aux_guid = vd->vdev_guid;
|
|
|
|
found = avl_find(avl, &search, &where);
|
|
|
|
ASSERT(found != NULL);
|
|
|
|
ASSERT(found->aux_pool == 0ULL);
|
|
|
|
|
|
|
|
found->aux_pool = spa_guid(vd->vdev_spa);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Spares are tracked globally due to the following constraints:
|
|
|
|
*
|
|
|
|
* - A spare may be part of multiple pools.
|
|
|
|
* - A spare may be added to a pool even if it's actively in use within
|
|
|
|
* another pool.
|
|
|
|
* - A spare in use in any pool can only be the source of a replacement if
|
|
|
|
* the target is a spare in the same pool.
|
|
|
|
*
|
|
|
|
* We keep track of all spares on the system through the use of a reference
|
|
|
|
* counted AVL tree. When a vdev is added as a spare, or used as a replacement
|
|
|
|
* spare, then we bump the reference count in the AVL tree. In addition, we set
|
|
|
|
* the 'vdev_isspare' member to indicate that the device is a spare (active or
|
|
|
|
* inactive). When a spare is made active (used to replace a device in the
|
|
|
|
* pool), we also keep track of which pool it's been made a part of.
|
|
|
|
*
|
|
|
|
* The 'spa_spare_lock' protects the AVL tree. These functions are normally
|
|
|
|
* called under the spa_namespace lock as part of vdev reconfiguration. The
|
|
|
|
* separate spare lock exists for the status query path, which does not need to
|
|
|
|
* be completely consistent with respect to other vdev configuration changes.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int
|
|
|
|
spa_spare_compare(const void *a, const void *b)
|
|
|
|
{
|
|
|
|
return (spa_aux_compare(a, b));
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_spare_add(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_spare_lock);
|
|
|
|
ASSERT(!vd->vdev_isspare);
|
|
|
|
spa_aux_add(vd, &spa_spare_avl);
|
|
|
|
vd->vdev_isspare = B_TRUE;
|
|
|
|
mutex_exit(&spa_spare_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_spare_remove(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_spare_lock);
|
|
|
|
ASSERT(vd->vdev_isspare);
|
|
|
|
spa_aux_remove(vd, &spa_spare_avl);
|
|
|
|
vd->vdev_isspare = B_FALSE;
|
|
|
|
mutex_exit(&spa_spare_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_spare_exists(uint64_t guid, uint64_t *pool, int *refcnt)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
boolean_t found;
|
|
|
|
|
|
|
|
mutex_enter(&spa_spare_lock);
|
2008-12-03 23:09:06 +03:00
|
|
|
found = spa_aux_exists(guid, pool, refcnt, &spa_spare_avl);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_exit(&spa_spare_lock);
|
|
|
|
|
|
|
|
return (found);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_spare_activate(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_spare_lock);
|
|
|
|
ASSERT(vd->vdev_isspare);
|
|
|
|
spa_aux_activate(vd, &spa_spare_avl);
|
|
|
|
mutex_exit(&spa_spare_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Level 2 ARC devices are tracked globally for the same reasons as spares.
|
|
|
|
* Cache devices currently only support one pool per cache device, and so
|
|
|
|
* for these devices the aux reference count is currently unused beyond 1.
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int
|
|
|
|
spa_l2cache_compare(const void *a, const void *b)
|
|
|
|
{
|
|
|
|
return (spa_aux_compare(a, b));
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_l2cache_add(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_l2cache_lock);
|
|
|
|
ASSERT(!vd->vdev_isl2cache);
|
|
|
|
spa_aux_add(vd, &spa_l2cache_avl);
|
|
|
|
vd->vdev_isl2cache = B_TRUE;
|
|
|
|
mutex_exit(&spa_l2cache_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_l2cache_remove(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_l2cache_lock);
|
|
|
|
ASSERT(vd->vdev_isl2cache);
|
|
|
|
spa_aux_remove(vd, &spa_l2cache_avl);
|
|
|
|
vd->vdev_isl2cache = B_FALSE;
|
|
|
|
mutex_exit(&spa_l2cache_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t
|
|
|
|
spa_l2cache_exists(uint64_t guid, uint64_t *pool)
|
|
|
|
{
|
|
|
|
boolean_t found;
|
|
|
|
|
|
|
|
mutex_enter(&spa_l2cache_lock);
|
2008-12-03 23:09:06 +03:00
|
|
|
found = spa_aux_exists(guid, pool, NULL, &spa_l2cache_avl);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_exit(&spa_l2cache_lock);
|
|
|
|
|
|
|
|
return (found);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_l2cache_activate(vdev_t *vd)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa_l2cache_lock);
|
|
|
|
ASSERT(vd->vdev_isl2cache);
|
|
|
|
spa_aux_activate(vd, &spa_l2cache_avl);
|
|
|
|
mutex_exit(&spa_l2cache_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* SPA vdev locking
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Lock the given spa_t for the purpose of adding or removing a vdev.
|
|
|
|
* Grabs the global spa_namespace_lock plus the spa config lock for writing.
|
|
|
|
* It returns the next transaction group for the spa_t.
|
|
|
|
*/
|
|
|
|
uint64_t
|
|
|
|
spa_vdev_enter(spa_t *spa)
|
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_enter(&spa->spa_vdev_top_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_enter(&spa_namespace_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
return (spa_vdev_config_enter(spa));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Internal implementation for spa_vdev_enter(). Used when a vdev
|
|
|
|
* operation requires multiple syncs (i.e. removing a device) while
|
|
|
|
* keeping the spa_namespace_lock held.
|
|
|
|
*/
|
|
|
|
uint64_t
|
|
|
|
spa_vdev_config_enter(spa_t *spa)
|
|
|
|
{
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_enter(spa, SCL_ALL, spa, RW_WRITER);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
return (spa_last_synced_txg(spa) + 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2010-05-29 00:45:14 +04:00
|
|
|
* Used in combination with spa_vdev_config_enter() to allow the syncing
|
|
|
|
* of multiple transactions without releasing the spa_namespace_lock.
|
2008-11-20 23:01:55 +03:00
|
|
|
*/
|
2010-05-29 00:45:14 +04:00
|
|
|
void
|
|
|
|
spa_vdev_config_exit(spa_t *spa, vdev_t *vd, uint64_t txg, int error, char *tag)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2017-11-04 23:25:13 +03:00
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
int config_changed = B_FALSE;
|
|
|
|
|
|
|
|
ASSERT(txg > spa_last_synced_txg(spa));
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
spa->spa_pending_vdev = NULL;
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* Reassess the DTLs.
|
|
|
|
*/
|
|
|
|
vdev_dtl_reassess(spa->spa_root_vdev, 0, 0, B_FALSE);
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
if (error == 0 && !list_is_empty(&spa->spa_config_dirty_list)) {
|
2008-11-20 23:01:55 +03:00
|
|
|
config_changed = B_TRUE;
|
2010-05-29 00:45:14 +04:00
|
|
|
spa->spa_config_generation++;
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
/*
|
|
|
|
* Verify the metaslab classes.
|
|
|
|
*/
|
|
|
|
ASSERT(metaslab_class_validate(spa_normal_class(spa)) == 0);
|
|
|
|
ASSERT(metaslab_class_validate(spa_log_class(spa)) == 0);
|
2018-09-06 04:33:36 +03:00
|
|
|
ASSERT(metaslab_class_validate(spa_special_class(spa)) == 0);
|
|
|
|
ASSERT(metaslab_class_validate(spa_dedup_class(spa)) == 0);
|
2010-05-29 00:45:14 +04:00
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_exit(spa, SCL_ALL, spa);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
/*
|
|
|
|
* Panic the system if the specified tag requires it. This
|
|
|
|
* is useful for ensuring that configurations are updated
|
|
|
|
* transactionally.
|
|
|
|
*/
|
|
|
|
if (zio_injection_enabled)
|
|
|
|
zio_handle_panic_injection(spa, tag, 0);
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* Note: this txg_wait_synced() is important because it ensures
|
|
|
|
* that there won't be more than one config change per txg.
|
|
|
|
* This allows us to use the txg as the generation number.
|
|
|
|
*/
|
|
|
|
if (error == 0)
|
|
|
|
txg_wait_synced(spa->spa_dsl_pool, txg);
|
|
|
|
|
|
|
|
if (vd != NULL) {
|
Illumos #4101, #4102, #4103, #4105, #4106
4101 metaslab_debug should allow for fine-grained control
4102 space_maps should store more information about themselves
4103 space map object blocksize should be increased
4105 removing a mirrored log device results in a leaked object
4106 asynchronously load metaslab
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Sebastien Roy <seb@delphix.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Prior to this patch, space_maps were preferred solely based on the
amount of free space left in each. Unfortunately, this heuristic didn't
contain any information about the make-up of that free space, which
meant we could keep preferring and loading a highly fragmented space map
that wouldn't actually have enough contiguous space to satisfy the
allocation; then unloading that space_map and repeating the process.
This change modifies the space_maps to store additional information
about the contiguous space in the space_map, so that we can use this
information to make a better decision about which space_map to load.
This requires reallocating all space_map objects to increase their
bonus buffer sizes enough to fit the new metadata.
The above feature can be enabled via a new feature flag introduced by
this change: com.delphix:spacemap_histogram
In addition to the above, this patch allows the space_map block size to
be increased. Currently the block size is set to 4K, which has
certain implications including the following:
* 4K sector devices will not see any compression benefit
* large space_maps require more metadata on-disk
* large space_maps require more time to load (typically random reads)
Now the space_map block size can adjust as needed up to the maximum size
set via the space_map_max_blksz variable.
A bug was fixed which resulted in potentially leaking an object when
removing a mirrored log device. The previous logic for vdev_remove() did
not deal with removing top-level vdevs that are interior vdevs (i.e.
mirror) correctly. The problem would occur when removing a mirrored log
device, and result in the DTL space map object being leaked; because
top-level vdevs don't have DTL space map objects associated with them.
References:
https://www.illumos.org/issues/4101
https://www.illumos.org/issues/4102
https://www.illumos.org/issues/4103
https://www.illumos.org/issues/4105
https://www.illumos.org/issues/4106
https://github.com/illumos/illumos-gate/commit/0713e23
Porting notes:
A handful of kmem_alloc() calls were converted to kmem_zalloc(). Also,
the KM_PUSHPAGE and TQ_PUSHPAGE flags were used as necessary.
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2488
2013-10-02 01:25:53 +04:00
|
|
|
ASSERT(!vd->vdev_detached || vd->vdev_dtl_sm == NULL);
|
OpenZFS 9102 - zfs should be able to initialize storage devices
PROBLEM
========
The first access to a block incurs a performance penalty on some platforms
(e.g. AWS's EBS, VMware VMDKs). Therefore we recommend that volumes are
"thick provisioned", where supported by the platform (VMware). This can
create a large delay in getting new virtual machines up and running (or
adding storage to an existing Engine). If the thick provision step is
omitted, write performance will be suboptimal until all blocks on the LUN
have been written.
SOLUTION
=========
This feature introduces a way to 'initialize' the disks at install or in the
background to make sure we don't incur this first read penalty.
When an entire LUN is added to ZFS, we make all space available immediately,
and allow ZFS to find unallocated space and zero it out. This works with
concurrent writes to arbitrary offsets, ensuring that we don't zero out
something that has been (or is in the middle of being) written. This scheme
can also be applied to existing pools (affecting only free regions on the
vdev). Detailed design:
- new subcommand: zpool initialize [-cs] <pool> [<vdev> ...]
- start, suspend, or cancel initialization
- Creates new open-context thread for each vdev
- Thread iterates through all metaslabs in this vdev
- Each metaslab:
- select a metaslab
- load the metaslab
- mark the metaslab as being zeroed
- walk all free ranges within that metaslab and translate
them to ranges on the leaf vdev
- issue a "zeroing" I/O on the leaf vdev that corresponds to
a free range on the metaslab we're working on
- continue until all free ranges for this metaslab have been
"zeroed"
- reset/unmark the metaslab being zeroed
- if more metaslabs exist, then repeat above tasks.
- if no more metaslabs, then we're done.
- progress for the initialization is stored on-disk in the vdev’s
leaf zap object. The following information is stored:
- the last offset that has been initialized
- the state of the initialization process (i.e. active,
suspended, or canceled)
- the start time for the initialization
- progress is reported via the zpool status command and shows
information for each of the vdevs that are initializing
Porting notes:
- Added zfs_initialize_value module parameter to set the pattern
written by "zpool initialize".
- Added zfs_vdev_{initializing,removal}_{min,max}_active module options.
Authored by: George Wilson <george.wilson@delphix.com>
Reviewed by: John Wren Kennedy <john.kennedy@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: loli10K <ezomori.nozomu@gmail.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Richard Lowe <richlowe@richlowe.net>
Signed-off-by: Tim Chase <tim@chase2k.com>
Ported-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/9102
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c3963210eb
Closes #8230
2018-12-19 17:54:59 +03:00
|
|
|
if (vd->vdev_ops->vdev_op_leaf) {
|
|
|
|
mutex_enter(&vd->vdev_initialize_lock);
|
2018-12-19 19:20:39 +03:00
|
|
|
vdev_initialize_stop(vd, VDEV_INITIALIZE_CANCELED,
|
|
|
|
NULL);
|
2018-12-19 17:54:59 +03:00
|
|
|
mutex_exit(&vd->vdev_initialize_lock);
|
|
|
|
}
|
|
|
|
|
2009-01-16 00:59:39 +03:00
|
|
|
spa_config_enter(spa, SCL_ALL, spa, RW_WRITER);
|
2008-11-20 23:01:55 +03:00
|
|
|
vdev_free(vd);
|
2009-01-16 00:59:39 +03:00
|
|
|
spa_config_exit(spa, SCL_ALL, spa);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the config changed, update the config cache.
|
|
|
|
*/
|
|
|
|
if (config_changed)
|
OpenZFS 7614, 9064 - zfs device evacuation/removal
OpenZFS 7614 - zfs device evacuation/removal
OpenZFS 9064 - remove_mirror should wait for device removal to complete
This project allows top-level vdevs to be removed from the storage pool
with "zpool remove", reducing the total amount of storage in the pool.
This operation copies all allocated regions of the device to be removed
onto other devices, recording the mapping from old to new location.
After the removal is complete, read and free operations to the removed
(now "indirect") vdev must be remapped and performed at the new location
on disk. The indirect mapping table is kept in memory whenever the pool
is loaded, so there is minimal performance overhead when doing operations
on the indirect vdev.
The size of the in-memory mapping table will be reduced when its entries
become "obsolete" because they are no longer used by any block pointers
in the pool. An entry becomes obsolete when all the blocks that use
it are freed. An entry can also become obsolete when all the snapshots
that reference it are deleted, and the block pointers that reference it
have been "remapped" in all filesystems/zvols (and clones). Whenever an
indirect block is written, all the block pointers in it will be "remapped"
to their new (concrete) locations if possible. This process can be
accelerated by using the "zfs remap" command to proactively rewrite all
indirect blocks that reference indirect (removed) vdevs.
Note that when a device is removed, we do not verify the checksum of
the data that is copied. This makes the process much faster, but if it
were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be
possible to copy the wrong data, when we have the correct data on e.g.
the other side of the mirror.
At the moment, only mirrors and simple top-level vdevs can be removed
and no removal is allowed if any of the top-level vdevs are raidz.
Porting Notes:
* Avoid zero-sized kmem_alloc() in vdev_compact_children().
The device evacuation code adds a dependency that
vdev_compact_children() be able to properly empty the vdev_child
array by setting it to NULL and zeroing vdev_children. Under Linux,
kmem_alloc() and related functions return a sentinel pointer rather
than NULL for zero-sized allocations.
* Remove comment regarding "mpt" driver where zfs_remove_max_segment
is initialized to SPA_MAXBLOCKSIZE.
Change zfs_condense_indirect_commit_entry_delay_ticks to
zfs_condense_indirect_commit_entry_delay_ms for consistency with
most other tunables in which delays are specified in ms.
* ZTS changes:
Use set_tunable rather than mdb
Use zpool sync as appropriate
Use sync_pool instead of sync
Kill jobs during test_removal_with_operation to allow unmount/export
Don't add non-disk names such as "mirror" or "raidz" to $DISKS
Use $TEST_BASE_DIR instead of /tmp
Increase HZ from 100 to 1000 which is more common on Linux
removal_multiple_indirection.ksh
Reduce iterations in order to not time out on the code
coverage builders.
removal_resume_export:
Functionally, the test case is correct but there exists a race
where the kernel thread hasn't been fully started yet and is
not visible. Wait for up to 1 second for the removal thread
to be started before giving up on it. Also, increase the
amount of data copied in order that the removal not finish
before the export has a chance to fail.
* MMP compatibility, the concept of concrete versus non-concrete devices
has slightly changed the semantics of vdev_writeable(). Update
mmp_random_leaf_impl() accordingly.
* Updated dbuf_remap() to handle the org.zfsonlinux:large_dnode pool
feature which is not supported by OpenZFS.
* Added support for new vdev removal tracepoints.
* Test cases removal_with_zdb and removal_condense_export have been
intentionally disabled. When run manually they pass as intended,
but when running in the automated test environment they produce
unreliable results on the latest Fedora release.
They may work better once the upstream pool import refactoring is
merged into ZoL at which point they will be re-enabled.
Authored by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Alex Reece <alex@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Reviewed by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/7614
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/f539f1eb
Closes #6900
2016-09-22 19:30:13 +03:00
|
|
|
spa_write_cachefile(spa, B_FALSE, B_TRUE);
|
2010-05-29 00:45:14 +04:00
|
|
|
}
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
/*
|
|
|
|
* Unlock the spa_t after adding or removing a vdev. Besides undoing the
|
|
|
|
* locking of spa_vdev_enter(), we also want to make sure the transactions have
|
|
|
|
* synced to disk, and then update the global configuration cache with the new
|
|
|
|
* information.
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
spa_vdev_exit(spa_t *spa, vdev_t *vd, uint64_t txg, int error)
|
|
|
|
{
|
|
|
|
spa_vdev_config_exit(spa, vd, txg, error, FTAG);
|
2008-11-20 23:01:55 +03:00
|
|
|
mutex_exit(&spa_namespace_lock);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_exit(&spa->spa_vdev_top_lock);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
/*
|
|
|
|
* Lock the given spa_t for the purpose of changing vdev state.
|
|
|
|
*/
|
|
|
|
void
|
2010-05-29 00:45:14 +04:00
|
|
|
spa_vdev_state_enter(spa_t *spa, int oplocks)
|
2008-12-03 23:09:06 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
int locks = SCL_STATE_ALL | oplocks;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Root pools may need to read from the underlying devfs filesystem
|
|
|
|
* when opening up a vdev. Unfortunately if we're holding the
|
|
|
|
* SCL_ZIO lock it will result in a deadlock when we try to issue
|
|
|
|
* the read from the root filesystem. Instead we "prefetch"
|
|
|
|
* the associated vnodes that we need prior to opening the
|
|
|
|
* underlying devices and cache them so that we can prevent
|
|
|
|
* any I/O when we are doing the actual open.
|
|
|
|
*/
|
|
|
|
if (spa_is_root(spa)) {
|
|
|
|
int low = locks & ~(SCL_ZIO - 1);
|
|
|
|
int high = locks & ~low;
|
|
|
|
|
|
|
|
spa_config_enter(spa, high, spa, RW_WRITER);
|
|
|
|
vdev_hold(spa->spa_root_vdev);
|
|
|
|
spa_config_enter(spa, low, spa, RW_WRITER);
|
|
|
|
} else {
|
|
|
|
spa_config_enter(spa, locks, spa, RW_WRITER);
|
|
|
|
}
|
|
|
|
spa->spa_vdev_locks = locks;
|
2008-12-03 23:09:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
spa_vdev_state_exit(spa_t *spa, vdev_t *vd, int error)
|
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
boolean_t config_changed = B_FALSE;
|
2017-05-19 22:30:16 +03:00
|
|
|
vdev_t *vdev_top;
|
|
|
|
|
|
|
|
if (vd == NULL || vd == spa->spa_root_vdev) {
|
|
|
|
vdev_top = spa->spa_root_vdev;
|
|
|
|
} else {
|
|
|
|
vdev_top = vd->vdev_top;
|
|
|
|
}
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
if (vd != NULL || error == 0)
|
2017-05-19 22:30:16 +03:00
|
|
|
vdev_dtl_reassess(vdev_top, 0, 0, B_FALSE);
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
if (vd != NULL) {
|
2017-05-19 22:30:16 +03:00
|
|
|
if (vd != spa->spa_root_vdev)
|
|
|
|
vdev_state_dirty(vdev_top);
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
config_changed = B_TRUE;
|
|
|
|
spa->spa_config_generation++;
|
|
|
|
}
|
2008-12-03 23:09:06 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
if (spa_is_root(spa))
|
|
|
|
vdev_rele(spa->spa_root_vdev);
|
|
|
|
|
|
|
|
ASSERT3U(spa->spa_vdev_locks, >=, SCL_STATE_ALL);
|
|
|
|
spa_config_exit(spa, spa->spa_vdev_locks, spa);
|
2008-12-03 23:09:06 +03:00
|
|
|
|
2009-01-16 00:59:39 +03:00
|
|
|
/*
|
|
|
|
* If anything changed, wait for it to sync. This ensures that,
|
|
|
|
* from the system administrator's perspective, zpool(1M) commands
|
|
|
|
* are synchronous. This is important for things like zpool offline:
|
|
|
|
* when the command completes, you expect no further I/O from ZFS.
|
|
|
|
*/
|
|
|
|
if (vd != NULL)
|
|
|
|
txg_wait_synced(spa->spa_dsl_pool, 0);
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
/*
|
|
|
|
* If the config changed, update the config cache.
|
|
|
|
*/
|
|
|
|
if (config_changed) {
|
|
|
|
mutex_enter(&spa_namespace_lock);
|
2016-09-22 19:30:13 +03:00
|
|
|
spa_write_cachefile(spa, B_FALSE, B_TRUE);
|
2010-05-29 00:45:14 +04:00
|
|
|
mutex_exit(&spa_namespace_lock);
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
return (error);
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* Miscellaneous functions
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
2012-12-14 03:24:15 +04:00
|
|
|
void
|
2013-12-09 22:37:51 +04:00
|
|
|
spa_activate_mos_feature(spa_t *spa, const char *feature, dmu_tx_t *tx)
|
2012-12-14 03:24:15 +04:00
|
|
|
{
|
2013-10-08 21:13:05 +04:00
|
|
|
if (!nvlist_exists(spa->spa_label_features, feature)) {
|
|
|
|
fnvlist_add_boolean(spa->spa_label_features, feature);
|
2013-12-09 22:37:51 +04:00
|
|
|
/*
|
|
|
|
* When we are creating the pool (tx_txg==TXG_INITIAL), we can't
|
|
|
|
* dirty the vdev config because lock SCL_CONFIG is not held.
|
|
|
|
* Thankfully, in this case we don't need to dirty the config
|
|
|
|
* because it will be written out anyway when we finish
|
|
|
|
* creating the pool.
|
|
|
|
*/
|
|
|
|
if (tx->tx_txg != TXG_INITIAL)
|
|
|
|
vdev_config_dirty(spa->spa_root_vdev);
|
2013-10-08 21:13:05 +04:00
|
|
|
}
|
2012-12-14 03:24:15 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_deactivate_mos_feature(spa_t *spa, const char *feature)
|
|
|
|
{
|
2013-10-08 21:13:05 +04:00
|
|
|
if (nvlist_remove_all(spa->spa_label_features, feature) == 0)
|
|
|
|
vdev_config_dirty(spa->spa_root_vdev);
|
2012-12-14 03:24:15 +04:00
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
2010-08-27 01:24:34 +04:00
|
|
|
* Return the spa_t associated with given pool_guid, if it exists. If
|
|
|
|
* device_guid is non-zero, determine whether the pool exists *and* contains
|
|
|
|
* a device with the specified device_guid.
|
2008-11-20 23:01:55 +03:00
|
|
|
*/
|
2010-08-27 01:24:34 +04:00
|
|
|
spa_t *
|
|
|
|
spa_by_guid(uint64_t pool_guid, uint64_t device_guid)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
spa_t *spa;
|
|
|
|
avl_tree_t *t = &spa_namespace_avl;
|
|
|
|
|
|
|
|
ASSERT(MUTEX_HELD(&spa_namespace_lock));
|
|
|
|
|
|
|
|
for (spa = avl_first(t); spa != NULL; spa = AVL_NEXT(t, spa)) {
|
|
|
|
if (spa->spa_state == POOL_STATE_UNINITIALIZED)
|
|
|
|
continue;
|
|
|
|
if (spa->spa_root_vdev == NULL)
|
|
|
|
continue;
|
|
|
|
if (spa_guid(spa) == pool_guid) {
|
|
|
|
if (device_guid == 0)
|
|
|
|
break;
|
|
|
|
|
|
|
|
if (vdev_lookup_by_guid(spa->spa_root_vdev,
|
|
|
|
device_guid) != NULL)
|
|
|
|
break;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check any devices we may be in the process of adding.
|
|
|
|
*/
|
|
|
|
if (spa->spa_pending_vdev) {
|
|
|
|
if (vdev_lookup_by_guid(spa->spa_pending_vdev,
|
|
|
|
device_guid) != NULL)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-08-27 01:24:34 +04:00
|
|
|
return (spa);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Determine whether a pool with the given pool_guid exists.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
spa_guid_exists(uint64_t pool_guid, uint64_t device_guid)
|
|
|
|
{
|
|
|
|
return (spa_by_guid(pool_guid, device_guid) != NULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
char *
|
|
|
|
spa_strdup(const char *s)
|
|
|
|
{
|
|
|
|
size_t len;
|
|
|
|
char *new;
|
|
|
|
|
|
|
|
len = strlen(s);
|
2014-11-21 03:09:39 +03:00
|
|
|
new = kmem_alloc(len + 1, KM_SLEEP);
|
2008-11-20 23:01:55 +03:00
|
|
|
bcopy(s, new, len);
|
|
|
|
new[len] = '\0';
|
|
|
|
|
|
|
|
return (new);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_strfree(char *s)
|
|
|
|
{
|
|
|
|
kmem_free(s, strlen(s) + 1);
|
|
|
|
}
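/*
 * Editorial usage sketch (not part of the original source): spa_strdup()
 * and spa_strfree() must be paired, since spa_strfree() relies on
 * strlen() to recover the allocation size.  For example:
 *
 *	char *name = spa_strdup(spa_name(spa));
 *	...
 *	spa_strfree(name);
 */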
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_get_random(uint64_t range)
|
|
|
|
{
|
|
|
|
uint64_t r;
|
|
|
|
|
|
|
|
ASSERT(range != 0);
|
|
|
|
|
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled,
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
These uberblocks are the last synced uberblock with an updated
timestamp. Property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to user in "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value, and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multi-Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
Add a new tunable zfs_multihost_history to specify the number of MMP
updates to store history for. By default it is set to zero, meaning that
no MMP statistics are stored.
When using ztest to generate activity, for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
function, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 06:20:35 +03:00
|
|
|
if (range == 1)
|
|
|
|
return (0);
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
(void) random_get_pseudo_bytes((void *)&r, sizeof (uint64_t));
|
|
|
|
|
|
|
|
return (r % range);
|
|
|
|
}
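/*
 * Editorial usage sketch (not part of the original source): spa_get_random()
 * returns a pseudo-random value in the range [0, range), so callers
 * typically use it to pick a random index, e.g. a random child of a vdev:
 *
 *	uint64_t c = spa_get_random(vd->vdev_children);
 *
 * The caller must guarantee range != 0, matching the ASSERT above.
 */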
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
uint64_t
|
|
|
|
spa_generate_guid(spa_t *spa)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
uint64_t guid = spa_get_random(-1ULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
if (spa != NULL) {
|
|
|
|
while (guid == 0 || spa_guid_exists(spa_guid(spa), guid))
|
|
|
|
guid = spa_get_random(-1ULL);
|
|
|
|
} else {
|
|
|
|
while (guid == 0 || spa_guid_exists(guid, 0))
|
|
|
|
guid = spa_get_random(-1ULL);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
return (guid);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2013-12-09 22:37:51 +04:00
|
|
|
snprintf_blkptr(char *buf, size_t buflen, const blkptr_t *bp)
|
2010-05-29 00:45:14 +04:00
|
|
|
{
|
2012-12-14 03:24:15 +04:00
|
|
|
char type[256];
|
2010-05-29 00:45:14 +04:00
|
|
|
char *checksum = NULL;
|
|
|
|
char *compress = NULL;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
if (bp != NULL) {
|
2012-12-14 03:24:15 +04:00
|
|
|
if (BP_GET_TYPE(bp) & DMU_OT_NEWTYPE) {
|
|
|
|
dmu_object_byteswap_t bswap =
|
|
|
|
DMU_OT_BYTESWAP(BP_GET_TYPE(bp));
|
|
|
|
(void) snprintf(type, sizeof (type), "bswap %s %s",
|
|
|
|
DMU_OT_IS_METADATA(BP_GET_TYPE(bp)) ?
|
|
|
|
"metadata" : "data",
|
|
|
|
dmu_ot_byteswap[bswap].ob_name);
|
|
|
|
} else {
|
|
|
|
(void) strlcpy(type, dmu_ot[BP_GET_TYPE(bp)].ot_name,
|
|
|
|
sizeof (type));
|
|
|
|
}
|
2014-06-06 01:19:08 +04:00
|
|
|
if (!BP_IS_EMBEDDED(bp)) {
|
|
|
|
checksum =
|
|
|
|
zio_checksum_table[BP_GET_CHECKSUM(bp)].ci_name;
|
|
|
|
}
|
2010-05-29 00:45:14 +04:00
|
|
|
compress = zio_compress_table[BP_GET_COMPRESS(bp)].ci_name;
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2013-12-09 22:37:51 +04:00
|
|
|
SNPRINTF_BLKPTR(snprintf, ' ', buf, buflen, bp, type, checksum,
|
2018-04-06 23:30:26 +03:00
|
|
|
compress);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_freeze(spa_t *spa)
|
|
|
|
{
|
|
|
|
uint64_t freeze_txg = 0;
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
|
2008-11-20 23:01:55 +03:00
|
|
|
if (spa->spa_freeze_txg == UINT64_MAX) {
|
|
|
|
freeze_txg = spa_last_synced_txg(spa) + TXG_SIZE;
|
|
|
|
spa->spa_freeze_txg = freeze_txg;
|
|
|
|
}
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_exit(spa, SCL_ALL, FTAG);
|
2008-11-20 23:01:55 +03:00
|
|
|
if (freeze_txg != 0)
|
|
|
|
txg_wait_synced(spa_get_dsl(spa), freeze_txg);
|
|
|
|
}
|
|
|
|
|
Swap DTRACE_PROBE* with Linux tracepoints
This patch leverages Linux tracepoints from within the ZFS on Linux
code base. It also refactors the debug code to bring it back in sync
with Illumos.
The information exported via tracepoints can be used for a variety of
reasons (e.g. debugging, tuning, general exploration/understanding,
etc). It is advantageous to use Linux tracepoints as the mechanism to
export this kind of information (as opposed to something else) for a
number of reasons:
* A number of external tools can make use of our tracepoints
"automatically" (e.g. perf, systemtap)
* Tracepoints are designed to be extremely cheap when disabled
* It's one of the "accepted" ways to export this kind of
information; many other kernel subsystems use tracepoints too.
Unfortunately, though, there are a few caveats as well:
* Linux tracepoints appear to only be available to GPL licensed
modules due to the way certain kernel functions are exported.
Thus, to actually make use of the tracepoints introduced by this
patch, one might have to patch and re-compile the kernel;
exporting the necessary functions to non-GPL modules.
* Prior to upstream kernel version v3.14-rc6-30-g66cc69e, Linux
tracepoints are not available for unsigned kernel modules
(tracepoints will get disabled due to the module's 'F' taint).
Thus, one either has to sign the zfs kernel module prior to
loading it, or use a kernel versioned v3.14-rc6-30-g66cc69e or
newer.
Assuming the above two requirements are satisfied, lets look at an
example of how this patch can be used and what information it exposes
(all commands run as 'root'):
# list all zfs tracepoints available
$ ls /sys/kernel/debug/tracing/events/zfs
enable filter zfs_arc__delete
zfs_arc__evict zfs_arc__hit zfs_arc__miss
zfs_l2arc__evict zfs_l2arc__hit zfs_l2arc__iodone
zfs_l2arc__miss zfs_l2arc__read zfs_l2arc__write
zfs_new_state__mfu zfs_new_state__mru
# enable all zfs tracepoints, clear the tracepoint ring buffer
$ echo 1 > /sys/kernel/debug/tracing/events/zfs/enable
$ echo 0 > /sys/kernel/debug/tracing/trace
# import zpool called 'tank', inspect tracepoint data (each line was
# truncated, they're too long for a commit message otherwise)
$ zpool import tank
$ cat /sys/kernel/debug/tracing/trace | head -n35
# tracer: nop
#
# entries-in-buffer/entries-written: 1219/1219 #P:8
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
lt-zpool-30132 [003] .... 91344.200050: zfs_arc__miss: hdr...
z_rd_int/0-30156 [003] .... 91344.200611: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.201173: zfs_arc__miss: hdr...
z_rd_int/1-30157 [003] .... 91344.201756: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.201795: zfs_arc__miss: hdr...
z_rd_int/2-30158 [003] .... 91344.202099: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202126: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202130: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202134: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202146: zfs_arc__miss: hdr...
z_rd_int/3-30159 [003] .... 91344.202457: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202484: zfs_arc__miss: hdr...
z_rd_int/4-30160 [003] .... 91344.202866: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202891: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.203034: zfs_arc__miss: hdr...
z_rd_iss/1-30149 [001] .... 91344.203749: zfs_new_state__mru...
lt-zpool-30132 [001] .... 91344.203789: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.203878: zfs_arc__miss: hdr...
z_rd_iss/3-30151 [001] .... 91344.204315: zfs_new_state__mru...
lt-zpool-30132 [001] .... 91344.204332: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204337: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204352: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204356: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204360: zfs_arc__hit: hdr ...
To highlight the kind of detailed information that is being exported
using this infrastructure, I've taken the first tracepoint line from the
output above and reformatted it such that it fits in 80 columns:
lt-zpool-30132 [003] .... 91344.200050: zfs_arc__miss:
hdr {
dva 0x1:0x40082
birth 15491
cksum0 0x163edbff3a
flags 0x640
datacnt 1
type 1
size 2048
spa 3133524293419867460
state_type 0
access 0
mru_hits 0
mru_ghost_hits 0
mfu_hits 0
mfu_ghost_hits 0
l2_hits 0
refcount 1
} bp {
dva0 0x1:0x40082
dva1 0x1:0x3000e5
dva2 0x1:0x5a006e
cksum 0x163edbff3a:0x75af30b3dd6:0x1499263ff5f2b:0x288bd118815e00
lsize 2048
} zb {
objset 0
object 0
level -1
blkid 0
}
For the specific tracepoint shown here, 'zfs_arc__miss', data is
exported detailing the arc_buf_hdr_t (hdr), blkptr_t (bp), and
zbookmark_t (zb) that caused the ARC miss (down to the exact DVA!).
This kind of precise and detailed information can be extremely valuable
when trying to answer certain kinds of questions.
For anybody unfamiliar but looking to build on this, I found the XFS
source code along with the following three web links to be extremely
helpful:
* http://lwn.net/Articles/379903/
* http://lwn.net/Articles/381064/
* http://lwn.net/Articles/383362/
I should also note the more "boring" aspects of this patch:
* The ZFS_LINUX_COMPILE_IFELSE autoconf macro was modified to
support a sixth parameter. This parameter is used to populate the
contents of the new conftest.h file. If no sixth parameter is
provided, conftest.h will be empty.
* The ZFS_LINUX_TRY_COMPILE_HEADER autoconf macro was introduced.
This macro is nearly identical to the ZFS_LINUX_TRY_COMPILE macro,
except it has support for a fifth option that is then passed as
the sixth parameter to ZFS_LINUX_COMPILE_IFELSE.
These autoconf changes were needed to test the availability of the Linux
tracepoint macros. Due to the odd nature of the Linux tracepoint macro
API, a separate ".h" must be created (the path and filename is used
internally by the kernel's define_trace.h file).
* The HAVE_DECLARE_EVENT_CLASS autoconf macro was introduced. This
is to determine if we can safely enable the Linux tracepoint
functionality. We need to selectively disable the tracepoint code
due to the kernel exporting certain functions as GPL only. Without
this check, the build process will fail at link time.
In addition, the SET_ERROR macro was modified into a tracepoint as well.
To do this, the 'sdt.h' file was moved into the 'include/sys' directory
and now contains a userspace portion and a kernel space portion. The
dprintf and zfs_dbgmsg* interfaces are now implemented as tracepoints as
well.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2014-06-13 21:54:48 +04:00
|
|
|
void
|
|
|
|
zfs_panic_recover(const char *fmt, ...)
|
|
|
|
{
|
|
|
|
va_list adx;
|
|
|
|
|
|
|
|
va_start(adx, fmt);
|
|
|
|
vcmn_err(zfs_recover ? CE_WARN : CE_PANIC, fmt, adx);
|
|
|
|
va_end(adx);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
/*
|
|
|
|
* This is a stripped-down version of strtoull, suitable only for converting
|
2013-06-11 21:12:34 +04:00
|
|
|
* lowercase hexadecimal numbers that don't overflow.
|
2010-05-29 00:45:14 +04:00
|
|
|
*/
|
|
|
|
uint64_t
|
2017-06-13 06:16:28 +03:00
|
|
|
zfs_strtonum(const char *str, char **nptr)
|
2010-05-29 00:45:14 +04:00
|
|
|
{
|
|
|
|
uint64_t val = 0;
|
|
|
|
char c;
|
|
|
|
int digit;
|
|
|
|
|
|
|
|
while ((c = *str) != '\0') {
|
|
|
|
if (c >= '0' && c <= '9')
|
|
|
|
digit = c - '0';
|
|
|
|
else if (c >= 'a' && c <= 'f')
|
|
|
|
digit = 10 + c - 'a';
|
|
|
|
else
|
|
|
|
break;
|
|
|
|
|
|
|
|
val *= 16;
|
|
|
|
val += digit;
|
|
|
|
|
|
|
|
str++;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nptr)
|
|
|
|
*nptr = (char *)str;
|
|
|
|
|
|
|
|
return (val);
|
|
|
|
}
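/*
 * Editorial worked example (not part of the original source): given the
 * loop above, zfs_strtonum() consumes a lowercase hexadecimal prefix and
 * reports the first unparsed character through nptr:
 *
 *	char *end;
 *	uint64_t v = zfs_strtonum("1a2f/0", &end);
 *
 * Here v == 0x1a2f and *end == '/'.
 */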
|
|
|
|
|
2018-09-06 04:33:36 +03:00
|
|
|
void
|
|
|
|
spa_activate_allocation_classes(spa_t *spa, dmu_tx_t *tx)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* We bump the feature refcount for each special vdev added to the pool
|
|
|
|
*/
|
|
|
|
ASSERT(spa_feature_is_enabled(spa, SPA_FEATURE_ALLOCATION_CLASSES));
|
|
|
|
spa_feature_incr(spa, SPA_FEATURE_ALLOCATION_CLASSES, tx);
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* Accessor functions
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
boolean_t
|
|
|
|
spa_shutting_down(spa_t *spa)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2008-12-03 23:09:06 +03:00
|
|
|
return (spa->spa_async_suspended);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
dsl_pool_t *
|
|
|
|
spa_get_dsl(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_dsl_pool);
|
|
|
|
}
|
|
|
|
|
2012-12-14 03:24:15 +04:00
|
|
|
boolean_t
|
|
|
|
spa_is_initializing(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_is_initializing);
|
|
|
|
}
|
|
|
|
|
OpenZFS 7614, 9064 - zfs device evacuation/removal
OpenZFS 7614 - zfs device evacuation/removal
OpenZFS 9064 - remove_mirror should wait for device removal to complete
This project allows top-level vdevs to be removed from the storage pool
with "zpool remove", reducing the total amount of storage in the pool.
This operation copies all allocated regions of the device to be removed
onto other devices, recording the mapping from old to new location.
After the removal is complete, read and free operations to the removed
(now "indirect") vdev must be remapped and performed at the new location
on disk. The indirect mapping table is kept in memory whenever the pool
is loaded, so there is minimal performance overhead when doing operations
on the indirect vdev.
The size of the in-memory mapping table will be reduced when its entries
become "obsolete" because they are no longer used by any block pointers
in the pool. An entry becomes obsolete when all the blocks that use
it are freed. An entry can also become obsolete when all the snapshots
that reference it are deleted, and the block pointers that reference it
have been "remapped" in all filesystems/zvols (and clones). Whenever an
indirect block is written, all the block pointers in it will be "remapped"
to their new (concrete) locations if possible. This process can be
accelerated by using the "zfs remap" command to proactively rewrite all
indirect blocks that reference indirect (removed) vdevs.
Note that when a device is removed, we do not verify the checksum of
the data that is copied. This makes the process much faster, but if it
were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be
possible to copy the wrong data, when we have the correct data on e.g.
the other side of the mirror.
At the moment, only mirrors and simple top-level vdevs can be removed
and no removal is allowed if any of the top-level vdevs are raidz.
Porting Notes:
* Avoid zero-sized kmem_alloc() in vdev_compact_children().
The device evacuation code adds a dependency that
vdev_compact_children() be able to properly empty the vdev_child
array by setting it to NULL and zeroing vdev_children. Under Linux,
kmem_alloc() and related functions return a sentinel pointer rather
than NULL for zero-sized allocations.
* Remove comment regarding "mpt" driver where zfs_remove_max_segment
is initialized to SPA_MAXBLOCKSIZE.
Change zfs_condense_indirect_commit_entry_delay_ticks to
zfs_condense_indirect_commit_entry_delay_ms for consistency with
most other tunables in which delays are specified in ms.
* ZTS changes:
Use set_tunable rather than mdb
Use zpool sync as appropriate
Use sync_pool instead of sync
Kill jobs during test_removal_with_operation to allow unmount/export
Don't add non-disk names such as "mirror" or "raidz" to $DISKS
Use $TEST_BASE_DIR instead of /tmp
Increase HZ from 100 to 1000 which is more common on Linux
removal_multiple_indirection.ksh
Reduce iterations in order to not time out on the code
coverage builders.
removal_resume_export:
Functionally, the test case is correct but there exists a race
where the kernel thread hasn't been fully started yet and is
not visible. Wait for up to 1 second for the removal thread
to be started before giving up on it. Also, increase the
amount of data copied in order that the removal not finish
before the export has a chance to fail.
* MMP compatibility, the concept of concrete versus non-concrete devices
has slightly changed the semantics of vdev_writeable(). Update
mmp_random_leaf_impl() accordingly.
* Updated dbuf_remap() to handle the org.zfsonlinux:large_dnode pool
feature which is not supported by OpenZFS.
* Added support for new vdev removal tracepoints.
* Test cases removal_with_zdb and removal_condense_export have been
intentionally disabled. When run manually they pass as intended,
but when running in the automated test environment they produce
unreliable results on the latest Fedora release.
They may work better once the upstream pool import refactoring is
merged into ZoL at which point they will be re-enabled.
Authored by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Alex Reece <alex@delphix.com>
Reviewed-by: George Wilson <george.wilson@delphix.com>
Reviewed-by: John Kennedy <john.kennedy@delphix.com>
Reviewed-by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Richard Laager <rlaager@wiktel.com>
Reviewed by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Garrett D'Amore <garrett@damore.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://www.illumos.org/issues/7614
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/f539f1eb
Closes #6900
2016-09-22 19:30:13 +03:00
|
|
|
boolean_t
|
|
|
|
spa_indirect_vdevs_loaded(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_indirect_vdevs_loaded);
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
blkptr_t *
|
|
|
|
spa_get_rootblkptr(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (&spa->spa_ubsync.ub_rootbp);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_set_rootblkptr(spa_t *spa, const blkptr_t *bp)
|
|
|
|
{
|
|
|
|
spa->spa_uberblock.ub_rootbp = *bp;
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_altroot(spa_t *spa, char *buf, size_t buflen)
|
|
|
|
{
|
|
|
|
if (spa->spa_root == NULL)
|
|
|
|
buf[0] = '\0';
|
|
|
|
else
|
|
|
|
(void) strncpy(buf, spa->spa_root, buflen);
|
|
|
|
}
|
|
|
|
|
|
|
|
int
|
|
|
|
spa_sync_pass(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_sync_pass);
|
|
|
|
}
|
|
|
|
|
|
|
|
char *
|
|
|
|
spa_name(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_name);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_guid(spa_t *spa)
|
|
|
|
{
|
2012-12-15 00:38:04 +04:00
|
|
|
dsl_pool_t *dp = spa_get_dsl(spa);
|
|
|
|
uint64_t guid;
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* If we fail to parse the config during spa_load(), we can go through
|
|
|
|
* the error path (which posts an ereport) and end up here with no root
|
2011-11-12 02:07:54 +04:00
|
|
|
* vdev. We stash the original pool guid in 'spa_config_guid' to handle
|
2008-11-20 23:01:55 +03:00
|
|
|
* this case.
|
|
|
|
*/
|
2012-12-15 00:38:04 +04:00
|
|
|
if (spa->spa_root_vdev == NULL)
|
|
|
|
return (spa->spa_config_guid);
|
|
|
|
|
|
|
|
guid = spa->spa_last_synced_guid != 0 ?
|
|
|
|
spa->spa_last_synced_guid : spa->spa_root_vdev->vdev_guid;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return the most recently synced out guid unless we're
|
|
|
|
* in syncing context.
|
|
|
|
*/
|
|
|
|
if (dp && dsl_pool_sync_context(dp))
|
2008-11-20 23:01:55 +03:00
|
|
|
return (spa->spa_root_vdev->vdev_guid);
|
|
|
|
else
|
2012-12-15 00:38:04 +04:00
|
|
|
return (guid);
|
2011-11-12 02:07:54 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_load_guid(spa_t *spa)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* This is a GUID that exists solely as a reference for the
|
|
|
|
* purposes of the arc. It is generated at load time, and
|
|
|
|
* is never written to persistent storage.
|
|
|
|
*/
|
|
|
|
return (spa->spa_load_guid);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_last_synced_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_ubsync.ub_txg);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_first_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_first_txg);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
uint64_t
|
|
|
|
spa_syncing_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_syncing_txg);
|
|
|
|
}
|
|
|
|
|
2017-04-07 23:50:18 +03:00
|
|
|
/*
|
|
|
|
* Return the last txg where data can be dirtied. The final txgs
|
|
|
|
* will be used to just clear out any deferred frees that remain.
|
|
|
|
*/
|
|
|
|
uint64_t
|
|
|
|
spa_final_dirty_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_final_txg - TXG_DEFER_SIZE);
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
pool_state_t
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_state(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_state);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
spa_load_state_t
|
|
|
|
spa_load_state(spa_t *spa)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
return (spa->spa_load_state);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
2010-05-29 00:45:14 +04:00
|
|
|
spa_freeze_txg(spa_t *spa)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
return (spa->spa_freeze_txg);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
Fix size inflation in spa_get_worst_case_asize()
When we try assign a new transaction to a TXG we must know beforehand
if there is sufficient free space on disk. This is to decide,
in dmu_tx_assign(), if we should reject the TX with ENOSPC.
We rely on spa_get_worst_case_asize() to inflate the size of our
logical writes by a factor of spa_asize_inflation which is
calculated as:
(VDEV_RAIDZ_MAXPARITY + 1) * SPA_DVAS_PER_BP * 2 == 24
The problem with the current implementation is that we don't take
into account what happens with very small writes on VDEVs with large
physical block sizes.
Consider the case of writes to a dataset with recordsize=512,
copies=3 on a VDEV with ashift=13 (usually SSD with 8K block size):
every logical IO will end up allocating 3 * 8K = 24K on disk, so 512
bytes multiplied by 48, which is double the size we account for.
If we allow this kind of writes to be assigned a TX it is possible,
when the pool is almost full, to trigger an allocation failure
(ENOSPC) in the ZIO pipeline, which will in turn result in the whole
pool being suspended.
The bug is fixed by using, in spa_get_worst_case_asize(), the MAX()
value chosen between the logical io size from zfs_write() and the
maximum physical block size used among our VDEVs.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #5941
2017-04-11 01:28:21 +03:00
|
|
|
/*
|
|
|
|
* Return the inflated asize for a logical write in bytes. This is used by the
|
|
|
|
* DMU to calculate the space a logical write will require on disk.
|
|
|
|
* If lsize is smaller than the largest physical block size allocatable on this
|
|
|
|
* pool we use its value instead, since the write will end up using the whole
|
|
|
|
* block anyway.
|
|
|
|
*/
|
2008-11-20 23:01:55 +03:00
|
|
|
uint64_t
|
OpenZFS 7793 - ztest fails assertion in dmu_tx_willuse_space
Reviewed by: Steve Gonczi <steve.gonczi@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Ported-by: Brian Behlendorf <behlendorf1@llnl.gov>
Background information: This assertion about tx_space_* verifies that we
are not dirtying more stuff than we thought we would. We “need” to know
how much we will dirty so that we can check if we should fail this
transaction with ENOSPC/EDQUOT, in dmu_tx_assign(). While the
transaction is open (i.e. between dmu_tx_assign() and dmu_tx_commit() —
typically less than a millisecond), we call dbuf_dirty() on the exact
blocks that will be modified. Once this happens, the temporary
accounting in tx_space_* is unnecessary, because we know exactly what
blocks are newly dirtied; we call dnode_willuse_space() to track this
more exact accounting.
The fundamental problem causing this bug is that dmu_tx_hold_*() relies
on the current state in the DMU (e.g. dn_nlevels) to predict how much
will be dirtied by this transaction, but this state can change before we
actually perform the transaction (i.e. call dbuf_dirty()).
This bug will be fixed by removing the assertion that the tx_space_*
accounting is perfectly accurate (i.e. we never dirty more than was
predicted by dmu_tx_hold_*()). By removing the requirement that this
accounting be perfectly accurate, we can also vastly simplify it, e.g.
removing most of the logic in dmu_tx_count_*().
The new tx space accounting will be very approximate, and may be more or
less than what is actually dirtied. It will still be used to determine
if this transaction will put us over quota. Transactions that are marked
by dmu_tx_mark_netfree() will be excepted from this check. We won’t make
an attempt to determine how much space will be freed by the transaction
— this was rarely accurate enough to determine if a transaction should
be permitted when we are over quota, which is why dmu_tx_mark_netfree()
was introduced in 2014.
We also won’t attempt to give “credit” when overwriting existing blocks,
if those blocks may be freed. This allows us to remove the
do_free_accounting logic in dbuf_dirty(), and associated routines. This
logic attempted to predict what will be on disk when this txg syncs, to
know if the overwritten block will be freed (i.e. exists, and has no
snapshots).
OpenZFS-issue: https://www.illumos.org/issues/7793
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/3704e0a
Upstream bugs: DLPX-32883a
Closes #5804
Porting notes:
- DNODE_SIZE replaced with DNODE_MIN_SIZE in dmu_tx_count_dnode().
Using the default dnode size would be slightly better.
- DEBUG_DMU_TX wrappers and configure option removed.
- Resolved _by_dnode() conflicts; these changes have not yet been
applied to OpenZFS.
2017-03-07 20:51:59 +03:00
|
|
|
spa_get_worst_case_asize(spa_t *spa, uint64_t lsize)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2017-04-11 01:28:21 +03:00
|
|
|
if (lsize == 0)
|
|
|
|
return (0); /* No inflation needed */
|
|
|
|
return (MAX(lsize, 1 << spa->spa_max_ashift) * spa_asize_inflation);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
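/*
 * Editorial worked example (not part of the original source), restating
 * the commit message above: with recordsize=512, copies=3 and ashift=13
 * (8K physical blocks), a 512-byte logical write really consumes
 * 3 * 8K = 24K on disk.  Inflating the raw lsize would give only
 * 512 * spa_asize_inflation (24) = 12K, half the true cost; using
 * MAX(lsize, 1 << spa_max_ashift) inflates 8K * 24 = 192K instead,
 * which safely over-estimates the 24K actually allocated.
 */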
|
|
|
|
|
2014-11-03 23:28:43 +03:00
|
|
|
/*
|
|
|
|
* Return the amount of slop space in bytes. It is 1/32 of the pool (3.125%),
|
2016-07-14 02:48:01 +03:00
|
|
|
* or at least 128MB, unless that would cause it to be more than half the
|
|
|
|
* pool size.
|
2014-11-03 23:28:43 +03:00
|
|
|
*
|
|
|
|
* See the comment above spa_slop_shift for details.
|
|
|
|
*/
|
|
|
|
uint64_t
|
2017-01-21 00:17:55 +03:00
|
|
|
spa_get_slop_space(spa_t *spa)
|
|
|
|
{
|
2014-11-03 23:28:43 +03:00
|
|
|
uint64_t space = spa_get_dspace(spa);
|
2016-07-14 02:48:01 +03:00
|
|
|
return (MAX(space >> spa_slop_shift, MIN(space >> 1, spa_min_slop)));
|
2014-11-03 23:28:43 +03:00
|
|
|
}
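/*
 * Editorial worked example (not part of the original source), assuming
 * the default spa_slop_shift of 5 and spa_min_slop of 128M: a 1T pool
 * reserves MAX(1T >> 5, MIN(512G, 128M)) = 32G of slop, while a 1G pool
 * reserves MAX(32M, MIN(512M, 128M)) = 128M.
 */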
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
uint64_t
|
|
|
|
spa_get_dspace(spa_t *spa)
|
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
return (spa->spa_dspace);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
2016-12-17 01:11:29 +03:00
|
|
|
uint64_t
|
|
|
|
spa_get_checkpoint_space(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_checkpoint_info.sci_dspace);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
void
|
|
|
|
spa_update_dspace(spa_t *spa)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
spa->spa_dspace = metaslab_class_get_dspace(spa_normal_class(spa)) +
|
|
|
|
ddt_get_dedup_dspace(spa);
|
2016-09-22 19:30:13 +03:00
|
|
|
if (spa->spa_vdev_removal != NULL) {
|
|
|
|
/*
|
|
|
|
* We can't allocate from the removing device, so
|
|
|
|
* subtract its size. This prevents the DMU/DSL from
|
|
|
|
* filling up the (now smaller) pool while we are in the
|
|
|
|
* middle of removing the device.
|
|
|
|
*
|
|
|
|
* Note that the DMU/DSL doesn't actually know or care
|
|
|
|
* how much space is allocated (it does its own tracking
|
|
|
|
* of how much space has been logically used). So it
|
|
|
|
* doesn't matter that the data we are moving may be
|
|
|
|
* allocated twice (on the old device and the new
|
|
|
|
* device).
|
|
|
|
*/
|
OpenZFS 9290 - device removal reduces redundancy of mirrors
Mirrors are supposed to provide redundancy in the face of whole-disk
failure and silent damage (e.g. some data on disk is not right, but ZFS
hasn't detected the whole device as being broken). However, the current
device removal implementation bypasses some of the mirror's redundancy.
Note that in no case is incorrect data returned, but we might get a
checksum error when we should have been able to find the right data.
There are two underlying problems:
1. When we remove a mirror device, we only read one side of the mirror.
Since we can't verify the checksum, this side may be silently bad, but
the good data is on the other side of the mirror (which we didn't read).
This can cause the removal to "bake in" the busted data – all copies of
the data in the new location are the same, busted version, while we left
the good version behind.
The fix for this is to read and copy both sides of the mirror. If the
old and new vdevs are mirrors, we will read both sides of the old
mirror, and write each copy to the corresponding side of the new mirror.
(If the old and new vdevs have a different number of children, we will
do this as best as possible.) Even though we aren't verifying checksums,
this ensures that as long as there's a good copy of the data, we'll have
a good copy after the removal, even if there's silent damage to one side
of the mirror. If we're removing a mirror that has some silent damage,
we'll have exactly the same damage in the new location (assuming that
the new location is also a mirror).
2. When we read from an indirect vdev that points to a mirror vdev, we
only consider one copy of the data. This can lead to reduced effective
redundancy, because we might read a bad copy of the data from one side
of the mirror, and not retry the other, good side of the mirror.
Note that the problem is not with the removal process, but rather after
the removal has completed (having copied correct data to both sides of
the mirror), if one side of the new mirror is silently damaged, we
encounter the problem when reading the relocated data via the indirect
vdev. Also note that the problem doesn't occur when ZFS knows that one
side of the mirror is bad, e.g. when a disk entirely fails or is
offlined.
The impact is that reads (from indirect vdevs that point to mirrors) may
return a checksum error even though the good data exists on one side of
the mirror, and scrub doesn't repair all data on the mirror (if some of
it is pointed to via an indirect vdev).
The fix for this is complicated by "split blocks" - one logical block
may be split into two (or more) pieces with each piece moved to a
different new location. In this case we need to read all versions of
each split (one from each side of the mirror), and figure out which
combination of versions results in the correct checksum, and then repair
the incorrect versions.
This ensures that we supply the same redundancy whether you use device
removal or not. For example, if a mirror has small silent errors on all
of its children, we can still reconstruct the correct data, as long as
those errors are at sufficiently-separated offsets (specifically,
separated by the largest block size - default of 128KB, but up to 16MB).
Porting notes:
* A new indirect vdev check was moved from dsl_scan_needs_resilver_cb()
to dsl_scan_needs_resilver(), which was added to ZoL as part of the
sequential scrub work.
* Passed NULL for zfs_ereport_post_checksum()'s zbookmark_phys_t
parameter. The extra parameter is unique to ZoL.
* When posting indirect checksum errors the ABD can be passed directly,
zfs_ereport_post_checksum() is not yet ABD-aware in OpenZFS.
Authored by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Tim Chase <tim@chase2k.com>
Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Ported-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9290
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/591
Closes #6900
2018-02-13 22:37:56 +03:00
|
|
|
spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
|
|
|
|
vdev_t *vd =
|
|
|
|
vdev_lookup_top(spa, spa->spa_vdev_removal->svr_vdev_id);
|
2016-09-22 19:30:13 +03:00
|
|
|
spa->spa_dspace -= spa_deflate(spa) ?
|
|
|
|
vd->vdev_stat.vs_dspace : vd->vdev_stat.vs_space;
|
2018-02-13 22:37:56 +03:00
|
|
|
spa_config_exit(spa, SCL_VDEV, FTAG);
|
2016-09-22 19:30:13 +03:00
|
|
|
}
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return the failure mode that has been set to this pool. The default
|
|
|
|
* behavior will be to block all I/Os when a complete failure occurs.
|
|
|
|
*/
|
2017-12-19 01:06:07 +03:00
|
|
|
uint64_t
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_get_failmode(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_failmode);
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
boolean_t
|
|
|
|
spa_suspended(spa_t *spa)
|
|
|
|
{
|
2018-03-15 20:56:55 +03:00
|
|
|
return (spa->spa_suspended != ZIO_SUSPEND_NONE);
|
2008-12-03 23:09:06 +03:00
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
uint64_t
|
|
|
|
spa_version(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_ubsync.ub_version);
|
|
|
|
}
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
boolean_t
|
|
|
|
spa_deflate(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_deflate);
|
|
|
|
}
|
|
|
|
|
|
|
|
metaslab_class_t *
|
|
|
|
spa_normal_class(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_normal_class);
|
|
|
|
}
|
|
|
|
|
|
|
|
metaslab_class_t *
|
|
|
|
spa_log_class(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_log_class);
|
|
|
|
}
|
|
|
|
|
2018-09-06 04:33:36 +03:00
|
|
|
metaslab_class_t *
|
|
|
|
spa_special_class(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_special_class);
|
|
|
|
}
|
|
|
|
|
|
|
|
metaslab_class_t *
|
|
|
|
spa_dedup_class(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_dedup_class);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Locate an appropriate allocation class
|
|
|
|
*/
|
|
|
|
metaslab_class_t *
|
|
|
|
spa_preferred_class(spa_t *spa, uint64_t size, dmu_object_type_t objtype,
|
|
|
|
uint_t level, uint_t special_smallblk)
|
|
|
|
{
|
|
|
|
if (DMU_OT_IS_ZIL(objtype)) {
|
|
|
|
if (spa->spa_log_class->mc_groups != 0)
|
|
|
|
return (spa_log_class(spa));
|
|
|
|
else
|
|
|
|
return (spa_normal_class(spa));
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t has_special_class = spa->spa_special_class->mc_groups != 0;
|
|
|
|
|
|
|
|
if (DMU_OT_IS_DDT(objtype)) {
|
|
|
|
if (spa->spa_dedup_class->mc_groups != 0)
|
|
|
|
return (spa_dedup_class(spa));
|
|
|
|
else if (has_special_class && zfs_ddt_data_is_special)
|
|
|
|
return (spa_special_class(spa));
|
|
|
|
else
|
|
|
|
return (spa_normal_class(spa));
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Indirect blocks for user data can land in special if allowed */
|
|
|
|
if (level > 0 && (DMU_OT_IS_FILE(objtype) || objtype == DMU_OT_ZVOL)) {
|
|
|
|
if (has_special_class && zfs_user_indirect_is_special)
|
|
|
|
return (spa_special_class(spa));
|
|
|
|
else
|
|
|
|
return (spa_normal_class(spa));
|
|
|
|
}
|
|
|
|
|
|
|
|
if (DMU_OT_IS_METADATA(objtype) || level > 0) {
|
|
|
|
if (has_special_class)
|
|
|
|
return (spa_special_class(spa));
|
|
|
|
else
|
|
|
|
return (spa_normal_class(spa));
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Allow small file blocks in special class in some cases (like
|
|
|
|
* for the dRAID vdev feature). But always leave a reserve of
|
|
|
|
* zfs_special_class_metadata_reserve_pct exclusively for metadata.
|
|
|
|
*/
|
|
|
|
if (DMU_OT_IS_FILE(objtype) &&
|
2019-02-08 23:32:12 +03:00
|
|
|
has_special_class && size <= special_smallblk) {
|
2018-09-06 04:33:36 +03:00
|
|
|
metaslab_class_t *special = spa_special_class(spa);
|
|
|
|
uint64_t alloc = metaslab_class_get_alloc(special);
|
|
|
|
uint64_t space = metaslab_class_get_space(special);
|
|
|
|
uint64_t limit =
|
|
|
|
(space * (100 - zfs_special_class_metadata_reserve_pct))
|
|
|
|
/ 100;
|
|
|
|
|
|
|
|
if (alloc < limit)
|
|
|
|
return (special);
|
|
|
|
}
|
|
|
|
|
|
|
|
return (spa_normal_class(spa));
|
|
|
|
}
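/*
 * Illustrative sketch, not part of the original source: a hypothetical
 * caller asking where a 16K level-0 file block should be placed when the
 * special_smallblk cutoff is 32K.  With a special vdev present, such a
 * block is eligible for the special class until the metadata reserve is
 * reached; e.g. with zfs_special_class_metadata_reserve_pct = 25, small
 * file blocks stop landing in a 100G special class once roughly 75G of it
 * has been allocated, per the limit computation above.
 */
static metaslab_class_t *
example_small_block_class(spa_t *spa)
{
	/* 16K user-data block at level 0; 32K cutoff (example values). */
	return (spa_preferred_class(spa, 16 << 10, DMU_OT_PLAIN_FILE_CONTENTS,
	    0, 32 << 10));
}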
|
|
|
|
|
2015-04-02 06:44:32 +03:00
|
|
|
void
|
|
|
|
spa_evicting_os_register(spa_t *spa, objset_t *os)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa->spa_evicting_os_lock);
|
|
|
|
list_insert_head(&spa->spa_evicting_os_list, os);
|
|
|
|
mutex_exit(&spa->spa_evicting_os_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_evicting_os_deregister(spa_t *spa, objset_t *os)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa->spa_evicting_os_lock);
|
|
|
|
list_remove(&spa->spa_evicting_os_list, os);
|
|
|
|
cv_broadcast(&spa->spa_evicting_os_cv);
|
|
|
|
mutex_exit(&spa->spa_evicting_os_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_evicting_os_wait(spa_t *spa)
|
|
|
|
{
|
|
|
|
mutex_enter(&spa->spa_evicting_os_lock);
|
|
|
|
while (!list_is_empty(&spa->spa_evicting_os_list))
|
|
|
|
cv_wait(&spa->spa_evicting_os_cv, &spa->spa_evicting_os_lock);
|
|
|
|
mutex_exit(&spa->spa_evicting_os_lock);
|
|
|
|
|
|
|
|
dmu_buf_user_evict_wait();
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
int
|
|
|
|
spa_max_replication(spa_t *spa)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* As of SPA_VERSION == SPA_VERSION_DITTO_BLOCKS, we are able to
|
|
|
|
* handle BPs with more than one DVA allocated. Set our max
|
|
|
|
* replication level accordingly.
|
|
|
|
*/
|
|
|
|
if (spa_version(spa) < SPA_VERSION_DITTO_BLOCKS)
|
|
|
|
return (1);
|
|
|
|
return (MIN(SPA_DVAS_PER_BP, spa_max_replication_override));
|
|
|
|
}
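/*
 * Illustrative sketch, not part of the original source: a hypothetical
 * helper clamping a requested number of block copies to what the pool can
 * actually express, using the same MIN() macro as above.
 */
static int
example_clamp_copies(spa_t *spa, int requested_copies)
{
	return (MIN(requested_copies, spa_max_replication(spa)));
}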
|
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
int
|
|
|
|
spa_prev_software_version(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_prev_software_version);
|
|
|
|
}
|
|
|
|
|
2013-04-30 02:49:23 +04:00
|
|
|
uint64_t
|
|
|
|
spa_deadman_synctime(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_deadman_synctime);
|
|
|
|
}
|
|
|
|
|
2017-12-19 01:06:07 +03:00
|
|
|
uint64_t
|
|
|
|
spa_deadman_ziotime(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_deadman_ziotime);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_get_deadman_failmode(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_deadman_failmode);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_set_deadman_failmode(spa_t *spa, const char *failmode)
|
|
|
|
{
|
|
|
|
if (strcmp(failmode, "wait") == 0)
|
|
|
|
spa->spa_deadman_failmode = ZIO_FAILURE_MODE_WAIT;
|
|
|
|
else if (strcmp(failmode, "continue") == 0)
|
|
|
|
spa->spa_deadman_failmode = ZIO_FAILURE_MODE_CONTINUE;
|
|
|
|
else if (strcmp(failmode, "panic") == 0)
|
|
|
|
spa->spa_deadman_failmode = ZIO_FAILURE_MODE_PANIC;
|
|
|
|
else
|
|
|
|
spa->spa_deadman_failmode = ZIO_FAILURE_MODE_WAIT;
|
|
|
|
}
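/*
 * Illustrative sketch, not part of the original source: a hypothetical
 * round trip through the setter and getter above.  Any string other than
 * "wait", "continue" or "panic" falls back to ZIO_FAILURE_MODE_WAIT.
 */
static boolean_t
example_deadman_failmode_roundtrip(spa_t *spa)
{
	spa_set_deadman_failmode(spa, "panic");
	return (spa_get_deadman_failmode(spa) == ZIO_FAILURE_MODE_PANIC);
}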
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
uint64_t
|
2010-05-29 00:45:14 +04:00
|
|
|
dva_get_dsize_sync(spa_t *spa, const dva_t *dva)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
2010-05-29 00:45:14 +04:00
|
|
|
uint64_t asize = DVA_GET_ASIZE(dva);
|
|
|
|
uint64_t dsize = asize;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
if (asize != 0 && spa->spa_deflate) {
|
|
|
|
vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
|
2014-05-05 22:28:12 +04:00
|
|
|
if (vd != NULL)
|
|
|
|
dsize = (asize >> SPA_MINBLOCKSHIFT) *
|
|
|
|
vd->vdev_deflate_ratio;
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
return (dsize);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
bp_get_dsize_sync(spa_t *spa, const blkptr_t *bp)
|
|
|
|
{
|
|
|
|
uint64_t dsize = 0;
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int d = 0; d < BP_GET_NDVAS(bp); d++)
|
2010-05-29 00:45:14 +04:00
|
|
|
dsize += dva_get_dsize_sync(spa, &bp->blk_dva[d]);
|
|
|
|
|
|
|
|
return (dsize);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
bp_get_dsize(spa_t *spa, const blkptr_t *bp)
|
|
|
|
{
|
|
|
|
uint64_t dsize = 0;
|
|
|
|
|
|
|
|
spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
|
|
|
|
|
2017-11-04 23:25:13 +03:00
|
|
|
for (int d = 0; d < BP_GET_NDVAS(bp); d++)
|
2010-05-29 00:45:14 +04:00
|
|
|
dsize += dva_get_dsize_sync(spa, &bp->blk_dva[d]);
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
spa_config_exit(spa, SCL_VDEV, FTAG);
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
return (dsize);
|
2008-11-20 23:01:55 +03:00
|
|
|
}
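/*
 * Illustrative sketch, not part of the original source: summing the
 * deflated ("dsize") space consumed by a hypothetical array of block
 * pointers.  Only bp_get_dsize() itself comes from the surrounding code;
 * the helper and its arguments are made up for illustration.
 */
static uint64_t
example_sum_bp_dsize(spa_t *spa, const blkptr_t *bps, int nbps)
{
	uint64_t total = 0;

	for (int i = 0; i < nbps; i++)
		total += bp_get_dsize(spa, &bps[i]);

	return (total);
}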
|
|
|
|
|
2017-09-27 04:45:19 +03:00
|
|
|
uint64_t
|
|
|
|
spa_dirty_data(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_dsl_pool->dp_dirty_total);
|
|
|
|
}
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
/*
|
|
|
|
* ==========================================================================
|
|
|
|
* Initialization and Termination
|
|
|
|
* ==========================================================================
|
|
|
|
*/
|
|
|
|
|
|
|
|
static int
|
|
|
|
spa_name_compare(const void *a1, const void *a2)
|
|
|
|
{
|
|
|
|
const spa_t *s1 = a1;
|
|
|
|
const spa_t *s2 = a2;
|
|
|
|
int s;
|
|
|
|
|
|
|
|
s = strcmp(s1->spa_name, s2->spa_name);
|
2016-08-27 21:12:53 +03:00
|
|
|
|
|
|
|
return (AVL_ISIGN(s));
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
2010-08-26 20:52:41 +04:00
|
|
|
spa_boot_init(void)
|
2008-11-20 23:01:55 +03:00
|
|
|
{
|
|
|
|
spa_config_load();
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_init(int mode)
|
|
|
|
{
|
|
|
|
mutex_init(&spa_namespace_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
mutex_init(&spa_spare_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
mutex_init(&spa_l2cache_lock, NULL, MUTEX_DEFAULT, NULL);
|
|
|
|
cv_init(&spa_namespace_cv, NULL, CV_DEFAULT, NULL);
|
|
|
|
|
|
|
|
avl_create(&spa_namespace_avl, spa_name_compare, sizeof (spa_t),
|
|
|
|
offsetof(spa_t, spa_avl));
|
|
|
|
|
|
|
|
avl_create(&spa_spare_avl, spa_spare_compare, sizeof (spa_aux_t),
|
|
|
|
offsetof(spa_aux_t, aux_avl));
|
|
|
|
|
|
|
|
avl_create(&spa_l2cache_avl, spa_l2cache_compare, sizeof (spa_aux_t),
|
|
|
|
offsetof(spa_aux_t, aux_avl));
|
|
|
|
|
2009-01-16 00:59:39 +03:00
|
|
|
spa_mode_global = mode;
|
2008-11-20 23:01:55 +03:00
|
|
|
|
2013-05-17 01:18:06 +04:00
|
|
|
#ifndef _KERNEL
|
|
|
|
if (spa_mode_global != FREAD && dprintf_find_string("watch")) {
|
|
|
|
struct sigaction sa;
|
|
|
|
|
|
|
|
sa.sa_flags = SA_SIGINFO;
|
|
|
|
sigemptyset(&sa.sa_mask);
|
|
|
|
sa.sa_sigaction = arc_buf_sigsegv;
|
|
|
|
|
|
|
|
if (sigaction(SIGSEGV, &sa, NULL) == -1) {
|
|
|
|
perror("could not enable watchpoints: "
|
|
|
|
"sigaction(SIGSEGV, ...) = ");
|
|
|
|
} else {
|
|
|
|
arc_watch = B_TRUE;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2010-08-26 22:42:43 +04:00
|
|
|
fm_init();
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_init();
|
2008-11-20 23:01:55 +03:00
|
|
|
unique_init();
|
Illumos #4101, #4102, #4103, #4105, #4106
4101 metaslab_debug should allow for fine-grained control
4102 space_maps should store more information about themselves
4103 space map object blocksize should be increased
4105 removing a mirrored log device results in a leaked object
4106 asynchronously load metaslab
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Sebastien Roy <seb@delphix.com>
Approved by: Garrett D'Amore <garrett@damore.org>
Prior to this patch, space_maps were preferred solely based on the
amount of free space left in each. Unfortunately, this heuristic didn't
contain any information about the make-up of that free space, which
meant we could keep preferring and loading a highly fragmented space map
that wouldn't actually have enough contiguous space to satisfy the
allocation; then unloading that space_map and repeating the process.
This change modifies the space_maps to store additional information
about the contiguous space in the space_map, so that we can use this
information to make a better decision about which space_map to load.
This requires reallocating all space_map objects to increase their
bonus buffer sizes enough to fit the new metadata.
The above feature can be enabled via a new feature flag introduced by
this change: com.delphix:spacemap_histogram
In addition to the above, this patch allows the space_map block size to
be increased. Currently the block size is set to 4K, which has
certain implications including the following:
* 4K sector devices will not see any compression benefit
* large space_maps require more metadata on-disk
* large space_maps require more time to load (typically random reads)
Now the space_map block size can adjust as needed up to the maximum size
set via the space_map_max_blksz variable.
A bug was fixed which resulted in potentially leaking an object when
removing a mirrored log device. The previous logic for vdev_remove() did
not deal with removing top-level vdevs that are interior vdevs (i.e.
mirror) correctly. The problem would occur when removing a mirrored log
device, and result in the DTL space map object being leaked; because
top-level vdevs don't have DTL space map objects associated with them.
References:
https://www.illumos.org/issues/4101
https://www.illumos.org/issues/4102
https://www.illumos.org/issues/4103
https://www.illumos.org/issues/4105
https://www.illumos.org/issues/4106
https://github.com/illumos/illumos-gate/commit/0713e23
Porting notes:
A handful of kmem_alloc() calls were converted to kmem_zalloc(). Also,
the KM_PUSHPAGE and TQ_PUSHPAGE flags were used as necessary.
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2488
2013-10-02 01:25:53 +04:00
|
|
|
range_tree_init();
|
2017-01-12 22:52:56 +03:00
|
|
|
metaslab_alloc_trace_init();
|
2013-11-20 01:34:46 +04:00
|
|
|
ddt_init();
|
2008-11-20 23:01:55 +03:00
|
|
|
zio_init();
|
|
|
|
dmu_init();
|
|
|
|
zil_init();
|
|
|
|
vdev_cache_stat_init();
|
2017-08-04 13:23:10 +03:00
|
|
|
vdev_mirror_stat_init();
|
SIMD implementation of vdev_raidz generate and reconstruct routines
This is a new implementation of RAIDZ1/2/3 routines using x86_64
scalar, SSE, and AVX2 instruction sets. Included are 3 parity
generation routines (P, PQ, and PQR) and 7 reconstruction routines,
for all RAIDZ levels. On module load, a quick benchmark of supported
routines will select the fastest for each operation and they will
be used at runtime. The original implementation is still present and
can be selected via a module parameter.
Patch contains:
- specialized gen/rec routines for all RAIDZ levels,
- new scalar raidz implementation (unrolled),
- two x86_64 SIMD implementations (SSE and AVX2 instructions sets),
- fastest routines selected on module load (benchmark).
- cmd/raidz_test - verify and benchmark all implementations
- added raidz_test to the ZFS Test Suite
New zfs module parameters:
- zfs_vdev_raidz_impl (str): selects the implementation to use. On
module load, the parameter will only accept the first 3 options, and
the other implementations can be selected once the module has finished
loading. Possible values for this option are:
"fastest" - use the fastest math available
"original" - use the original raidz code
"scalar" - new scalar impl
"sse" - new SSE impl if available
"avx2" - new AVX2 impl if available
See contents of `/sys/module/zfs/parameters/zfs_vdev_raidz_impl` to
get the list of supported values. If an implementation is not supported
on the system, it will not be shown. The currently selected option is
enclosed in `[]`.
Signed-off-by: Gvozden Neskovic <neskovic@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4328
2016-04-25 11:04:31 +03:00
|
|
|
vdev_raidz_math_init();
|
2016-12-21 21:47:15 +03:00
|
|
|
vdev_file_init();
|
2008-11-20 23:01:55 +03:00
|
|
|
zfs_prop_init();
|
|
|
|
zpool_prop_init();
|
2012-12-14 03:24:15 +04:00
|
|
|
zpool_feature_init();
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_config_load();
|
2008-12-03 23:09:06 +03:00
|
|
|
l2arc_start();
|
2017-11-16 04:27:01 +03:00
|
|
|
scan_init();
|
2017-03-23 03:58:47 +03:00
|
|
|
qat_init();
|
2008-11-20 23:01:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_fini(void)
|
|
|
|
{
|
2008-12-03 23:09:06 +03:00
|
|
|
l2arc_stop();
|
|
|
|
|
2008-11-20 23:01:55 +03:00
|
|
|
spa_evict_all();
|
|
|
|
|
2016-12-21 21:47:15 +03:00
|
|
|
vdev_file_fini();
|
2008-11-20 23:01:55 +03:00
|
|
|
vdev_cache_stat_fini();
|
2017-08-04 13:23:10 +03:00
|
|
|
vdev_mirror_stat_fini();
|
2016-04-25 11:04:31 +03:00
|
|
|
vdev_raidz_math_fini();
|
2008-11-20 23:01:55 +03:00
|
|
|
zil_fini();
|
|
|
|
dmu_fini();
|
|
|
|
zio_fini();
|
2013-11-20 01:34:46 +04:00
|
|
|
ddt_fini();
|
2017-01-12 22:52:56 +03:00
|
|
|
metaslab_alloc_trace_fini();
|
2013-10-02 01:25:53 +04:00
|
|
|
range_tree_fini();
|
2008-11-20 23:01:55 +03:00
|
|
|
unique_fini();
|
2018-10-01 20:42:05 +03:00
|
|
|
zfs_refcount_fini();
|
2010-08-26 22:42:43 +04:00
|
|
|
fm_fini();
|
2017-11-16 04:27:01 +03:00
|
|
|
scan_fini();
|
2017-03-23 03:58:47 +03:00
|
|
|
qat_fini();
|
2008-11-20 23:01:55 +03:00
|
|
|
|
|
|
|
avl_destroy(&spa_namespace_avl);
|
|
|
|
avl_destroy(&spa_spare_avl);
|
|
|
|
avl_destroy(&spa_l2cache_avl);
|
|
|
|
|
|
|
|
cv_destroy(&spa_namespace_cv);
|
|
|
|
mutex_destroy(&spa_namespace_lock);
|
|
|
|
mutex_destroy(&spa_spare_lock);
|
|
|
|
mutex_destroy(&spa_l2cache_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Return whether this pool has slogs. No locking needed.
|
|
|
|
* It's not a problem if the wrong answer is returned as it's only for
|
|
|
|
 * performance and not correctness.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
spa_has_slogs(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_log_class->mc_rotor != NULL);
|
|
|
|
}
|
2008-12-03 23:09:06 +03:00
|
|
|
|
2010-05-29 00:45:14 +04:00
|
|
|
spa_log_state_t
|
|
|
|
spa_get_log_state(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_log_state);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_set_log_state(spa_t *spa, spa_log_state_t state)
|
|
|
|
{
|
|
|
|
spa->spa_log_state = state;
|
|
|
|
}
|
|
|
|
|
2008-12-03 23:09:06 +03:00
|
|
|
boolean_t
|
|
|
|
spa_is_root(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_is_root);
|
|
|
|
}
|
2009-01-16 00:59:39 +03:00
|
|
|
|
|
|
|
boolean_t
|
|
|
|
spa_writeable(spa_t *spa)
|
|
|
|
{
|
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debugability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
To iterate on top of that, there are a few changes that were made to make the
import process more resilient and crash free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared together and if there was
a mismatch then the load process was aborted and an error was returned.
The latter was a good way to ensure a pool does not get corrupted, however it
made the pool load process needlessly fragile in cases where the vdev
configuration changed or the userland configuration was outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail we simply avoid issuing reads to the invalid DVAs.
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now since the pool will import even if the config is outdated and didn't,
for instance, register a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced in order that the zpool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data in order that device evacuation takes longer in
a couple of tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
2016-07-22 17:39:36 +03:00
|
|
|
return (!!(spa->spa_mode & FWRITE) && spa->spa_trust_config);
|
2009-01-16 00:59:39 +03:00
|
|
|
}
|
|
|
|
|
2014-07-18 19:08:31 +04:00
|
|
|
/*
|
|
|
|
* Returns true if there is a pending sync task in any of the current
|
|
|
|
* syncing txg, the current quiescing txg, or the current open txg.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
spa_has_pending_synctask(spa_t *spa)
|
|
|
|
{
|
2016-12-17 01:11:29 +03:00
|
|
|
return (!txg_all_lists_empty(&spa->spa_dsl_pool->dp_sync_tasks) ||
|
|
|
|
!txg_all_lists_empty(&spa->spa_dsl_pool->dp_early_sync_tasks));
|
2014-07-18 19:08:31 +04:00
|
|
|
}
|
|
|
|
|
2009-01-16 00:59:39 +03:00
|
|
|
int
|
|
|
|
spa_mode(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_mode);
|
|
|
|
}
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_bootfs(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_bootfs);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_delegation(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_delegation);
|
|
|
|
}
|
|
|
|
|
|
|
|
objset_t *
|
|
|
|
spa_meta_objset(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_meta_objset);
|
|
|
|
}
|
|
|
|
|
|
|
|
enum zio_checksum
|
|
|
|
spa_dedup_checksum(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_dedup_checksum);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Reset pool scan stat per scan pass (or reboot).
|
|
|
|
*/
|
|
|
|
void
|
|
|
|
spa_scan_stat_init(spa_t *spa)
|
|
|
|
{
|
|
|
|
/* data not stored on disk */
|
|
|
|
spa->spa_scan_pass_start = gethrestime_sec();
|
2017-07-07 08:16:13 +03:00
|
|
|
if (dsl_scan_is_paused_scrub(spa->spa_dsl_pool->dp_scan))
|
|
|
|
spa->spa_scan_pass_scrub_pause = spa->spa_scan_pass_start;
|
|
|
|
else
|
|
|
|
spa->spa_scan_pass_scrub_pause = 0;
|
|
|
|
spa->spa_scan_pass_scrub_spent_paused = 0;
|
2010-05-29 00:45:14 +04:00
|
|
|
spa->spa_scan_pass_exam = 0;
|
2017-11-16 04:27:01 +03:00
|
|
|
spa->spa_scan_pass_issued = 0;
|
2010-05-29 00:45:14 +04:00
|
|
|
vdev_scan_stat_init(spa->spa_root_vdev);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Get scan stats for zpool status reports
|
|
|
|
*/
|
|
|
|
int
|
|
|
|
spa_scan_get_stats(spa_t *spa, pool_scan_stat_t *ps)
|
|
|
|
{
|
|
|
|
dsl_scan_t *scn = spa->spa_dsl_pool ? spa->spa_dsl_pool->dp_scan : NULL;
|
|
|
|
|
|
|
|
if (scn == NULL || scn->scn_phys.scn_func == POOL_SCAN_NONE)
|
2013-03-08 22:41:28 +04:00
|
|
|
return (SET_ERROR(ENOENT));
|
2010-05-29 00:45:14 +04:00
|
|
|
bzero(ps, sizeof (pool_scan_stat_t));
|
|
|
|
|
|
|
|
/* data stored on disk */
|
|
|
|
ps->pss_func = scn->scn_phys.scn_func;
|
2017-11-16 04:27:01 +03:00
|
|
|
ps->pss_state = scn->scn_phys.scn_state;
|
2010-05-29 00:45:14 +04:00
|
|
|
ps->pss_start_time = scn->scn_phys.scn_start_time;
|
|
|
|
ps->pss_end_time = scn->scn_phys.scn_end_time;
|
|
|
|
ps->pss_to_examine = scn->scn_phys.scn_to_examine;
|
2017-11-30 20:40:13 +03:00
|
|
|
ps->pss_examined = scn->scn_phys.scn_examined;
|
2010-05-29 00:45:14 +04:00
|
|
|
ps->pss_to_process = scn->scn_phys.scn_to_process;
|
|
|
|
ps->pss_processed = scn->scn_phys.scn_processed;
|
|
|
|
ps->pss_errors = scn->scn_phys.scn_errors;
|
|
|
|
|
|
|
|
/* data not stored on disk */
|
|
|
|
ps->pss_pass_exam = spa->spa_scan_pass_exam;
|
2017-11-30 20:40:13 +03:00
|
|
|
ps->pss_pass_start = spa->spa_scan_pass_start;
|
2017-07-07 08:16:13 +03:00
|
|
|
ps->pss_pass_scrub_pause = spa->spa_scan_pass_scrub_pause;
|
|
|
|
ps->pss_pass_scrub_spent_paused = spa->spa_scan_pass_scrub_spent_paused;
|
2017-11-30 20:40:13 +03:00
|
|
|
ps->pss_pass_issued = spa->spa_scan_pass_issued;
|
|
|
|
ps->pss_issued =
|
|
|
|
scn->scn_issued_before_pass + spa->spa_scan_pass_issued;
|
2010-05-29 00:45:14 +04:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
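/*
 * Illustrative sketch, not part of the original source: a hypothetical
 * consumer of the accessor above.  ENOENT simply means no scan has ever
 * been requested on this pool, which callers usually treat as "nothing to
 * report" rather than an error.
 */
static void
example_report_scan_progress(spa_t *spa)
{
	pool_scan_stat_t ps;

	if (spa_scan_get_stats(spa, &ps) != 0)
		return;		/* no scan to report */

	/* pss_examined versus pss_to_examine gives a rough progress figure. */
	(void) ps.pss_examined;
	(void) ps.pss_to_examine;
}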
|
2010-08-26 22:49:16 +04:00
|
|
|
|
2014-11-03 23:15:08 +03:00
|
|
|
int
|
|
|
|
spa_maxblocksize(spa_t *spa)
|
|
|
|
{
|
|
|
|
if (spa_feature_is_enabled(spa, SPA_FEATURE_LARGE_BLOCKS))
|
|
|
|
return (SPA_MAXBLOCKSIZE);
|
|
|
|
else
|
|
|
|
return (SPA_OLD_MAXBLOCKSIZE);
|
|
|
|
}
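/*
 * Illustrative sketch, not part of the original source: a hypothetical
 * caller clamping a requested block size to what the pool's feature flags
 * allow (SPA_MAXBLOCKSIZE with large_blocks enabled, SPA_OLD_MAXBLOCKSIZE
 * otherwise).
 */
static int
example_clamp_blocksize(spa_t *spa, int requested)
{
	int max = spa_maxblocksize(spa);

	return (requested > max ? max : requested);
}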
|
|
|
|
|
2016-09-22 19:30:13 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
 * Returns the txg in which the last device removal completed. No indirect mappings
|
|
|
|
* have been added since this txg.
|
|
|
|
*/
|
|
|
|
uint64_t
|
|
|
|
spa_get_last_removal_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
uint64_t vdevid;
|
|
|
|
uint64_t ret = -1ULL;
|
|
|
|
|
|
|
|
spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
|
|
|
|
/*
|
|
|
|
* sr_prev_indirect_vdev is only modified while holding all the
|
|
|
|
* config locks, so it is sufficient to hold SCL_VDEV as reader when
|
|
|
|
* examining it.
|
|
|
|
*/
|
|
|
|
vdevid = spa->spa_removing_phys.sr_prev_indirect_vdev;
|
|
|
|
|
|
|
|
while (vdevid != -1ULL) {
|
|
|
|
vdev_t *vd = vdev_lookup_top(spa, vdevid);
|
|
|
|
vdev_indirect_births_t *vib = vd->vdev_indirect_births;
|
|
|
|
|
|
|
|
ASSERT3P(vd->vdev_ops, ==, &vdev_indirect_ops);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the removal did not remap any data, we don't care.
|
|
|
|
*/
|
|
|
|
if (vdev_indirect_births_count(vib) != 0) {
|
|
|
|
ret = vdev_indirect_births_last_entry_txg(vib);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
vdevid = vd->vdev_indirect_config.vic_prev_indirect_vdev;
|
|
|
|
}
|
|
|
|
spa_config_exit(spa, SCL_VDEV, FTAG);
|
|
|
|
|
|
|
|
IMPLY(ret != -1ULL,
|
|
|
|
spa_feature_is_active(spa, SPA_FEATURE_DEVICE_REMOVAL));
|
|
|
|
|
|
|
|
return (ret);
|
|
|
|
}
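/*
 * Illustrative sketch, not part of the original source: -1ULL from the
 * function above means no completed removal has remapped any data, so a
 * hypothetical caller can use it to skip remap-related work entirely.
 */
static boolean_t
example_pool_has_remapped_data(spa_t *spa)
{
	return (spa_get_last_removal_txg(spa) != -1ULL);
}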
|
|
|
|
|
Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two, one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode value.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interfaces names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by a large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
2016-03-17 04:25:34 +03:00
|
|
|
int
|
|
|
|
spa_maxdnodesize(spa_t *spa)
|
|
|
|
{
|
|
|
|
if (spa_feature_is_enabled(spa, SPA_FEATURE_LARGE_DNODE))
|
|
|
|
return (DNODE_MAX_SIZE);
|
|
|
|
else
|
|
|
|
return (DNODE_MIN_SIZE);
|
|
|
|
}
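/*
 * Illustrative sketch, not part of the original source: mirroring the
 * block-size helper earlier, a hypothetical caller would bound a requested
 * dnode size by this accessor (DNODE_MAX_SIZE with large_dnode enabled,
 * DNODE_MIN_SIZE otherwise).
 */
static int
example_clamp_dnodesize(spa_t *spa, int requested)
{
	int max = spa_maxdnodesize(spa);

	return (requested > max ? max : requested);
}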
|
|
|
|
|
Multi-modifier protection (MMP)
Add multihost=on|off pool property to control MMP. When enabled
a new thread writes uberblocks to the last slot in each label, at a
set frequency, to indicate to other hosts the pool is actively imported.
Each of these uberblocks is the last synced uberblock with an updated
timestamp. Property defaults to off.
During tryimport, find the "best" uberblock (newest txg and timestamp)
repeatedly, checking for change in the found uberblock. Include the
results of the activity test in the config returned by tryimport.
These results are reported to user in "zpool import".
Allow the user to control the period between MMP writes, and the
duration of the activity test on import, via a new module parameter
zfs_multihost_interval. The period is specified in milliseconds. The
activity test duration is calculated from this value, and from the
mmp_delay in the "best" uberblock found initially.
Add a kstat interface to export statistics about Multiple Modifier
Protection (MMP) updates. Include the last synced txg number, the
timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
label that received the last MMP update, and the VDEV path. Abbreviated
output below.
$ cat /proc/spl/kstat/zfs/mypool/multihost
31 0 0x01 10 880 105092382393521 105144180101111
txg timestamp mmp_delay vdev_guid vdev_label vdev_path
20468 261337 250274925 68396651780 3 /dev/sda
20468 261339 252023374 6267402363293 1 /dev/sdc
20468 261340 252000858 6698080955233 1 /dev/sdx
20468 261341 251980635 783892869810 2 /dev/sdy
20468 261342 253385953 8923255792467 3 /dev/sdd
20468 261344 253336622 042125143176 0 /dev/sdab
20468 261345 253310522 1200778101278 2 /dev/sde
20468 261346 253286429 0950576198362 2 /dev/sdt
20468 261347 253261545 96209817917 3 /dev/sds
20468 261349 253238188 8555725937673 3 /dev/sdb
Add a new tunable zfs_multihost_history to specify the number of MMP
updates to store history for. By default it is set to zero meaning that
no MMP statistics are stored.
When using ztest to generate activity, for automated tests of the MMP
function, some test functions interfere with the test. For example, the
pool is exported to run zdb and then imported again. Add a new ztest
function, "-M", to alter ztest behavior to prevent this.
Add new tests to verify the new functionality. Tests provided by
Giuseppe Di Natale.
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: Ned Bass <bass6@llnl.gov>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Olaf Faaland <faaland1@llnl.gov>
Closes #745
Closes #6279
2017-07-08 06:20:35 +03:00
|
|
|
boolean_t
|
|
|
|
spa_multihost(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_multihost ? B_TRUE : B_FALSE);
|
|
|
|
}
|
|
|
|
|
|
|
|
unsigned long
|
|
|
|
spa_get_hostid(void)
|
|
|
|
{
|
|
|
|
unsigned long myhostid;
|
|
|
|
|
|
|
|
#ifdef _KERNEL
|
|
|
|
myhostid = zone_get_hostid(NULL);
|
|
|
|
#else /* _KERNEL */
|
|
|
|
/*
|
|
|
|
* We're emulating the system's hostid in userland, so
|
|
|
|
* we can't use zone_get_hostid().
|
|
|
|
*/
|
|
|
|
(void) ddi_strtoul(hw_serial, NULL, 10, &myhostid);
|
|
|
|
#endif /* _KERNEL */
|
|
|
|
|
|
|
|
return (myhostid);
|
|
|
|
}
|
|
|
|
|
2016-07-22 17:39:36 +03:00
|
|
|
boolean_t
|
|
|
|
spa_trust_config(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_trust_config);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_missing_tvds_allowed(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_missing_tvds_allowed);
|
|
|
|
}
|
|
|
|
|
|
|
|
void
|
|
|
|
spa_set_missing_tvds(spa_t *spa, uint64_t missing)
|
|
|
|
{
|
|
|
|
spa->spa_missing_tvds = missing;
|
|
|
|
}
|
|
|
|
|
2018-06-06 19:33:54 +03:00
|
|
|
/*
|
|
|
|
 * Return the pool state string ("ONLINE", "DEGRADED", "SUSPENDED", etc.).
|
|
|
|
*/
|
|
|
|
const char *
|
|
|
|
spa_state_to_name(spa_t *spa)
|
|
|
|
{
|
|
|
|
vdev_state_t state = spa->spa_root_vdev->vdev_state;
|
|
|
|
vdev_aux_t aux = spa->spa_root_vdev->vdev_stat.vs_aux;
|
|
|
|
|
|
|
|
if (spa_suspended(spa) &&
|
|
|
|
(spa_get_failmode(spa) != ZIO_FAILURE_MODE_CONTINUE))
|
|
|
|
return ("SUSPENDED");
|
|
|
|
|
|
|
|
switch (state) {
|
|
|
|
case VDEV_STATE_CLOSED:
|
|
|
|
case VDEV_STATE_OFFLINE:
|
|
|
|
return ("OFFLINE");
|
|
|
|
case VDEV_STATE_REMOVED:
|
|
|
|
return ("REMOVED");
|
|
|
|
case VDEV_STATE_CANT_OPEN:
|
|
|
|
if (aux == VDEV_AUX_CORRUPT_DATA || aux == VDEV_AUX_BAD_LOG)
|
|
|
|
return ("FAULTED");
|
|
|
|
else if (aux == VDEV_AUX_SPLIT_POOL)
|
|
|
|
return ("SPLIT");
|
|
|
|
else
|
|
|
|
return ("UNAVAIL");
|
|
|
|
case VDEV_STATE_FAULTED:
|
|
|
|
return ("FAULTED");
|
|
|
|
case VDEV_STATE_DEGRADED:
|
|
|
|
return ("DEGRADED");
|
|
|
|
case VDEV_STATE_HEALTHY:
|
|
|
|
return ("ONLINE");
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ("UNKNOWN");
|
|
|
|
}
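/*
 * Illustrative sketch, not part of the original source: the string returned
 * above is what pool health reporting surfaces to users, so a hypothetical
 * "is everything fine" check reduces to a single comparison.
 */
static boolean_t
example_pool_is_online(spa_t *spa)
{
	return (strcmp(spa_state_to_name(spa), "ONLINE") == 0);
}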
|
|
|
|
|
2016-12-17 01:11:29 +03:00
|
|
|
boolean_t
|
|
|
|
spa_top_vdevs_spacemap_addressable(spa_t *spa)
|
|
|
|
{
|
|
|
|
vdev_t *rvd = spa->spa_root_vdev;
|
|
|
|
for (uint64_t c = 0; c < rvd->vdev_children; c++) {
|
|
|
|
if (!vdev_is_spacemap_addressable(rvd->vdev_child[c]))
|
|
|
|
return (B_FALSE);
|
|
|
|
}
|
|
|
|
return (B_TRUE);
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t
|
|
|
|
spa_has_checkpoint(spa_t *spa)
|
|
|
|
{
|
|
|
|
return (spa->spa_checkpoint_txg != 0);
|
|
|
|
}
|
|
|
|
|
|
|
|
boolean_t
|
|
|
|
spa_importing_readonly_checkpoint(spa_t *spa)
|
|
|
|
{
|
|
|
|
return ((spa->spa_import_flags & ZFS_IMPORT_CHECKPOINT) &&
|
|
|
|
spa->spa_mode == FREAD);
|
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t
|
|
|
|
spa_min_claim_txg(spa_t *spa)
|
|
|
|
{
|
|
|
|
uint64_t checkpoint_txg = spa->spa_uberblock.ub_checkpoint_txg;
|
|
|
|
|
|
|
|
if (checkpoint_txg != 0)
|
|
|
|
return (checkpoint_txg + 1);
|
|
|
|
|
|
|
|
return (spa->spa_first_txg);
|
|
|
|
}
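/*
 * Illustrative sketch, not part of the original source: ZIL claiming is
 * expected to leave blocks born before the checkpoint alone, so a
 * hypothetical claim loop would typically skip anything older than this
 * cutoff.
 */
static boolean_t
example_block_claimable(spa_t *spa, uint64_t birth_txg)
{
	return (birth_txg >= spa_min_claim_txg(spa));
}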
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If there is a checkpoint, async destroys may consume more space from
|
|
|
|
* the pool instead of freeing it. In an attempt to save the pool from
|
|
|
|
* getting suspended when it is about to run out of space, we stop
|
|
|
|
* processing async destroys.
|
|
|
|
*/
|
|
|
|
boolean_t
|
|
|
|
spa_suspend_async_destroy(spa_t *spa)
|
|
|
|
{
|
|
|
|
dsl_pool_t *dp = spa_get_dsl(spa);
|
|
|
|
|
|
|
|
uint64_t unreserved = dsl_pool_unreserved_space(dp,
|
|
|
|
ZFS_SPACE_CHECK_EXTRA_RESERVED);
|
|
|
|
uint64_t used = dsl_dir_phys(dp->dp_root_dir)->dd_used_bytes;
|
|
|
|
uint64_t avail = (unreserved > used) ? (unreserved - used) : 0;
|
|
|
|
|
|
|
|
if (spa_has_checkpoint(spa) && avail == 0)
|
|
|
|
return (B_TRUE);
|
|
|
|
|
|
|
|
return (B_FALSE);
|
|
|
|
}
|
|
|
|
|
#if defined(_KERNEL)

#include <linux/mod_compat.h>

static int
param_set_deadman_failmode(const char *val, zfs_kernel_param_t *kp)
{
	spa_t *spa = NULL;
	char *p;

	if (val == NULL)
		return (SET_ERROR(-EINVAL));

	if ((p = strchr(val, '\n')) != NULL)
		*p = '\0';

	if (strcmp(val, "wait") != 0 && strcmp(val, "continue") != 0 &&
	    strcmp(val, "panic") != 0)
		return (SET_ERROR(-EINVAL));

	if (spa_mode_global != 0) {
		mutex_enter(&spa_namespace_lock);
		while ((spa = spa_next(spa)) != NULL)
			spa_set_deadman_failmode(spa, val);
		mutex_exit(&spa_namespace_lock);
	}

	return (param_set_charp(val, kp));
}

static int
param_set_deadman_ziotime(const char *val, zfs_kernel_param_t *kp)
{
	spa_t *spa = NULL;
	int error;

	error = param_set_ulong(val, kp);
	if (error < 0)
		return (SET_ERROR(error));

	if (spa_mode_global != 0) {
		mutex_enter(&spa_namespace_lock);
		while ((spa = spa_next(spa)) != NULL)
			spa->spa_deadman_ziotime =
			    MSEC2NSEC(zfs_deadman_ziotime_ms);
		mutex_exit(&spa_namespace_lock);
	}

	return (0);
}

static int
param_set_deadman_synctime(const char *val, zfs_kernel_param_t *kp)
{
	spa_t *spa = NULL;
	int error;

	error = param_set_ulong(val, kp);
	if (error < 0)
		return (SET_ERROR(error));

	if (spa_mode_global != 0) {
		mutex_enter(&spa_namespace_lock);
		while ((spa = spa_next(spa)) != NULL)
			spa->spa_deadman_synctime =
			    MSEC2NSEC(zfs_deadman_synctime_ms);
		mutex_exit(&spa_namespace_lock);
	}

	return (0);
}
Add limits to spa_slop_shift tunable
This change adds limits to the possible spa_slop_shift values set via
the sysfs interface. Accepted values are from a minimum of 1 to a
maximum of 31 (inclusive): these limits are based on the following
values observed on a 128PB file-vdev test pool:
spa_slop_shift=1, spa_get_slop_space=63.5PiB
spa_slop_shift=2, spa_get_slop_space=31.8PiB
spa_slop_shift=3, spa_get_slop_space=15.9PiB
spa_slop_shift=4, spa_get_slop_space=7.9PiB
spa_slop_shift=5, spa_get_slop_space=4PiB
spa_slop_shift=6, spa_get_slop_space=2PiB
...
spa_slop_shift=25, spa_get_slop_space=4GiB
spa_slop_shift=26, spa_get_slop_space=2GiB
spa_slop_shift=27, spa_get_slop_space=1016MiB
spa_slop_shift=28, spa_get_slop_space=508MiB
spa_slop_shift=29, spa_get_slop_space=254MiB
spa_slop_shift=30, spa_get_slop_space=128MiB
spa_slop_shift=31, spa_get_slop_space=128MiB
spa_slop_shift=32, spa_get_slop_space=128MiB
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #7876
Closes #7900
static int
param_set_slop_shift(const char *buf, zfs_kernel_param_t *kp)
{
	unsigned long val;
	int error;

	error = kstrtoul(buf, 0, &val);
	if (error)
		return (SET_ERROR(error));

	if (val < 1 || val > 31)
		return (SET_ERROR(-EINVAL));

	error = param_set_int(buf, kp);
	if (error < 0)
		return (SET_ERROR(error));

	return (0);
}
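To make the shift-to-space relationship in the table above concrete, here is a minimal standalone sketch (not the in-tree spa_get_slop_space() code): slop space is roughly the pool size divided by 2^spa_slop_shift, with a lower bound. The pool size, the 128 MiB floor, and the helper names are assumptions chosen to reproduce the table; the real function applies additional adjustments.

/* Illustrative sketch only; MIN_SLOP and slop_space_sketch() are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define MIN_SLOP	(128ULL << 20)	/* assumed 128 MiB floor */

static uint64_t
slop_space_sketch(uint64_t pool_bytes, unsigned int slop_shift)
{
	uint64_t slop = pool_bytes >> slop_shift;	/* 1/2^shift of the pool */

	return (slop > MIN_SLOP ? slop : MIN_SLOP);	/* never drop below the floor */
}

int
main(void)
{
	uint64_t pool = 127ULL << 50;	/* roughly the 128PB file-vdev test pool */

	for (unsigned int shift = 1; shift <= 31; shift++)
		printf("spa_slop_shift=%u -> %llu MiB\n", shift,
		    (unsigned long long)(slop_space_sketch(pool, shift) >> 20));
	return (0);
}

Running this reproduces the trend in the table (for example, shift=27 gives 1016 MiB and shifts 30 and above bottom out at the 128 MiB floor), which is why values above 31 buy nothing and are rejected by param_set_slop_shift().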
/* Namespace manipulation */
EXPORT_SYMBOL(spa_lookup);
EXPORT_SYMBOL(spa_add);
EXPORT_SYMBOL(spa_remove);
EXPORT_SYMBOL(spa_next);

/* Refcount functions */
EXPORT_SYMBOL(spa_open_ref);
EXPORT_SYMBOL(spa_close);
EXPORT_SYMBOL(spa_refcount_zero);

/* Pool configuration lock */
EXPORT_SYMBOL(spa_config_tryenter);
EXPORT_SYMBOL(spa_config_enter);
EXPORT_SYMBOL(spa_config_exit);
EXPORT_SYMBOL(spa_config_held);

/* Pool vdev add/remove lock */
EXPORT_SYMBOL(spa_vdev_enter);
EXPORT_SYMBOL(spa_vdev_exit);

/* Pool vdev state change lock */
EXPORT_SYMBOL(spa_vdev_state_enter);
EXPORT_SYMBOL(spa_vdev_state_exit);

/* Accessor functions */
EXPORT_SYMBOL(spa_shutting_down);
EXPORT_SYMBOL(spa_get_dsl);
EXPORT_SYMBOL(spa_get_rootblkptr);
EXPORT_SYMBOL(spa_set_rootblkptr);
EXPORT_SYMBOL(spa_altroot);
EXPORT_SYMBOL(spa_sync_pass);
EXPORT_SYMBOL(spa_name);
EXPORT_SYMBOL(spa_guid);
EXPORT_SYMBOL(spa_last_synced_txg);
EXPORT_SYMBOL(spa_first_txg);
EXPORT_SYMBOL(spa_syncing_txg);
EXPORT_SYMBOL(spa_version);
EXPORT_SYMBOL(spa_state);
EXPORT_SYMBOL(spa_load_state);
EXPORT_SYMBOL(spa_freeze_txg);
EXPORT_SYMBOL(spa_get_dspace);
EXPORT_SYMBOL(spa_update_dspace);
EXPORT_SYMBOL(spa_deflate);
EXPORT_SYMBOL(spa_normal_class);
EXPORT_SYMBOL(spa_log_class);
EXPORT_SYMBOL(spa_special_class);
EXPORT_SYMBOL(spa_preferred_class);
EXPORT_SYMBOL(spa_max_replication);
EXPORT_SYMBOL(spa_prev_software_version);
EXPORT_SYMBOL(spa_get_failmode);
EXPORT_SYMBOL(spa_suspended);
EXPORT_SYMBOL(spa_bootfs);
EXPORT_SYMBOL(spa_delegation);
EXPORT_SYMBOL(spa_meta_objset);
EXPORT_SYMBOL(spa_maxblocksize);
Implement large_dnode pool feature
Justification
-------------
This feature adds support for variable length dnodes. Our motivation is
to eliminate the overhead associated with using spill blocks. Spill
blocks are used to store system attribute data (i.e. file metadata) that
does not fit in the dnode's bonus buffer. By allowing a larger bonus
buffer area the use of a spill block can be avoided. Spill blocks
potentially incur an additional read I/O for every dnode in a dnode
block. As a worst case example, reading 32 dnodes from a 16k dnode block
and all of the spill blocks could issue 33 separate reads. Now suppose
those dnodes have size 1024 and therefore don't need spill blocks. Then
the worst case number of blocks read is reduced from 33 to two--one
per dnode block. In practice spill blocks may tend to be co-located on
disk with the dnode blocks so the reduction in I/O would not be this
drastic. In a badly fragmented pool, however, the improvement could be
significant.
ZFS-on-Linux systems that make heavy use of extended attributes would
benefit from this feature. In particular, ZFS-on-Linux supports the
xattr=sa dataset property which allows file extended attribute data
to be stored in the dnode bonus buffer as an alternative to the
traditional directory-based format. Workloads such as SELinux and the
Lustre distributed filesystem often store enough xattr data to force
spill blocks when xattr=sa is in effect. Large dnodes may therefore
provide a performance benefit to such systems.
Other use cases that may benefit from this feature include files with
large ACLs and symbolic links with long target names. Furthermore,
this feature may be desirable on other platforms in case future
applications or features are developed that could make use of a
larger bonus buffer area.
Implementation
--------------
The size of a dnode may be a multiple of 512 bytes up to the size of
a dnode block (currently 16384 bytes). A dn_extra_slots field was
added to the current on-disk dnode_phys_t structure to describe the
size of the physical dnode on disk. The 8 bits for this field were
taken from the zero filled dn_pad2 field. The field represents how
many "extra" dnode_phys_t slots a dnode consumes in its dnode block.
This convention results in a value of 0 for 512 byte dnodes which
preserves on-disk format compatibility with older software.
Similarly, the in-memory dnode_t structure has a new dn_num_slots field
to represent the total number of dnode_phys_t slots consumed on disk.
Thus dn->dn_num_slots is 1 greater than the corresponding
dnp->dn_extra_slots. This difference in convention was adopted
because, unlike on-disk structures, backward compatibility is not a
concern for in-memory objects, so we used a more natural way to
represent size for a dnode_t.
The default size for newly created dnodes is determined by the value of
a new "dnodesize" dataset property. By default the property is set to
"legacy" which is compatible with older software. Setting the property
to "auto" will allow the filesystem to choose the most suitable dnode
size. Currently this just sets the default dnode size to 1k, but future
code improvements could dynamically choose a size based on observed
workload patterns. Dnodes of varying sizes can coexist within the same
dataset and even within the same dnode block. For example, to enable
automatically-sized dnodes, run
# zfs set dnodesize=auto tank/fish
The user can also specify literal values for the dnodesize property.
These are currently limited to powers of two from 1k to 16k. The
power-of-2 limitation is only for simplicity of the user interface.
Internally the implementation can handle any multiple of 512 up to 16k,
and consumers of the DMU API can specify any legal dnode value.
The size of a new dnode is determined at object allocation time and
stored as a new field in the znode in-memory structure. New DMU
interfaces are added to allow the consumer to specify the dnode size
that a newly allocated object should use. Existing interfaces are
unchanged to avoid having to update every call site and to preserve
compatibility with external consumers such as Lustre. The new
interface names are given below. The versions of these functions that
don't take a dnodesize parameter now just call the _dnsize() versions
with a dnodesize of 0, which means use the legacy dnode size.
New DMU interfaces:
dmu_object_alloc_dnsize()
dmu_object_claim_dnsize()
dmu_object_reclaim_dnsize()
New ZAP interfaces:
zap_create_dnsize()
zap_create_norm_dnsize()
zap_create_flags_dnsize()
zap_create_claim_norm_dnsize()
zap_create_link_dnsize()
The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The
spa_maxdnodesize() function should be used to determine the maximum
bonus length for a pool.
These are a few noteworthy changes to key functions:
* The prototype for dnode_hold_impl() now takes a "slots" parameter.
When the DNODE_MUST_BE_FREE flag is set, this parameter is used to
ensure the hole at the specified object offset is large enough to
hold the dnode being created. The slots parameter is also used
to ensure a dnode does not span multiple dnode blocks. In both of
these cases, if a failure occurs, ENOSPC is returned. Keep in mind,
these failure cases are only possible when using DNODE_MUST_BE_FREE.
If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0.
dnode_hold_impl() will check if the requested dnode is already
consumed as an extra dnode slot by a large dnode, in which case
it returns ENOENT.
* The function dmu_object_alloc() advances to the next dnode block
if dnode_hold_impl() returns an error for a requested object.
This is because the beginning of the next dnode block is the only
location it can safely assume to either be a hole or a valid
starting point for a dnode.
* dnode_next_offset_level() and other functions that iterate
through dnode blocks may no longer use a simple array indexing
scheme. These now use the current dnode's dn_num_slots field to
advance to the next dnode in the block. This is to ensure we
properly skip the current dnode's bonus area and don't interpret it
as a valid dnode.
zdb
---
The zdb command was updated to display a dnode's size under the
"dnsize" column when the object is dumped.
For ZIL create log records, zdb will now display the slot count for
the object.
ztest
-----
Ztest chooses a random dnodesize for every newly created object. The
random distribution is more heavily weighted toward small dnodes to
better simulate real-world datasets.
Unused bonus buffer space is filled with non-zero values computed from
the object number, dataset id, offset, and generation number. This
helps ensure that the dnode traversal code properly skips the interior
regions of large dnodes, and that these interior regions are not
overwritten by data belonging to other dnodes. A new test visits each
object in a dataset. It verifies that the actual dnode size matches what
was stored in the ztest block tag when it was created. It also verifies
that the unused bonus buffer space is filled with the expected data
patterns.
ZFS Test Suite
--------------
Added six new large dnode-specific tests, and integrated the dnodesize
property into existing tests for zfs allow and send/recv.
Send/Receive
------------
ZFS send streams for datasets containing large dnodes cannot be received
on pools that don't support the large_dnode feature. A send stream with
large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be
unrecognized by an incompatible receiving pool so that the zfs receive
will fail gracefully.
While not implemented here, it may be possible to generate a
backward-compatible send stream from a dataset containing large
dnodes. The implementation may be tricky, however, because the send
object record for a large dnode would need to be resized to a 512
byte dnode, possibly kicking in a spill block in the process. This
means we would need to construct a new SA layout and possibly
register it in the SA layout object. The SA layout is normally just
sent as an ordinary object record. But if we are constructing new
layouts while generating the send stream we'd have to build the SA
layout object dynamically and send it at the end of the stream.
For sending and receiving between pools that do support large dnodes,
the drr_object send record type is extended with a new field to store
the dnode slot count. This field was repurposed from unused padding
in the structure.
ZIL Replay
----------
The dnode slot count is stored in the uppermost 8 bits of the lr_foid
field. The bits were unused as the object id is currently capped at
48 bits.
Resizing Dnodes
---------------
It should be possible to resize a dnode when it is dirtied if the
current dnodesize dataset property differs from the dnode's size, but
this functionality is not currently implemented. Clearly a dnode can
only grow if there are sufficient contiguous unused slots in the
dnode block, but it should always be possible to shrink a dnode.
Growing dnodes may be useful to reduce fragmentation in a pool with
many spill blocks in use. Shrinking dnodes may be useful to allow
sending a dataset to a pool that doesn't support the large_dnode
feature.
Feature Reference Counting
--------------------------
The reference count for the large_dnode pool feature tracks the
number of datasets that have ever contained a dnode of size larger
than 512 bytes. The first time a large dnode is created in a dataset
the dataset is converted to an extensible dataset. This is a one-way
operation and the only way to decrement the feature count is to
destroy the dataset, even if the dataset no longer contains any large
dnodes. The complexity of reference counting on a per-dnode basis was
too high, so we chose to track it on a per-dataset basis similarly to
the large_block feature.
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #3542
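The slot convention described above (on-disk dn_extra_slots counts "extra" 512-byte slots, in-memory dn_num_slots counts total slots, so num_slots = extra_slots + 1) can be summarized with a small standalone sketch. This is illustrative only: the constant and helper names below are placeholders, not the in-tree macros.

/* Illustrative sketch of the dnode slot arithmetic; names are hypothetical. */
#include <stdint.h>
#include <assert.h>

#define SLOT_SIZE	512		/* minimum (legacy) dnode size */
#define MAX_DNODE_SIZE	16384		/* one full dnode block */

/* On-disk encoding: extra slots beyond the first; 0 keeps the legacy format. */
static inline uint8_t
dnode_size_to_extra_slots(uint32_t dnode_size)
{
	assert(dnode_size % SLOT_SIZE == 0 && dnode_size <= MAX_DNODE_SIZE);
	return ((uint8_t)(dnode_size / SLOT_SIZE - 1));
}

/* In-memory view: total slots consumed, i.e. dn_extra_slots + 1. */
static inline uint32_t
extra_slots_to_num_slots(uint8_t extra_slots)
{
	return ((uint32_t)extra_slots + 1);
}

For example, a 1k dnode (the "auto" default mentioned above) occupies two slots and is stored with extra_slots = 1, while a legacy 512-byte dnode stores 0 and remains compatible with older software.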
EXPORT_SYMBOL(spa_maxdnodesize);

/* Miscellaneous support routines */
EXPORT_SYMBOL(spa_guid_exists);
EXPORT_SYMBOL(spa_strdup);
EXPORT_SYMBOL(spa_strfree);
EXPORT_SYMBOL(spa_get_random);
EXPORT_SYMBOL(spa_generate_guid);
EXPORT_SYMBOL(snprintf_blkptr);
EXPORT_SYMBOL(spa_freeze);
EXPORT_SYMBOL(spa_upgrade);
EXPORT_SYMBOL(spa_evict_all);
EXPORT_SYMBOL(spa_lookup_by_guid);
EXPORT_SYMBOL(spa_has_spare);
EXPORT_SYMBOL(dva_get_dsize_sync);
EXPORT_SYMBOL(bp_get_dsize_sync);
EXPORT_SYMBOL(bp_get_dsize);
EXPORT_SYMBOL(spa_has_slogs);
EXPORT_SYMBOL(spa_is_root);
EXPORT_SYMBOL(spa_writeable);
EXPORT_SYMBOL(spa_mode);
EXPORT_SYMBOL(spa_namespace_lock);
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debugability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
To iterate on top of that, there's a few changes that were made to make the
import process more resilient and crash free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared together and if there was
a mismatch then the load process was aborted and an error was returned.
The latter was a good way to ensure a pool does not get corrupted, however it
made the pool load process needlessly fragile in cases where the vdev
configuration changed or the userland configuration was outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail we simply avoid issuing reads to the invalid DVAs.
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now since the pool will import even if the config is outdated and didn't,
for instance, register a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced in order that the zpool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data in order that device evacuation takes longer in
a couple tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
2016-07-22 17:39:36 +03:00
|
|
|
EXPORT_SYMBOL(spa_trust_config);
|
|
|
|
EXPORT_SYMBOL(spa_missing_tvds_allowed);
|
|
|
|
EXPORT_SYMBOL(spa_set_missing_tvds);
|
2018-06-06 19:33:54 +03:00
|
|
|
EXPORT_SYMBOL(spa_state_to_name);
|
2016-12-17 01:11:29 +03:00
|
|
|
EXPORT_SYMBOL(spa_importing_readonly_checkpoint);
|
|
|
|
EXPORT_SYMBOL(spa_min_claim_txg);
|
|
|
|
EXPORT_SYMBOL(spa_suspend_async_destroy);
|
|
|
|
EXPORT_SYMBOL(spa_has_checkpoint);
|
|
|
|
EXPORT_SYMBOL(spa_top_vdevs_spacemap_addressable);
|
2013-04-30 02:49:23 +04:00
|
|
|
|
2016-12-12 21:46:26 +03:00
|
|
|
/* BEGIN CSTYLED */
|
2014-12-23 03:54:43 +03:00
|
|
|
module_param(zfs_flags, uint, 0644);
|
Swap DTRACE_PROBE* with Linux tracepoints
This patch leverages Linux tracepoints from within the ZFS on Linux
code base. It also refactors the debug code to bring it back in sync
with Illumos.
The information exported via tracepoints can be used for a variety of
reasons (e.g. debugging, tuning, general exploration/understanding,
etc). It is advantageous to use Linux tracepoints as the mechanism to
export this kind of information (as opposed to something else) for a
number of reasons:
* A number of external tools can make use of our tracepoints
"automatically" (e.g. perf, systemtap)
* Tracepoints are designed to be extremely cheap when disabled
* It's one of the "accepted" ways to export this kind of
information; many other kernel subsystems use tracepoints too.
Unfortunately, though, there are a few caveats as well:
* Linux tracepoints appear to only be available to GPL licensed
modules due to the way certain kernel functions are exported.
Thus, to actually make use of the tracepoints introduced by this
patch, one might have to patch and re-compile the kernel;
exporting the necessary functions to non-GPL modules.
* Prior to upstream kernel version v3.14-rc6-30-g66cc69e, Linux
tracepoints are not available for unsigned kernel modules
(tracepoints will get disabled due to the module's 'F' taint).
Thus, one either has to sign the zfs kernel module prior to
loading it, or use a kernel versioned v3.14-rc6-30-g66cc69e or
newer.
Assuming the above two requirements are satisfied, lets look at an
example of how this patch can be used and what information it exposes
(all commands run as 'root'):
# list all zfs tracepoints available
$ ls /sys/kernel/debug/tracing/events/zfs
enable filter zfs_arc__delete
zfs_arc__evict zfs_arc__hit zfs_arc__miss
zfs_l2arc__evict zfs_l2arc__hit zfs_l2arc__iodone
zfs_l2arc__miss zfs_l2arc__read zfs_l2arc__write
zfs_new_state__mfu zfs_new_state__mru
# enable all zfs tracepoints, clear the tracepoint ring buffer
$ echo 1 > /sys/kernel/debug/tracing/events/zfs/enable
$ echo 0 > /sys/kernel/debug/tracing/trace
# import zpool called 'tank', inspect tracepoint data (each line was
# truncated, they're too long for a commit message otherwise)
$ zpool import tank
$ cat /sys/kernel/debug/tracing/trace | head -n35
# tracer: nop
#
# entries-in-buffer/entries-written: 1219/1219 #P:8
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
lt-zpool-30132 [003] .... 91344.200050: zfs_arc__miss: hdr...
z_rd_int/0-30156 [003] .... 91344.200611: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.201173: zfs_arc__miss: hdr...
z_rd_int/1-30157 [003] .... 91344.201756: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.201795: zfs_arc__miss: hdr...
z_rd_int/2-30158 [003] .... 91344.202099: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202126: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202130: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202134: zfs_arc__hit: hdr ...
lt-zpool-30132 [003] .... 91344.202146: zfs_arc__miss: hdr...
z_rd_int/3-30159 [003] .... 91344.202457: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202484: zfs_arc__miss: hdr...
z_rd_int/4-30160 [003] .... 91344.202866: zfs_new_state__mru...
lt-zpool-30132 [003] .... 91344.202891: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.203034: zfs_arc__miss: hdr...
z_rd_iss/1-30149 [001] .... 91344.203749: zfs_new_state__mru...
lt-zpool-30132 [001] .... 91344.203789: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.203878: zfs_arc__miss: hdr...
z_rd_iss/3-30151 [001] .... 91344.204315: zfs_new_state__mru...
lt-zpool-30132 [001] .... 91344.204332: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204337: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204352: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204356: zfs_arc__hit: hdr ...
lt-zpool-30132 [001] .... 91344.204360: zfs_arc__hit: hdr ...
To highlight the kind of detailed information that is being exported
using this infrastructure, I've taken the first tracepoint line from the
output above and reformatted it such that it fits in 80 columns:
lt-zpool-30132 [003] .... 91344.200050: zfs_arc__miss:
hdr {
dva 0x1:0x40082
birth 15491
cksum0 0x163edbff3a
flags 0x640
datacnt 1
type 1
size 2048
spa 3133524293419867460
state_type 0
access 0
mru_hits 0
mru_ghost_hits 0
mfu_hits 0
mfu_ghost_hits 0
l2_hits 0
refcount 1
} bp {
dva0 0x1:0x40082
dva1 0x1:0x3000e5
dva2 0x1:0x5a006e
cksum 0x163edbff3a:0x75af30b3dd6:0x1499263ff5f2b:0x288bd118815e00
lsize 2048
} zb {
objset 0
object 0
level -1
blkid 0
}
For the specific tracepoint shown here, 'zfs_arc__miss', data is
exported detailing the arc_buf_hdr_t (hdr), blkptr_t (bp), and
zbookmark_t (zb) that caused the ARC miss (down to the exact DVA!).
This kind of precise and detailed information can be extremely valuable
when trying to answer certain kinds of questions.
For anybody unfamiliar but looking to build on this, I found the XFS
source code along with the following three web links to be extremely
helpful:
* http://lwn.net/Articles/379903/
* http://lwn.net/Articles/381064/
* http://lwn.net/Articles/383362/
I should also note the more "boring" aspects of this patch:
* The ZFS_LINUX_COMPILE_IFELSE autoconf macro was modified to
support a sixth parameter. This parameter is used to populate the
contents of the new conftest.h file. If no sixth parameter is
provided, conftest.h will be empty.
* The ZFS_LINUX_TRY_COMPILE_HEADER autoconf macro was introduced.
This macro is nearly identical to the ZFS_LINUX_TRY_COMPILE macro,
except it has support for a fifth option that is then passed as
the sixth parameter to ZFS_LINUX_COMPILE_IFELSE.
These autoconf changes were needed to test the availability of the Linux
tracepoint macros. Due to the odd nature of the Linux tracepoint macro
API, a separate ".h" must be created (the path and filename is used
internally by the kernel's define_trace.h file).
* The HAVE_DECLARE_EVENT_CLASS autoconf macro was introduced. This
is to determine if we can safely enable the Linux tracepoint
functionality. We need to selectively disable the tracepoint code
due to the kernel exporting certain functions as GPL only. Without
this check, the build process will fail at link time.
In addition, the SET_ERROR macro was modified into a tracepoint as well.
To do this, the 'sdt.h' file was moved into the 'include/sys' directory
and now contains a userspace portion and a kernel space portion. The
dprintf and zfs_dbgmsg* interfaces are now implemented as tracepoint as
well.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2014-06-13 21:54:48 +04:00
|
|
|
MODULE_PARM_DESC(zfs_flags, "Set additional debugging flags");
|
|
|
|
|
|
|
|
module_param(zfs_recover, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_recover, "Set to attempt to recover from fatal errors");
|
|
|
|
|
|
|
|
module_param(zfs_free_leak_on_eio, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_free_leak_on_eio,
|
|
|
|
"Set to ignore IO errors during free and permanently leak the space");
|
|
|
|
|
2018-05-09 07:45:47 +03:00
|
|
|
module_param_call(zfs_deadman_synctime_ms, param_set_deadman_synctime,
|
|
|
|
param_get_ulong, &zfs_deadman_synctime_ms, 0644);
|
2017-12-19 01:06:07 +03:00
|
|
|
MODULE_PARM_DESC(zfs_deadman_synctime_ms,
|
|
|
|
"Pool sync expiration time in milliseconds");
|
|
|
|
|
2018-05-09 07:45:47 +03:00
|
|
|
module_param_call(zfs_deadman_ziotime_ms, param_set_deadman_ziotime,
|
|
|
|
param_get_ulong, &zfs_deadman_ziotime_ms, 0644);
|
2017-12-19 01:06:07 +03:00
|
|
|
MODULE_PARM_DESC(zfs_deadman_ziotime_ms,
|
|
|
|
"IO expiration time in milliseconds");
|
2013-04-30 02:49:23 +04:00
|
|
|
|
2017-02-01 01:19:08 +03:00
|
|
|
module_param(zfs_deadman_checktime_ms, ulong, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_deadman_checktime_ms,
|
|
|
|
"Dead I/O check interval in milliseconds");
|
|
|
|
|
2013-04-30 02:49:23 +04:00
|
|
|
module_param(zfs_deadman_enabled, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_deadman_enabled, "Enable deadman timer");
|
Illumos #4045 write throttle & i/o scheduler performance work
4045 zfs write throttle & i/o scheduler performance work
1. The ZFS i/o scheduler (vdev_queue.c) now divides i/os into 5 classes: sync
read, sync write, async read, async write, and scrub/resilver. The scheduler
issues a number of concurrent i/os from each class to the device. Once a class
has been selected, an i/o is selected from this class using either an elevator
algorithm (async, scrub classes) or FIFO (sync classes). The number of
concurrent async write i/os is tuned dynamically based on i/o load, to achieve
good sync i/o latency when there is not a high load of writes, and good write
throughput when there is. See the block comment in vdev_queue.c (reproduced
below) for more details.
2. The write throttle (dsl_pool_tempreserve_space() and
txg_constrain_throughput()) is rewritten to produce much more consistent delays
when under constant load. The new write throttle is based on the amount of
dirty data, rather than guesses about future performance of the system. When
there is a lot of dirty data, each transaction (e.g. write() syscall) will be
delayed by the same small amount. This eliminates the "brick wall of wait"
that the old write throttle could hit, causing all transactions to wait several
seconds until the next txg opens. One of the keys to the new write throttle is
decrementing the amount of dirty data as i/o completes, rather than at the end
of spa_sync(). Note that the write throttle is only applied once the i/o
scheduler is issuing the maximum number of outstanding async writes. See the
block comments in dsl_pool.c and above dmu_tx_delay() (reproduced below) for
more details.
This diff has several other effects, including:
* the commonly-tuned global variable zfs_vdev_max_pending has been removed;
use per-class zfs_vdev_*_max_active values or zfs_vdev_max_active instead.
* the size of each txg (meaning the amount of dirty data written, and thus the
time it takes to write out) is now controlled differently. There is no longer
an explicit time goal; the primary determinant is amount of dirty data.
Systems that are under light or medium load will now often see that a txg is
always syncing, but the impact to performance (e.g. read latency) is minimal.
Tune zfs_dirty_data_max and zfs_dirty_data_sync to control this.
* zio_taskq_batch_pct = 75 -- Only use 75% of all CPUs for compression,
checksum, etc. This improves latency by not allowing these CPU-intensive tasks
to consume all CPU (on machines with at least 4 CPU's; the percentage is
rounded up).
--matt
APPENDIX: problems with the current i/o scheduler
The current ZFS i/o scheduler (vdev_queue.c) is deadline based. The problem
with this is that if there are always i/os pending, then certain classes of
i/os can see very long delays.
For example, if there are always synchronous reads outstanding, then no async
writes will be serviced until they become "past due". One symptom of this
situation is that each pass of the txg sync takes at least several seconds
(typically 3 seconds).
If many i/os become "past due" (their deadline is in the past), then we must
service all of these overdue i/os before any new i/os. This happens when we
enqueue a batch of async writes for the txg sync, with deadlines 2.5 seconds in
the future. If we can't complete all the i/os in 2.5 seconds (e.g. because
there were always reads pending), then these i/os will become past due. Now we
must service all the "async" writes (which could be hundreds of megabytes)
before we service any reads, introducing considerable latency to synchronous
i/os (reads or ZIL writes).
Notes on porting to ZFS on Linux:
- zio_t gained new members io_physdone and io_phys_children. Because
object caches in the Linux port call the constructor only once at
allocation time, objects may contain residual data when retrieved
from the cache. Therefore zio_create() was updated to zero out the two
new fields.
- vdev_mirror_pending() relied on the depth of the per-vdev pending queue
(vq->vq_pending_tree) to select the least-busy leaf vdev to read from.
This tree has been replaced by vq->vq_active_tree which is now used
for the same purpose.
- vdev_queue_init() used the value of zfs_vdev_max_pending to determine
the number of vdev I/O buffers to pre-allocate. That global no longer
exists, so we instead use the sum of the *_max_active values for each of
the five I/O classes described above.
- The Illumos implementation of dmu_tx_delay() delays a transaction by
sleeping in a condition variable embedded in the thread
(curthread->t_delay_cv). We do not have an equivalent CV to use in
Linux, so this change replaced the delay logic with a wrapper called
zfs_sleep_until(). This wrapper could be adopted upstream and in other
downstream ports to abstract away operating system-specific delay logic.
- These tunables are added as module parameters, and descriptions added
to the zfs-module-parameters.5 man page.
spa_asize_inflation
zfs_deadman_synctime_ms
zfs_vdev_max_active
zfs_vdev_async_write_active_min_dirty_percent
zfs_vdev_async_write_active_max_dirty_percent
zfs_vdev_async_read_max_active
zfs_vdev_async_read_min_active
zfs_vdev_async_write_max_active
zfs_vdev_async_write_min_active
zfs_vdev_scrub_max_active
zfs_vdev_scrub_min_active
zfs_vdev_sync_read_max_active
zfs_vdev_sync_read_min_active
zfs_vdev_sync_write_max_active
zfs_vdev_sync_write_min_active
zfs_dirty_data_max_percent
zfs_delay_min_dirty_percent
zfs_dirty_data_max_max_percent
zfs_dirty_data_max
zfs_dirty_data_max_max
zfs_dirty_data_sync
zfs_delay_scale
The latter four have type unsigned long, whereas they are uint64_t in
Illumos. This accommodates Linux's module_param() supported types, but
means they may overflow on 32-bit architectures.
The values zfs_dirty_data_max and zfs_dirty_data_max_max are the most
likely to overflow on 32-bit systems, since they express physical RAM
sizes in bytes. In fact, Illumos initializes zfs_dirty_data_max_max to
2^32 which does overflow. To resolve that, this port instead initializes
it in arc_init() to 25% of physical RAM, and adds the tunable
zfs_dirty_data_max_max_percent to override that percentage. While this
solution doesn't completely avoid the overflow issue, it should be a
reasonable default for most systems, and the minority of affected
systems can work around the issue by overriding the defaults.
- Fixed reversed logic in comment above zfs_delay_scale declaration.
- Clarified comments in vdev_queue.c regarding when per-queue minimums take
effect.
- Replaced dmu_tx_write_limit in the dmu_tx kstat file
with dmu_tx_dirty_delay and dmu_tx_dirty_over_max. The first counts
how many times a transaction has been delayed because the pool dirty
data has exceeded zfs_delay_min_dirty_percent. The latter counts how
many times the pool dirty data has exceeded zfs_dirty_data_max (which
we expect to never happen).
- The original patch would have regressed the bug fixed in
zfsonlinux/zfs@c418410, which prevented users from setting the
zfs_vdev_aggregation_limit tuning larger than SPA_MAXBLOCKSIZE.
A similar fix is added to vdev_queue_aggregate().
- In vdev_queue_io_to_issue(), dynamically allocate 'zio_t search' on the
heap instead of the stack. In Linux we can't afford such large
structures on the stack.
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Christopher Siden <christopher.siden@delphix.com>
Reviewed by: Ned Bass <bass6@llnl.gov>
Reviewed by: Brendan Gregg <brendan.gregg@joyent.com>
Approved by: Robert Mustacchi <rm@joyent.com>
References:
http://www.illumos.org/issues/4045
illumos/illumos-gate@69962b5647e4a8b9b14998733b765925381b727e
Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1913
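The write-throttle behavior described above (no delay while dirty data is low, then a per-transaction delay that grows as dirty data approaches zfs_dirty_data_max) can be sketched as a simple curve. This is one plausible reading of the description, not the in-tree dmu_tx_delay() formula; the function and parameter names here are placeholders.

/* Illustrative sketch of the dirty-data-based delay curve; names are hypothetical. */
#include <stdint.h>

uint64_t
tx_delay_sketch(uint64_t dirty, uint64_t dirty_max,
    uint64_t delay_min_pct, uint64_t delay_scale)
{
	uint64_t min_dirty = dirty_max * delay_min_pct / 100;

	if (dirty <= min_dirty)		/* light load: no delay at all */
		return (0);
	if (dirty >= dirty_max)		/* saturated: caller must back off */
		return (UINT64_MAX);

	/* grows smoothly and steeply as dirty approaches dirty_max */
	return ((dirty - min_dirty) * delay_scale / (dirty_max - dirty));
}

The key property, matching the commit text, is that every transaction sees the same small delay for a given amount of dirty data, instead of the old all-or-nothing wait for the next txg.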
module_param_call(zfs_deadman_failmode, param_set_deadman_failmode,
    param_get_charp, &zfs_deadman_failmode, 0644);
MODULE_PARM_DESC(zfs_deadman_failmode, "Failmode for deadman timer");
module_param(spa_asize_inflation, int, 0644);
MODULE_PARM_DESC(spa_asize_inflation,
	"SPA size estimate multiplication factor");
|
|
|
module_param_call(spa_slop_shift, param_set_slop_shift, param_get_int,
|
|
|
|
&spa_slop_shift, 0644);
|
2015-09-01 19:45:10 +03:00
|
|
|
MODULE_PARM_DESC(spa_slop_shift, "Reserved free space in pool");
|
2018-09-06 04:33:36 +03:00
|
|
|
|
|
|
|
module_param(zfs_ddt_data_is_special, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_ddt_data_is_special,
|
|
|
|
"Place DDT data into the special class");
|
|
|
|
|
|
|
|
module_param(zfs_user_indirect_is_special, int, 0644);
|
|
|
|
MODULE_PARM_DESC(zfs_user_indirect_is_special,
|
|
|
|
"Place user data indirect blocks into the special class");
|
2016-12-12 21:46:26 +03:00
|
|
|
/* END CSTYLED */
|
2010-08-26 22:49:16 +04:00
|
|
|
#endif
|