Log Spacemap Project

= Motivation

At Delphix we've seen a lot of customer systems where fragmentation
is over 75% and random writes take a performance hit because a lot
of time is spent on I/Os that update on-disk space accounting metadata.
Specifically, we've seen cases where 20% to 40% of sync time is spent
after sync pass 1, and ~30% of the I/Os on the system are spent updating
spacemaps.

The problem is that these pools have existed long enough that we've
touched almost every metaslab at least once, and random writes
scatter frees across all metaslabs every TXG, thus appending to
their spacemaps and resulting in many I/Os. To give an example,
assuming that every VDEV has 200 metaslabs and our writes fit within
a single spacemap block (generally 4K), we have 200 I/Os. Then if we
assume 2 levels of indirection, we need 400 additional I/Os, and
since we are talking about metadata, for which we keep 2 extra copies
for redundancy, we need to triple that number, leading to a total of
1800 I/Os per VDEV every TXG.
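
A minimal sketch of that arithmetic (the metaslab count, indirection
depth, and copy count are the illustrative values from the paragraph
above, not actual tunables):

    #include <stdio.h>

    int
    main(void)
    {
    	int metaslabs = 200;	/* one spacemap block appended per metaslab */
    	int indirect = 2;	/* indirect block levels updated per append */
    	int copies = 3;		/* space accounting metadata kept in triplicate */

    	/* 200 appends, each touching 1 data + 2 indirect blocks, x3 copies */
    	(void) printf("%d I/Os per vdev per TXG\n",
    	    metaslabs * (1 + indirect) * copies);	/* prints 1800 */
    	return (0);
    }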

We could try to decrease the number of metaslabs, so we have fewer
I/Os per TXG, but then each metaslab would cover a wider range on
disk and thus would take more time to be loaded into memory from
disk. In addition, once loaded, its range tree would consume more
memory.

Another idea would be to just increase the spacemap block size,
which would allow us to fit more entries within an I/O block,
resulting in fewer I/Os per metaslab and a speedup in loading time.
The problem remains, though, that the number of I/Os still grows
with the number of metaslabs; and since we generally write a lot to
a few metaslabs and only a little to the rest of them, a bigger
block size would go largely underutilized and thus waste bandwidth.

= About this patch

This patch introduces the Log Spacemap project, which provides the
solution to the above problem while taking into account all the
aforementioned tradeoffs. The details on how it achieves that can
be found in the references section below and in the code (see the
Big Theory Statement in spa_log_spacemap.c).

Even though the change is fairly well contained within the metaslab
and lower-level SPA codepaths, there is one side-change that is
user-facing: vdev IDs of vdev holes are no longer reused. To give
some background and reasoning for this, when a log device is removed
and its vdev structure is replaced with a hole (or compacted away,
if it is at the end of the vdev array), its vdev_id could previously
be reused by devices added later. Now that the pool-wide log space
maps record the vdev ID, this behavior can cause problems (e.g. does
this entry refer to a segment in the new vdev or in the removed
log?). Thus, to simplify things, the ID-reuse behavior is gone and
vdev IDs of top-level vdevs are now truly unique within a pool.
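
To make the ambiguity concrete, here is a hedged sketch of what
consuming a pool-wide log entry looks like; the lookup pattern is the
one used by the zdb callbacks in the diff below, while
replay_entry_sketch itself is a hypothetical name for illustration:

    /*
     * A log space map entry records only a vdev ID, an offset, and a
     * run length. If the ID of a removed log device were recycled for
     * a later top-level vdev, this lookup could not tell which device
     * the segment belongs to.
     */
    static void
    replay_entry_sketch(spa_t *spa, space_map_entry_t *sme)
    {
    	vdev_t *vd = vdev_lookup_top(spa, sme->sme_vdev);
    	metaslab_t *ms = vd->vdev_ms[sme->sme_offset >> vd->vdev_ms_shift];

    	/* ... apply the ALLOC or FREE of sme->sme_run bytes to ms ... */
    }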

= Testing

The illumos implementation of this feature has been used internally
for a year and has been in production for ~6 months. For this patch
specifically, there don't seem to be any regressions introduced to
ZTS, and I have been running zloop for a week without any related
problems.

= Performance Analysis (Linux Specific)

All performance results and analysis for illumos can be found in
the links in the references section. Redoing the same experiments
on Linux gave similar results. Below are the specifics of the
Linux run.

After the pool reached a stable state, the percentage of time
spent in sync pass 1 per TXG was 64% on average for the stock bits,
while the log spacemap bits stayed at 95% throughout the experiment
(graph: sdimitro.github.io/img/linux-lsm/PercOfSyncInPassOne.png).

Sync times per TXG were 37.6 seconds on average for the stock
bits and 22.7 seconds for the log spacemap bits (related graph:
sdimitro.github.io/img/linux-lsm/SyncTimePerTXG.png). As a result,
the log spacemap bits were able to push more TXGs, which is also
why all graphs quantified per TXG have more entries for the log
spacemap bits.

Another interesting aspect of TXG syncs is that the stock bits had
22% of their TXGs reach sync pass 7, 55% reach sync pass 8, and 20%
reach sync pass 9, while the log spacemap bits reached sync pass 4
in 79% of their TXGs, sync pass 7 in 19%, and sync pass 8 in 1%.
This emphasizes the fact that not only do we spend less time on
metadata, but we also need fewer passes for spa_sync() to converge
when dirtying objects.
[related graphs:
stock- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGStock.png
lsm- sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGLSM.png]

Finally, the improvement in IOPS that userland gains from the
change is approximately 40%. There is a consistent win in IOPS, as
you can see from the graphs below, but the absolute amount of
improvement that the log spacemap gives varies within each minute
interval.
sdimitro.github.io/img/linux-lsm/StockVsLog3Days.png
sdimitro.github.io/img/linux-lsm/StockVsLog10Hours.png

= Porting to Other Platforms

For people who want to port this commit to other platforms, below
is a list of ZoL commits that this patch depends on:

Make zdb results for checkpoint tests consistent
db587941c5

Update vdev_is_spacemap_addressable() for new spacemap encoding
419ba59145

Simplify spa_sync by breaking it up to smaller functions
8dc2197b7b

Factor metaslab_load_wait() in metaslab_load()
b194fab0fb

Rename range_tree_verify to range_tree_verify_not_present
df72b8bebe

Change target size of metaslabs from 256GB to 16GB
c853f382db

zdb -L should skip leak detection altogether
21e7cf5da8

vs_alloc can underflow in L2ARC vdevs
7558997d2f

Simplify log vdev removal code
6c926f426a

Get rid of space_map_update() for ms_synced_length
425d3237ee

Introduce auxiliary metaslab histograms
928e8ad47d

Error path in metaslab_load_impl() forgets to drop ms_sync_lock
8eef997679

= References

Background, Motivation, and Internals of the Feature
- OpenZFS 2017 Presentation:
youtu.be/jj2IxRkl5bQ
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemaps-project

Flushing Algorithm Internals & Performance Results
(Illumos Specific)
- Blogpost:
sdimitro.github.io/post/zfs-lsm-flushing/
- OpenZFS 2018 Presentation:
youtu.be/x6D2dHRjkxw
- Slides:
slideshare.net/SerapheimNikolaosDim/zfs-log-spacemap-flushing-algorithm

Upstream Delphix Issues:
DLPX-51539, DLPX-59659, DLPX-57783, DLPX-61438, DLPX-41227, DLPX-59320
DLPX-63385

Reviewed-by: Sean Eric Fagan <sef@ixsystems.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #8442
Commit: 93e28d661e
Parent: df834a7ccc
Author: Serapheim Dimitropoulos, 2019-07-16 10:11:49 -07:00
Committed by: Brian Behlendorf
41 changed files with 3196 additions and 333 deletions


@@ -812,6 +812,12 @@ get_checkpoint_refcount(vdev_t *vd)
 	return (refcount);
 }
 
+static int
+get_log_spacemap_refcount(spa_t *spa)
+{
+	return (avl_numnodes(&spa->spa_sm_logs_by_txg));
+}
+
 static int
 verify_spacemap_refcounts(spa_t *spa)
 {
@@ -826,6 +832,7 @@ verify_spacemap_refcounts(spa_t *spa)
 	actual_refcount += get_obsolete_refcount(spa->spa_root_vdev);
 	actual_refcount += get_prev_obsolete_spacemap_refcount(spa);
 	actual_refcount += get_checkpoint_refcount(spa->spa_root_vdev);
+	actual_refcount += get_log_spacemap_refcount(spa);
 
 	if (expected_refcount != actual_refcount) {
 		(void) printf("space map refcount mismatch: expected %lld != "
@@ -924,7 +931,7 @@ dump_spacemap(objset_t *os, space_map_t *sm)
 			alloc -= entry_run;
 		entry_id++;
 	}
-	if ((uint64_t)alloc != space_map_allocated(sm)) {
+	if (alloc != space_map_allocated(sm)) {
 		(void) printf("space_map_object alloc (%lld) INCONSISTENT "
 		    "with space map summary (%lld)\n",
 		    (longlong_t)space_map_allocated(sm), (longlong_t)alloc);
@@ -990,23 +997,45 @@ dump_metaslab(metaslab_t *msp)
 		ASSERT(msp->ms_size == (1ULL << vd->vdev_ms_shift));
 		dump_spacemap(spa->spa_meta_objset, msp->ms_sm);
+
+		if (spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP)) {
+			(void) printf("\tFlush data:\n\tunflushed txg=%llu\n\n",
+			    (u_longlong_t)metaslab_unflushed_txg(msp));
+		}
 	}
 }
 
 static void
 print_vdev_metaslab_header(vdev_t *vd)
 {
 	vdev_alloc_bias_t alloc_bias = vd->vdev_alloc_bias;
-	const char *bias_str;
+	const char *bias_str = "";
+	if (alloc_bias == VDEV_BIAS_LOG || vd->vdev_islog) {
+		bias_str = VDEV_ALLOC_BIAS_LOG;
+	} else if (alloc_bias == VDEV_BIAS_SPECIAL) {
+		bias_str = VDEV_ALLOC_BIAS_SPECIAL;
+	} else if (alloc_bias == VDEV_BIAS_DEDUP) {
+		bias_str = VDEV_ALLOC_BIAS_DEDUP;
+	}
 
-	bias_str = (alloc_bias == VDEV_BIAS_LOG || vd->vdev_islog) ?
-	    VDEV_ALLOC_BIAS_LOG :
-	    (alloc_bias == VDEV_BIAS_SPECIAL) ? VDEV_ALLOC_BIAS_SPECIAL :
-	    (alloc_bias == VDEV_BIAS_DEDUP) ? VDEV_ALLOC_BIAS_DEDUP :
-	    vd->vdev_islog ? "log" : "";
+	uint64_t ms_flush_data_obj = 0;
+	if (vd->vdev_top_zap != 0) {
+		int error = zap_lookup(spa_meta_objset(vd->vdev_spa),
+		    vd->vdev_top_zap, VDEV_TOP_ZAP_MS_UNFLUSHED_PHYS_TXGS,
+		    sizeof (uint64_t), 1, &ms_flush_data_obj);
+		if (error != ENOENT) {
+			ASSERT0(error);
+		}
+	}
 
-	(void) printf("\tvdev %10llu %s\n"
-	    "\t%-10s%5llu %-19s %-15s %-12s\n",
-	    (u_longlong_t)vd->vdev_id, bias_str,
+	(void) printf("\tvdev %10llu %s",
+	    (u_longlong_t)vd->vdev_id, bias_str);
+
+	if (ms_flush_data_obj != 0) {
+		(void) printf(" ms_unflushed_phys object %llu",
+		    (u_longlong_t)ms_flush_data_obj);
+	}
+
+	(void) printf("\n\t%-10s%5llu %-19s %-15s %-12s\n",
 	    "metaslabs", (u_longlong_t)vd->vdev_ms_count,
 	    "offset", "spacemap", "free");
 	(void) printf("\t%15s %19s %15s %12s\n",
@@ -1172,6 +1201,24 @@ dump_metaslabs(spa_t *spa)
 	}
 }
 
+static void
+dump_log_spacemaps(spa_t *spa)
+{
+	(void) printf("\nLog Space Maps in Pool:\n");
+	for (spa_log_sm_t *sls = avl_first(&spa->spa_sm_logs_by_txg);
+	    sls; sls = AVL_NEXT(&spa->spa_sm_logs_by_txg, sls)) {
+		space_map_t *sm = NULL;
+		VERIFY0(space_map_open(&sm, spa_meta_objset(spa),
+		    sls->sls_sm_obj, 0, UINT64_MAX, SPA_MINBLOCKSHIFT));
+
+		(void) printf("Log Spacemap object %llu txg %llu\n",
+		    (u_longlong_t)sls->sls_sm_obj, (u_longlong_t)sls->sls_txg);
+		dump_spacemap(spa->spa_meta_objset, sm);
+		space_map_close(sm);
+	}
+	(void) printf("\n");
+}
+
 static void
 dump_dde(const ddt_t *ddt, const ddt_entry_t *dde, uint64_t index)
 {
@@ -3782,6 +3829,84 @@ static metaslab_ops_t zdb_metaslab_ops = {
 	NULL	/* alloc */
 };
 
+typedef int (*zdb_log_sm_cb_t)(spa_t *spa, space_map_entry_t *sme,
+    uint64_t txg, void *arg);
+
+typedef struct unflushed_iter_cb_arg {
+	spa_t *uic_spa;
+	uint64_t uic_txg;
+	void *uic_arg;
+	zdb_log_sm_cb_t uic_cb;
+} unflushed_iter_cb_arg_t;
+
+static int
+iterate_through_spacemap_logs_cb(space_map_entry_t *sme, void *arg)
+{
+	unflushed_iter_cb_arg_t *uic = arg;
+	return (uic->uic_cb(uic->uic_spa, sme, uic->uic_txg, uic->uic_arg));
+}
+
+static void
+iterate_through_spacemap_logs(spa_t *spa, zdb_log_sm_cb_t cb, void *arg)
+{
+	if (!spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP))
+		return;
+
+	spa_config_enter(spa, SCL_CONFIG, FTAG, RW_READER);
+	for (spa_log_sm_t *sls = avl_first(&spa->spa_sm_logs_by_txg);
+	    sls; sls = AVL_NEXT(&spa->spa_sm_logs_by_txg, sls)) {
+		space_map_t *sm = NULL;
+		VERIFY0(space_map_open(&sm, spa_meta_objset(spa),
+		    sls->sls_sm_obj, 0, UINT64_MAX, SPA_MINBLOCKSHIFT));
+
+		unflushed_iter_cb_arg_t uic = {
+			.uic_spa = spa,
+			.uic_txg = sls->sls_txg,
+			.uic_arg = arg,
+			.uic_cb = cb
+		};
+		VERIFY0(space_map_iterate(sm, space_map_length(sm),
+		    iterate_through_spacemap_logs_cb, &uic));
+		space_map_close(sm);
+	}
+	spa_config_exit(spa, SCL_CONFIG, FTAG);
+}
+
+/* ARGSUSED */
+static int
+load_unflushed_svr_segs_cb(spa_t *spa, space_map_entry_t *sme,
+    uint64_t txg, void *arg)
+{
+	spa_vdev_removal_t *svr = arg;
+	uint64_t offset = sme->sme_offset;
+	uint64_t size = sme->sme_run;
+
+	/* skip vdevs we don't care about */
+	if (sme->sme_vdev != svr->svr_vdev_id)
+		return (0);
+
+	vdev_t *vd = vdev_lookup_top(spa, sme->sme_vdev);
+	metaslab_t *ms = vd->vdev_ms[offset >> vd->vdev_ms_shift];
+	ASSERT(sme->sme_type == SM_ALLOC || sme->sme_type == SM_FREE);
+
+	if (txg < metaslab_unflushed_txg(ms))
+		return (0);
+
+	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
+	ASSERT(vim != NULL);
+	if (offset >= vdev_indirect_mapping_max_offset(vim))
+		return (0);
+
+	if (sme->sme_type == SM_ALLOC)
+		range_tree_add(svr->svr_allocd_segs, offset, size);
+	else
+		range_tree_remove(svr->svr_allocd_segs, offset, size);
+
+	return (0);
+}
+
 /* ARGSUSED */
 static void
 claim_segment_impl_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset,
@@ -3830,36 +3955,35 @@ zdb_claim_removing(spa_t *spa, zdb_cb_t *zcb)
 	vdev_t *vd = vdev_lookup_top(spa, svr->svr_vdev_id);
 	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
 
+	ASSERT0(range_tree_space(svr->svr_allocd_segs));
+
+	range_tree_t *allocs = range_tree_create(NULL, NULL);
 	for (uint64_t msi = 0; msi < vd->vdev_ms_count; msi++) {
 		metaslab_t *msp = vd->vdev_ms[msi];
 
 		if (msp->ms_start >= vdev_indirect_mapping_max_offset(vim))
 			break;
 
-		ASSERT0(range_tree_space(svr->svr_allocd_segs));
-
-		if (msp->ms_sm != NULL) {
-			VERIFY0(space_map_load(msp->ms_sm,
-			    svr->svr_allocd_segs, SM_ALLOC));
-
-			/*
-			 * Clear everything past what has been synced unless
-			 * it's past the spacemap, because we have not allocated
-			 * mappings for it yet.
-			 */
-			uint64_t vim_max_offset =
-			    vdev_indirect_mapping_max_offset(vim);
-			uint64_t sm_end = msp->ms_sm->sm_start +
-			    msp->ms_sm->sm_size;
-			if (sm_end > vim_max_offset)
-				range_tree_clear(svr->svr_allocd_segs,
-				    vim_max_offset, sm_end - vim_max_offset);
-		}
-
-		zcb->zcb_removing_size +=
-		    range_tree_space(svr->svr_allocd_segs);
-		range_tree_vacate(svr->svr_allocd_segs, claim_segment_cb, vd);
+		ASSERT0(range_tree_space(allocs));
+		if (msp->ms_sm != NULL)
+			VERIFY0(space_map_load(msp->ms_sm, allocs, SM_ALLOC));
+		range_tree_vacate(allocs, range_tree_add, svr->svr_allocd_segs);
 	}
+	range_tree_destroy(allocs);
+
+	iterate_through_spacemap_logs(spa, load_unflushed_svr_segs_cb, svr);
+
+	/*
+	 * Clear everything past what has been synced,
+	 * because we have not allocated mappings for
+	 * it yet.
+	 */
+	range_tree_clear(svr->svr_allocd_segs,
+	    vdev_indirect_mapping_max_offset(vim),
+	    vd->vdev_asize - vdev_indirect_mapping_max_offset(vim));
+
+	zcb->zcb_removing_size += range_tree_space(svr->svr_allocd_segs);
+	range_tree_vacate(svr->svr_allocd_segs, claim_segment_cb, vd);
 
 	spa_config_exit(spa, SCL_CONFIG, FTAG);
 }
@@ -4070,6 +4194,82 @@ zdb_leak_init_exclude_checkpoint(spa_t *spa, zdb_cb_t *zcb)
 	}
 }
 
+static int
+count_unflushed_space_cb(spa_t *spa, space_map_entry_t *sme,
+    uint64_t txg, void *arg)
+{
+	int64_t *ualloc_space = arg;
+	uint64_t offset = sme->sme_offset;
+	uint64_t vdev_id = sme->sme_vdev;
+
+	vdev_t *vd = vdev_lookup_top(spa, vdev_id);
+	if (!vdev_is_concrete(vd))
+		return (0);
+
+	metaslab_t *ms = vd->vdev_ms[offset >> vd->vdev_ms_shift];
+	ASSERT(sme->sme_type == SM_ALLOC || sme->sme_type == SM_FREE);
+
+	if (txg < metaslab_unflushed_txg(ms))
+		return (0);
+
+	if (sme->sme_type == SM_ALLOC)
+		*ualloc_space += sme->sme_run;
+	else
+		*ualloc_space -= sme->sme_run;
+
+	return (0);
+}
+
+static int64_t
+get_unflushed_alloc_space(spa_t *spa)
+{
+	if (dump_opt['L'])
+		return (0);
+
+	int64_t ualloc_space = 0;
+	iterate_through_spacemap_logs(spa, count_unflushed_space_cb,
+	    &ualloc_space);
+	return (ualloc_space);
+}
+
+static int
+load_unflushed_cb(spa_t *spa, space_map_entry_t *sme, uint64_t txg, void *arg)
+{
+	maptype_t *uic_maptype = arg;
+	uint64_t offset = sme->sme_offset;
+	uint64_t size = sme->sme_run;
+	uint64_t vdev_id = sme->sme_vdev;
+
+	vdev_t *vd = vdev_lookup_top(spa, vdev_id);
+
+	/* skip indirect vdevs */
+	if (!vdev_is_concrete(vd))
+		return (0);
+
+	metaslab_t *ms = vd->vdev_ms[offset >> vd->vdev_ms_shift];
+
+	ASSERT(sme->sme_type == SM_ALLOC || sme->sme_type == SM_FREE);
+	ASSERT(*uic_maptype == SM_ALLOC || *uic_maptype == SM_FREE);
+
+	if (txg < metaslab_unflushed_txg(ms))
+		return (0);
+
+	if (*uic_maptype == sme->sme_type)
+		range_tree_add(ms->ms_allocatable, offset, size);
+	else
+		range_tree_remove(ms->ms_allocatable, offset, size);
+
+	return (0);
+}
+
+static void
+load_unflushed_to_ms_allocatables(spa_t *spa, maptype_t maptype)
+{
+	iterate_through_spacemap_logs(spa, load_unflushed_cb, &maptype);
+}
+
 static void
 load_concrete_ms_allocatable_trees(spa_t *spa, maptype_t maptype)
 {
@@ -4093,7 +4293,7 @@ load_concrete_ms_allocatable_trees(spa_t *spa, maptype_t maptype)
 			    (longlong_t)vd->vdev_ms_count);
 
 			mutex_enter(&msp->ms_lock);
-			metaslab_unload(msp);
+			range_tree_vacate(msp->ms_allocatable, NULL, NULL);
 
 			/*
 			 * We don't want to spend the CPU manipulating the
@@ -4110,6 +4310,8 @@ load_concrete_ms_allocatable_trees(spa_t *spa, maptype_t maptype)
 			mutex_exit(&msp->ms_lock);
 		}
 	}
+
+	load_unflushed_to_ms_allocatables(spa, maptype);
 }
 
 /*
@@ -4124,7 +4326,7 @@ load_indirect_ms_allocatable_tree(vdev_t *vd, metaslab_t *msp,
 	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
 
 	mutex_enter(&msp->ms_lock);
-	metaslab_unload(msp);
+	range_tree_vacate(msp->ms_allocatable, NULL, NULL);
 
 	/*
 	 * We don't want to spend the CPU manipulating the
@@ -4383,7 +4585,6 @@ zdb_leak_fini(spa_t *spa, zdb_cb_t *zcb)
 				range_tree_vacate(msp->ms_allocatable,
 				    zdb_leak, vd);
 			}
-
 			if (msp->ms_loaded) {
 				msp->ms_loaded = B_FALSE;
 			}
@@ -4520,7 +4721,8 @@ dump_block_stats(spa_t *spa)
 	total_alloc = norm_alloc +
 	    metaslab_class_get_alloc(spa_log_class(spa)) +
 	    metaslab_class_get_alloc(spa_special_class(spa)) +
-	    metaslab_class_get_alloc(spa_dedup_class(spa));
+	    metaslab_class_get_alloc(spa_dedup_class(spa)) +
+	    get_unflushed_alloc_space(spa);
 	total_found = tzb->zb_asize - zcb.zcb_dedup_asize +
 	    zcb.zcb_removing_size + zcb.zcb_checkpoint_size;
 
@@ -5392,12 +5594,25 @@ mos_obj_refd_multiple(uint64_t obj)
 	range_tree_add(mos_refd_objs, obj, 1);
 }
 
+static void
+mos_leak_vdev_top_zap(vdev_t *vd)
+{
+	uint64_t ms_flush_data_obj;
+	int error = zap_lookup(spa_meta_objset(vd->vdev_spa),
+	    vd->vdev_top_zap, VDEV_TOP_ZAP_MS_UNFLUSHED_PHYS_TXGS,
+	    sizeof (ms_flush_data_obj), 1, &ms_flush_data_obj);
+	if (error == ENOENT)
+		return;
+	ASSERT0(error);
+
+	mos_obj_refd(ms_flush_data_obj);
+}
+
 static void
 mos_leak_vdev(vdev_t *vd)
 {
 	mos_obj_refd(vd->vdev_dtl_object);
 	mos_obj_refd(vd->vdev_ms_array);
-	mos_obj_refd(vd->vdev_top_zap);
 	mos_obj_refd(vd->vdev_indirect_config.vic_births_object);
 	mos_obj_refd(vd->vdev_indirect_config.vic_mapping_object);
 	mos_obj_refd(vd->vdev_leaf_zap);
@@ -5415,11 +5630,33 @@ mos_leak_vdev(vdev_t *vd)
 		mos_obj_refd(space_map_object(ms->ms_sm));
 	}
 
+	if (vd->vdev_top_zap != 0) {
+		mos_obj_refd(vd->vdev_top_zap);
+		mos_leak_vdev_top_zap(vd);
+	}
+
 	for (uint64_t c = 0; c < vd->vdev_children; c++) {
 		mos_leak_vdev(vd->vdev_child[c]);
 	}
 }
 
+static void
+mos_leak_log_spacemaps(spa_t *spa)
+{
+	uint64_t spacemap_zap;
+	int error = zap_lookup(spa_meta_objset(spa),
+	    DMU_POOL_DIRECTORY_OBJECT, DMU_POOL_LOG_SPACEMAP_ZAP,
+	    sizeof (spacemap_zap), 1, &spacemap_zap);
+	if (error == ENOENT)
+		return;
+	ASSERT0(error);
+
+	mos_obj_refd(spacemap_zap);
+	for (spa_log_sm_t *sls = avl_first(&spa->spa_sm_logs_by_txg);
+	    sls; sls = AVL_NEXT(&spa->spa_sm_logs_by_txg, sls))
+		mos_obj_refd(sls->sls_sm_obj);
+}
+
 static int
 dump_mos_leaks(spa_t *spa)
 {
@@ -5451,6 +5688,10 @@ dump_mos_leaks(spa_t *spa)
 	mos_obj_refd(spa->spa_l2cache.sav_object);
 	mos_obj_refd(spa->spa_spares.sav_object);
 
+	if (spa->spa_syncing_log_sm != NULL)
+		mos_obj_refd(spa->spa_syncing_log_sm->sm_object);
+	mos_leak_log_spacemaps(spa);
+
 	mos_obj_refd(spa->spa_condensing_indirect_phys.
 	    scip_next_mapping_object);
 	mos_obj_refd(spa->spa_condensing_indirect_phys.
@@ -5528,6 +5769,79 @@ dump_mos_leaks(spa_t *spa)
 	return (rv);
 }
 
+typedef struct log_sm_obsolete_stats_arg {
+	uint64_t lsos_current_txg;
+
+	uint64_t lsos_total_entries;
+	uint64_t lsos_valid_entries;
+
+	uint64_t lsos_sm_entries;
+	uint64_t lsos_valid_sm_entries;
+} log_sm_obsolete_stats_arg_t;
+
+static int
+log_spacemap_obsolete_stats_cb(spa_t *spa, space_map_entry_t *sme,
+    uint64_t txg, void *arg)
+{
+	log_sm_obsolete_stats_arg_t *lsos = arg;
+	uint64_t offset = sme->sme_offset;
+	uint64_t vdev_id = sme->sme_vdev;
+
+	if (lsos->lsos_current_txg == 0) {
+		/* this is the first log */
+		lsos->lsos_current_txg = txg;
+	} else if (lsos->lsos_current_txg < txg) {
+		/* we just changed log - print stats and reset */
+		(void) printf("%-8llu valid entries out of %-8llu - txg %llu\n",
+		    (u_longlong_t)lsos->lsos_valid_sm_entries,
+		    (u_longlong_t)lsos->lsos_sm_entries,
+		    (u_longlong_t)lsos->lsos_current_txg);
+		lsos->lsos_valid_sm_entries = 0;
+		lsos->lsos_sm_entries = 0;
+		lsos->lsos_current_txg = txg;
+	}
+	ASSERT3U(lsos->lsos_current_txg, ==, txg);
+
+	lsos->lsos_sm_entries++;
+	lsos->lsos_total_entries++;
+
+	vdev_t *vd = vdev_lookup_top(spa, vdev_id);
+	if (!vdev_is_concrete(vd))
+		return (0);
+
+	metaslab_t *ms = vd->vdev_ms[offset >> vd->vdev_ms_shift];
+	ASSERT(sme->sme_type == SM_ALLOC || sme->sme_type == SM_FREE);
+
+	if (txg < metaslab_unflushed_txg(ms))
+		return (0);
+	lsos->lsos_valid_sm_entries++;
+	lsos->lsos_valid_entries++;
+	return (0);
+}
+
+static void
+dump_log_spacemap_obsolete_stats(spa_t *spa)
+{
+	log_sm_obsolete_stats_arg_t lsos;
+	bzero(&lsos, sizeof (lsos));
+
+	(void) printf("Log Space Map Obsolete Entry Statistics:\n");
+
+	iterate_through_spacemap_logs(spa,
+	    log_spacemap_obsolete_stats_cb, &lsos);
+
+	/* print stats for latest log */
+	(void) printf("%-8llu valid entries out of %-8llu - txg %llu\n",
+	    (u_longlong_t)lsos.lsos_valid_sm_entries,
+	    (u_longlong_t)lsos.lsos_sm_entries,
+	    (u_longlong_t)lsos.lsos_current_txg);
+
+	(void) printf("%-8llu valid entries out of %-8llu - total\n\n",
+	    (u_longlong_t)lsos.lsos_valid_entries,
+	    (u_longlong_t)lsos.lsos_total_entries);
+}
+
 static void
 dump_zpool(spa_t *spa)
 {
@@ -5557,6 +5871,10 @@ dump_zpool(spa_t *spa)
 		dump_metaslabs(spa);
 	if (dump_opt['M'])
 		dump_metaslab_groups(spa);
+	if (dump_opt['d'] > 2 || dump_opt['m']) {
+		dump_log_spacemaps(spa);
+		dump_log_spacemap_obsolete_stats(spa);
+	}
 
 	if (dump_opt['d'] || dump_opt['i']) {
 		spa_feature_t f;
@@ -5635,10 +5953,9 @@ dump_zpool(spa_t *spa)
 			}
 		}
 
-		if (rc == 0) {
+		if (rc == 0)
 			rc = verify_device_removal_feature_counts(spa);
-		}
 	}
-
 	if (rc == 0 && (dump_opt['b'] || dump_opt['c']))
 		rc = dump_block_stats(spa);
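
The iterate_through_spacemap_logs() helper above is the reusable piece
of this zdb work: it walks every on-disk log space map in TXG order and
hands each entry, tagged with its log's TXG, to a callback. A hedged
sketch of a caller (count_log_entries and its callback are hypothetical
names for illustration, but the API is the one added above):

    /* ARGSUSED */
    static int
    count_log_entries_cb(spa_t *spa, space_map_entry_t *sme, uint64_t txg,
        void *arg)
    {
    	uint64_t *count = arg;

    	(*count)++;
    	return (0);
    }

    static uint64_t
    count_log_entries(spa_t *spa)
    {
    	uint64_t count = 0;

    	iterate_through_spacemap_logs(spa, count_log_entries_cb, &count);
    	return (count);
    }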


@@ -2924,24 +2924,12 @@ vdev_lookup_by_path(vdev_t *vd, const char *path)
 	return (NULL);
 }
 
-/*
- * Find the first available hole which can be used as a top-level.
- */
-int
-find_vdev_hole(spa_t *spa)
+static int
+spa_num_top_vdevs(spa_t *spa)
 {
 	vdev_t *rvd = spa->spa_root_vdev;
-	int c;
 
-	ASSERT(spa_config_held(spa, SCL_VDEV, RW_READER) == SCL_VDEV);
-
-	for (c = 0; c < rvd->vdev_children; c++) {
-		vdev_t *cvd = rvd->vdev_child[c];
-		if (cvd->vdev_ishole)
-			break;
-	}
-	return (c);
+	ASSERT3U(spa_config_held(spa, SCL_VDEV, RW_READER), ==, SCL_VDEV);
+	return (rvd->vdev_children);
 }
 
 /*
@@ -2966,7 +2954,7 @@ ztest_vdev_add_remove(ztest_ds_t *zd, uint64_t id)
 	spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
-	ztest_shared->zs_vdev_next_leaf = find_vdev_hole(spa) * leaves;
+	ztest_shared->zs_vdev_next_leaf = spa_num_top_vdevs(spa) * leaves;
 
 	/*
 	 * If we have slogs then remove them 1/4 of the time.
@@ -3073,7 +3061,7 @@ ztest_vdev_class_add(ztest_ds_t *zd, uint64_t id)
 	leaves = MAX(zs->zs_mirrors + zs->zs_splits, 1) * ztest_opts.zo_raidz;
 
 	spa_config_enter(spa, SCL_VDEV, FTAG, RW_READER);
-	ztest_shared->zs_vdev_next_leaf = find_vdev_hole(spa) * leaves;
+	ztest_shared->zs_vdev_next_leaf = spa_num_top_vdevs(spa) * leaves;
 	spa_config_exit(spa, SCL_VDEV, FTAG);
 
 	nvroot = make_vdev_root(NULL, NULL, NULL, ztest_opts.zo_vdev_size, 0,
@@ -7329,6 +7317,15 @@ ztest_init(ztest_shared_t *zs)
 	for (i = 0; i < SPA_FEATURES; i++) {
 		char *buf;
+
+		/*
+		 * 75% chance of using the log space map feature. We want ztest
+		 * to exercise both the code paths that use the log space map
+		 * feature and the ones that don't.
+		 */
+		if (i == SPA_FEATURE_LOG_SPACEMAP && ztest_random(4) == 0)
+			continue;
+
 		VERIFY3S(-1, !=, asprintf(&buf, "feature@%s",
 		    spa_feature_table[i].fi_uname));
 		VERIFY3U(0, ==, nvlist_add_uint64(props, buf, 0));


@@ -296,6 +296,7 @@ AC_CONFIG_FILES([
 	tests/zfs-tests/tests/functional/link_count/Makefile
 	tests/zfs-tests/tests/functional/libzfs/Makefile
 	tests/zfs-tests/tests/functional/limits/Makefile
+	tests/zfs-tests/tests/functional/log_spacemap/Makefile
 	tests/zfs-tests/tests/functional/migration/Makefile
 	tests/zfs-tests/tests/functional/mmap/Makefile
 	tests/zfs-tests/tests/functional/mmp/Makefile


@@ -13,7 +13,6 @@ COMMON_H = \
 	$(top_srcdir)/include/sys/bptree.h \
 	$(top_srcdir)/include/sys/bqueue.h \
 	$(top_srcdir)/include/sys/cityhash.h \
-	$(top_srcdir)/include/sys/spa_checkpoint.h \
 	$(top_srcdir)/include/sys/dataset_kstats.h \
 	$(top_srcdir)/include/sys/dbuf.h \
 	$(top_srcdir)/include/sys/ddt.h \
@@ -63,6 +62,8 @@ COMMON_H = \
 	$(top_srcdir)/include/sys/sha2.h \
 	$(top_srcdir)/include/sys/skein.h \
 	$(top_srcdir)/include/sys/spa_boot.h \
+	$(top_srcdir)/include/sys/spa_checkpoint.h \
+	$(top_srcdir)/include/sys/spa_log_spacemap.h \
 	$(top_srcdir)/include/sys/space_map.h \
 	$(top_srcdir)/include/sys/space_reftree.h \
 	$(top_srcdir)/include/sys/spa.h \


@@ -382,6 +382,7 @@ typedef struct dmu_buf {
 #define	DMU_POOL_OBSOLETE_BPOBJ		"com.delphix:obsolete_bpobj"
 #define	DMU_POOL_CONDENSING_INDIRECT	"com.delphix:condensing_indirect"
 #define	DMU_POOL_ZPOOL_CHECKPOINT	"com.delphix:zpool_checkpoint"
+#define	DMU_POOL_LOG_SPACEMAP_ZAP	"com.delphix:log_spacemap_zap"
 
 /*
  * Allocate an object from this objset. The range of object numbers


@@ -770,6 +770,8 @@ typedef struct zpool_load_policy {
 	"com.delphix:obsolete_counts_are_precise"
 #define	VDEV_TOP_ZAP_POOL_CHECKPOINT_SM \
 	"com.delphix:pool_checkpoint_sm"
+#define	VDEV_TOP_ZAP_MS_UNFLUSHED_PHYS_TXGS \
+	"com.delphix:ms_unflushed_phys_txgs"
 #define	VDEV_TOP_ZAP_ALLOCATION_BIAS \
 	"org.zfsonlinux:allocation_bias"


@@ -49,9 +49,17 @@ int metaslab_init(metaslab_group_t *, uint64_t, uint64_t, uint64_t,
     metaslab_t **);
 void metaslab_fini(metaslab_t *);
 
+void metaslab_set_unflushed_txg(metaslab_t *, uint64_t, dmu_tx_t *);
+void metaslab_set_estimated_condensed_size(metaslab_t *, uint64_t, dmu_tx_t *);
+uint64_t metaslab_unflushed_txg(metaslab_t *);
+uint64_t metaslab_estimated_condensed_size(metaslab_t *);
+int metaslab_sort_by_flushed(const void *, const void *);
+uint64_t metaslab_unflushed_changes_memused(metaslab_t *);
+
 int metaslab_load(metaslab_t *);
 void metaslab_potentially_unload(metaslab_t *, uint64_t);
 void metaslab_unload(metaslab_t *);
+boolean_t metaslab_flush(metaslab_t *, dmu_tx_t *);
 
 uint64_t metaslab_allocated_space(metaslab_t *);
@@ -108,6 +116,9 @@ uint64_t metaslab_class_get_space(metaslab_class_t *);
 uint64_t metaslab_class_get_dspace(metaslab_class_t *);
 uint64_t metaslab_class_get_deferred(metaslab_class_t *);
 
+void metaslab_space_update(vdev_t *, metaslab_class_t *,
+    int64_t, int64_t, int64_t);
+
 metaslab_group_t *metaslab_group_create(metaslab_class_t *, vdev_t *, int);
 void metaslab_group_destroy(metaslab_group_t *);
 void metaslab_group_activate(metaslab_group_t *);
@@ -124,6 +135,8 @@ void metaslab_recalculate_weight_and_sort(metaslab_t *);
 void metaslab_disable(metaslab_t *);
 void metaslab_enable(metaslab_t *, boolean_t);
 
+extern int metaslab_debug_load;
+
 #ifdef __cplusplus
 }
 #endif


@@ -24,7 +24,7 @@
  */
 
 /*
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  */
 
 #ifndef _SYS_METASLAB_IMPL_H
@@ -357,7 +357,7 @@ struct metaslab {
 	 * write to metaslab data on-disk (i.e flushing entries to
 	 * the metaslab's space map). It helps coordinate readers of
	 * the metaslab's space map [see spa_vdev_remove_thread()]
-	 * with writers [see metaslab_sync()].
+	 * with writers [see metaslab_sync() or metaslab_flush()].
 	 *
 	 * Note that metaslab_load(), even though a reader, uses
 	 * a completely different mechanism to deal with the reading
@@ -401,7 +401,6 @@ struct metaslab {
 	boolean_t ms_condensing;	/* condensing? */
 	boolean_t ms_condense_wanted;
-	uint64_t ms_condense_checked_txg;
 
 	/*
 	 * The number of consumers which have disabled the metaslab.
@@ -414,6 +413,8 @@ struct metaslab {
 	 */
 	boolean_t ms_loaded;
 	boolean_t ms_loading;
+	kcondvar_t ms_flush_cv;
+	boolean_t ms_flushing;
 
 	/*
 	 * The following histograms count entries that are in the
@@ -499,6 +500,22 @@ struct metaslab {
 	metaslab_group_t *ms_group;	/* metaslab group */
 	avl_node_t ms_group_node;	/* node in metaslab group tree */
 	txg_node_t ms_txg_node;	/* per-txg dirty metaslab links */
+	avl_node_t ms_spa_txg_node;	/* node in spa_metaslabs_by_txg */
+
+	/*
+	 * Allocs and frees that are committed to the vdev log spacemap but
+	 * not yet to this metaslab's spacemap.
+	 */
+	range_tree_t *ms_unflushed_allocs;
+	range_tree_t *ms_unflushed_frees;
+
+	/*
+	 * We have flushed entries up to but not including this TXG. In
+	 * other words, all changes from this TXG and onward should not
+	 * be in this metaslab's space map and must be read from the
+	 * log space maps.
+	 */
+	uint64_t ms_unflushed_txg;
 
 	/* updated every time we are done syncing the metaslab's space map */
 	uint64_t ms_synced_length;
@@ -506,6 +523,11 @@ struct metaslab {
 	boolean_t ms_new;
 };
 
+typedef struct metaslab_unflushed_phys {
+	/* on-disk counterpart of ms_unflushed_txg */
+	uint64_t msp_unflushed_txg;
+} metaslab_unflushed_phys_t;
+
 #ifdef __cplusplus
 }
 #endif
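
The ms_unflushed_txg comment above implies a simple staleness test that
recurs throughout this patch: a log entry applies to a metaslab only if
the TXG it was logged in is at least the metaslab's unflushed TXG. A
hedged sketch of that predicate (the helper name is hypothetical;
metaslab_unflushed_txg() is the accessor declared in metaslab.h above):

    /*
     * An entry logged in txg is already reflected in the metaslab's own
     * space map (and is therefore stale) iff txg < ms_unflushed_txg.
     */
    static boolean_t
    log_entry_applies(metaslab_t *ms, uint64_t txg)
    {
    	return (txg >= metaslab_unflushed_txg(ms));
    }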


@@ -24,7 +24,7 @@
  */
 
 /*
- * Copyright (c) 2013, 2017 by Delphix. All rights reserved.
+ * Copyright (c) 2013, 2019 by Delphix. All rights reserved.
 */
 
 #ifndef _SYS_RANGE_TREE_H
@@ -95,6 +95,7 @@ range_seg_t *range_tree_find(range_tree_t *rt, uint64_t start, uint64_t size);
 void range_tree_resize_segment(range_tree_t *rt, range_seg_t *rs,
     uint64_t newstart, uint64_t newsize);
 uint64_t range_tree_space(range_tree_t *rt);
+uint64_t range_tree_numsegs(range_tree_t *rt);
 boolean_t range_tree_is_empty(range_tree_t *rt);
 void range_tree_swap(range_tree_t **rtsrc, range_tree_t **rtdst);
 void range_tree_stat_verify(range_tree_t *rt);
@@ -112,6 +113,11 @@ void range_tree_vacate(range_tree_t *rt, range_tree_func_t *func, void *arg);
 void range_tree_walk(range_tree_t *rt, range_tree_func_t *func, void *arg);
 range_seg_t *range_tree_first(range_tree_t *rt);
 
+void range_tree_remove_xor_add_segment(uint64_t start, uint64_t end,
+    range_tree_t *removefrom, range_tree_t *addto);
+void range_tree_remove_xor_add(range_tree_t *rt, range_tree_t *removefrom,
+    range_tree_t *addto);
+
 void rt_avl_create(range_tree_t *rt, void *arg);
 void rt_avl_destroy(range_tree_t *rt, void *arg);
 void rt_avl_add(range_tree_t *rt, range_seg_t *rs, void *arg);


@@ -20,7 +20,7 @@
 */
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright 2011 Nexenta Systems, Inc. All rights reserved.
  * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  * Copyright 2013 Saso Kiselkov. All rights reserved.
@@ -42,6 +42,7 @@
 #include <sys/fs/zfs.h>
 #include <sys/spa_checksum.h>
 #include <sys/dmu.h>
+#include <sys/space_map.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -1075,6 +1076,7 @@ extern boolean_t spa_suspended(spa_t *spa);
 extern uint64_t spa_bootfs(spa_t *spa);
 extern uint64_t spa_delegation(spa_t *spa);
 extern objset_t *spa_meta_objset(spa_t *spa);
+extern space_map_t *spa_syncing_log_sm(spa_t *spa);
 extern uint64_t spa_deadman_synctime(spa_t *spa);
 extern uint64_t spa_deadman_ziotime(spa_t *spa);
 extern uint64_t spa_dirty_data(spa_t *spa);
@@ -1125,6 +1127,7 @@ extern boolean_t spa_trust_config(spa_t *spa);
 extern uint64_t spa_missing_tvds_allowed(spa_t *spa);
 extern void spa_set_missing_tvds(spa_t *spa, uint64_t missing);
 extern boolean_t spa_top_vdevs_spacemap_addressable(spa_t *spa);
+extern uint64_t spa_total_metaslabs(spa_t *spa);
 extern boolean_t spa_multihost(spa_t *spa);
 extern unsigned long spa_get_hostid(void);
 extern void spa_activate_allocation_classes(spa_t *, dmu_tx_t *);


@@ -20,7 +20,7 @@
 */
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright 2011 Nexenta Systems, Inc. All rights reserved.
  * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  * Copyright 2013 Saso Kiselkov. All rights reserved.
@@ -34,6 +34,7 @@
 #include <sys/spa.h>
 #include <sys/spa_checkpoint.h>
+#include <sys/spa_log_spacemap.h>
 #include <sys/vdev.h>
 #include <sys/vdev_removal.h>
 #include <sys/metaslab.h>
@@ -307,6 +308,14 @@ struct spa {
 	spa_checkpoint_info_t spa_checkpoint_info; /* checkpoint accounting */
 	zthr_t		*spa_checkpoint_discard_zthr;
 
+	space_map_t	*spa_syncing_log_sm;	/* current log space map */
+	avl_tree_t	spa_sm_logs_by_txg;
+	kmutex_t	spa_flushed_ms_lock;	/* for metaslabs_by_flushed */
+	avl_tree_t	spa_metaslabs_by_flushed;
+	spa_unflushed_stats_t	spa_unflushed_stats;
+	list_t		spa_log_summary;
+	uint64_t	spa_log_flushall_txg;
+
 	char		*spa_root;	/* alternate root directory */
 	uint64_t	spa_ena;	/* spa-wide ereport ENA */
 	int		spa_last_open_failed;	/* error if last open failed */


@@ -0,0 +1,79 @@
+/*
+ * CDDL HEADER START
+ *
+ * The contents of this file are subject to the terms of the
+ * Common Development and Distribution License (the "License").
+ * You may not use this file except in compliance with the License.
+ *
+ * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+ * or http://www.opensolaris.org/os/licensing.
+ * See the License for the specific language governing permissions
+ * and limitations under the License.
+ *
+ * When distributing Covered Code, include this CDDL HEADER in each
+ * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+ * If applicable, add the following below this CDDL HEADER, with the
+ * fields enclosed by brackets "[]" replaced with your own identifying
+ * information: Portions Copyright [yyyy] [name of copyright owner]
+ *
+ * CDDL HEADER END
+ */
+
+/*
+ * Copyright (c) 2018, 2019 by Delphix. All rights reserved.
+ */
+
+#ifndef _SYS_SPA_LOG_SPACEMAP_H
+#define	_SYS_SPA_LOG_SPACEMAP_H
+
+#include <sys/avl.h>
+
+typedef struct log_summary_entry {
+	uint64_t lse_start;	/* start TXG */
+	uint64_t lse_mscount;	/* # of metaslabs needed to be flushed */
+	uint64_t lse_blkcount;	/* blocks held by this entry */
+	list_node_t lse_node;
+} log_summary_entry_t;
+
+typedef struct spa_unflushed_stats {
+	/* used for memory heuristic */
+	uint64_t sus_memused;	/* current memory used for unflushed trees */
+
+	/* used for block heuristic */
+	uint64_t sus_blocklimit;	/* max # of log blocks allowed */
+	uint64_t sus_nblocks;	/* # of blocks in log space maps currently */
+} spa_unflushed_stats_t;
+
+typedef struct spa_log_sm {
+	uint64_t sls_sm_obj;	/* space map object ID */
+	uint64_t sls_txg;	/* txg logged on the space map */
+	uint64_t sls_nblocks;	/* number of blocks in this log */
+	uint64_t sls_mscount;	/* # of metaslabs flushed in the log's txg */
+	avl_node_t sls_node;	/* node in spa_sm_logs_by_txg */
+} spa_log_sm_t;
+
+int spa_ld_log_spacemaps(spa_t *);
+
+void spa_generate_syncing_log_sm(spa_t *, dmu_tx_t *);
+void spa_flush_metaslabs(spa_t *, dmu_tx_t *);
+void spa_sync_close_syncing_log_sm(spa_t *);
+
+void spa_cleanup_old_sm_logs(spa_t *, dmu_tx_t *);
+
+uint64_t spa_log_sm_blocklimit(spa_t *);
+void spa_log_sm_set_blocklimit(spa_t *);
+uint64_t spa_log_sm_nblocks(spa_t *);
+uint64_t spa_log_sm_memused(spa_t *);
+
+void spa_log_sm_decrement_mscount(spa_t *, uint64_t);
+void spa_log_sm_increment_current_mscount(spa_t *);
+
+void spa_log_summary_add_flushed_metaslab(spa_t *);
+void spa_log_summary_decrement_mscount(spa_t *, uint64_t);
+void spa_log_summary_decrement_blkcount(spa_t *, uint64_t);
+
+boolean_t spa_flush_all_logs_requested(spa_t *);
+
+extern int zfs_keep_log_spacemaps_at_export;
+
+#endif /* _SYS_SPA_LOG_SPACEMAP_H */


@@ -24,7 +24,7 @@
 */
 /*
- * Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2012, 2019 by Delphix. All rights reserved.
 */
 
 #ifndef _SYS_SPACE_MAP_H
@@ -72,6 +72,11 @@ typedef struct space_map_phys {
 	 * bucket, smp_histogram[i], contains the number of free regions
 	 * whose size is:
 	 * 2^(i+sm_shift) <= size of free region in bytes < 2^(i+sm_shift+1)
+	 *
+	 * Note that, if log space map feature is enabled, histograms of
+	 * space maps that belong to metaslabs will take into account any
+	 * unflushed changes for their metaslabs, even though the actual
+	 * space map doesn't have entries for these changes.
 	 */
 	uint64_t smp_histogram[SPACE_MAP_HISTOGRAM_SIZE];
 } space_map_phys_t;
@@ -209,6 +214,8 @@ void space_map_histogram_add(space_map_t *sm, range_tree_t *rt,
 uint64_t space_map_object(space_map_t *sm);
 int64_t space_map_allocated(space_map_t *sm);
 uint64_t space_map_length(space_map_t *sm);
+uint64_t space_map_entries(space_map_t *sm, range_tree_t *rt);
+uint64_t space_map_nblocks(space_map_t *sm);
 
 void space_map_write(space_map_t *sm, range_tree_t *rt, maptype_t maptype,
     uint64_t vdev_id, dmu_tx_t *tx);


@@ -20,7 +20,7 @@
 */
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright (c) 2017, Intel Corporation.
 */
@@ -535,7 +535,7 @@ extern void vdev_set_min_asize(vdev_t *vd);
 /*
  * Global variables
 */
-extern int vdev_standard_sm_blksz;
+extern int zfs_vdev_standard_sm_blksz;
 
 /* zdb uses this tunable, so it must be declared here to make lint happy. */
 extern int zfs_vdev_cache_size;


@@ -20,7 +20,7 @@
 */
 /*
- * Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2012, 2019 by Delphix. All rights reserved.
 */
 
 #ifndef _SYS_ZFS_DEBUG_H
@@ -55,6 +55,7 @@ extern int zfs_dbgmsg_enable;
 #define	ZFS_DEBUG_SET_ERROR		(1 << 9)
 #define	ZFS_DEBUG_INDIRECT_REMAP	(1 << 10)
 #define	ZFS_DEBUG_TRIM			(1 << 11)
+#define	ZFS_DEBUG_LOG_SPACEMAP		(1 << 12)
 
 extern void __zfs_dbgmsg(char *buf);
 extern void __dprintf(boolean_t dprint, const char *file, const char *func,


@@ -70,6 +70,7 @@ typedef enum spa_feature {
 	SPA_FEATURE_REDACTION_BOOKMARKS,
 	SPA_FEATURE_REDACTED_DATASETS,
 	SPA_FEATURE_BOOKMARK_WRITTEN,
+	SPA_FEATURE_LOG_SPACEMAP,
 	SPA_FEATURES
 } spa_feature_t;


@@ -101,6 +101,7 @@ KERNEL_C = \
 	spa_config.c \
 	spa_errlog.c \
 	spa_history.c \
+	spa_log_spacemap.c \
 	spa_misc.c \
 	spa_stats.c \
 	space_map.c \


@@ -268,6 +268,17 @@ by the test suite to facilitate testing.
 Default value: \fB16,777,217\fR.
 .RE
 
+.sp
+.ne 2
+.na
+\fBzfs_keep_log_spacemaps_at_export\fR (int)
+.ad
+.RS 12n
+Prevent log spacemaps from being destroyed during pool exports and destroys.
+.sp
+Use \fB1\fR for yes and \fB0\fR for no (default).
+.RE
+
 .sp
 .ne 2
 .na
@@ -370,6 +381,17 @@ When a vdev is added target this number of metaslabs per top-level vdev.
 Default value: \fB200\fR.
 .RE
 
+.sp
+.ne 2
+.na
+\fBzfs_vdev_default_ms_shift\fR (int)
+.ad
+.RS 12n
+Default limit for metaslab size.
+.sp
+Default value: \fB29\fR [meaning (1 << 29) = 512MB].
+.RE
+
 .sp
 .ne 2
 .na
@@ -1229,6 +1251,93 @@ Rate limit delay zevents (which report slow I/Os) to this many per second.
 Default value: 20
 .RE
 
+.sp
+.ne 2
+.na
+\fBzfs_unflushed_max_mem_amt\fR (ulong)
+.ad
+.RS 12n
+Upper-bound limit for unflushed metadata changes to be held by the
+log spacemap in memory (in bytes).
+.sp
+Default value: \fB1,073,741,824\fR (1GB).
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_unflushed_max_mem_ppm\fR (ulong)
+.ad
+.RS 12n
+Percentage of the overall system memory that ZFS allows to be used
+for unflushed metadata changes by the log spacemap.
+(value is calculated over 1000000 for finer granularity).
+.sp
+Default value: \fB1000\fR (which is divided by 1000000, resulting in
+the limit to be \fB0.1\fR% of memory)
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_unflushed_log_block_max\fR (ulong)
+.ad
+.RS 12n
+Describes the maximum number of log spacemap blocks allowed for each pool.
+The default value of 262144 means that the space in all the log spacemaps
+can add up to no more than 262144 blocks (which means 32GB of logical
+space before compression and ditto blocks, assuming that blocksize is
+128k).
+.sp
+This tunable is important because it involves a trade-off between import
+time after an unclean export and the frequency of flushing metaslabs.
+The higher this number is, the more log blocks we allow when the pool is
+active which means that we flush metaslabs less often and thus decrease
+the number of I/Os for spacemap updates per TXG.
+At the same time though, that means that in the event of an unclean export,
+there will be more log spacemap blocks for us to read, inducing overhead
+in the import time of the pool.
+The lower the number, the amount of flushing increases destroying log
+blocks quicker as they become obsolete faster, which leaves less blocks
+to be read during import time after a crash.
+.sp
+Each log spacemap block existing during pool import leads to approximately
+one extra logical I/O issued.
+This is the reason why this tunable is exposed in terms of blocks rather
+than space used.
+.sp
+Default value: \fB262144\fR (256K).
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_unflushed_log_block_min\fR (ulong)
+.ad
+.RS 12n
+If the number of metaslabs is small and our incoming rate is high, we
+could get into a situation that we are flushing all our metaslabs every
+TXG.
+Thus we always allow at least this many log blocks.
+.sp
+Default value: \fB1000\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_unflushed_log_block_pct\fR (ulong)
+.ad
+.RS 12n
+Tunable used to determine the number of blocks that can be used for
+the spacemap log, expressed as a percentage of the total number of
+metaslabs in the pool.
+.sp
+Default value: \fB400\fR (read as \fB400\fR% - meaning that the number
+of log spacemap blocks are capped at 4 times the number of
+metaslabs in the pool).
+.RE
+
 .sp
 .ne 2
 .na
@@ -1717,6 +1826,10 @@ _
 _
 2048	ZFS_DEBUG_TRIM
 	Verify TRIM ranges are always within the allocatable range tree.
+_
+4096	ZFS_DEBUG_LOG_SPACEMAP
+	Verify that the log summary is consistent with the spacemap log
+	and enable zfs_dbgmsgs for metaslab loading and flushing.
 .TE
 .sp
 * Requires debug build.
@@ -1832,6 +1945,29 @@ fix existing datasets that exceed the predefined limit.
 Default value: \fB50\fR.
 .RE
 
+.sp
+.ne 2
+.na
+\fBzfs_max_log_walking\fR (ulong)
+.ad
+.RS 12n
+The number of past TXGs that the flushing algorithm of the log spacemap
+feature uses to estimate incoming log blocks.
+.sp
+Default value: \fB5\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_max_logsm_summary_length\fR (ulong)
+.ad
+.RS 12n
+Maximum number of rows allowed in the summary of the spacemap log.
+.sp
+Default value: \fB10\fR.
+.RE
+
 .sp
 .ne 2
 .na
@@ -1862,6 +1998,17 @@ disabled because these datasets may be missing key data.
 Default value: \fB0\fR.
 .RE
 
+.sp
+.ne 2
+.na
+\fBzfs_min_metaslabs_to_flush\fR (ulong)
+.ad
+.RS 12n
+Minimum number of metaslabs to flush per dirty TXG
+.sp
+Default value: \fB1\fR.
+.RE
+
 .sp
 .ne 2
 .na
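
Putting the three zfs_unflushed_log_block_* tunables together: the
effective block limit is the percentage-based value clamped between
the min and the max. A hedged sketch of that computation (modeled on
spa_log_sm_set_blocklimit() in spa_log_spacemap.c; treat the exact
expression as an assumption rather than a quote of the source):

    /* Sketch: derive the pool's log-block limit from the tunables. */
    uint64_t calculated =
        (spa_total_metaslabs(spa) * zfs_unflushed_log_block_pct) / 100;
    spa->spa_unflushed_stats.sus_blocklimit = MIN(MAX(calculated,
        zfs_unflushed_log_block_min), zfs_unflushed_log_block_max);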


@@ -197,7 +197,8 @@ By default,
 .Nm
 verifies that all non-free blocks are referenced, which can be very expensive.
 .It Fl m
-Display the offset, spacemap, and free space of each metaslab.
+Display the offset, spacemap, free space of each metaslab, all the log
+spacemaps and their obsolete entry statistics.
 .It Fl mm
 Also display information about the on-disk free space histogram associated with
 each metaslab.


@@ -348,6 +348,19 @@ zpool_feature_init(void)
 	    ZFEATURE_FLAG_MOS | ZFEATURE_FLAG_ACTIVATE_ON_ENABLE,
 	    ZFEATURE_TYPE_BOOLEAN, NULL);
 
+	{
+	static const spa_feature_t log_spacemap_deps[] = {
+		SPA_FEATURE_SPACEMAP_V2,
+		SPA_FEATURE_NONE
+	};
+	zfeature_register(SPA_FEATURE_LOG_SPACEMAP,
+	    "com.delphix:log_spacemap", "log_spacemap",
+	    "Log metaslab changes on a single spacemap and "
+	    "flush them periodically.",
+	    ZFEATURE_FLAG_READONLY_COMPAT, ZFEATURE_TYPE_BOOLEAN,
+	    log_spacemap_deps);
+	}
+
 	{
 	static const spa_feature_t large_blocks_deps[] = {
 		SPA_FEATURE_EXTENSIBLE_DATASET,


@@ -76,6 +76,7 @@ $(MODULE)-objs += spa_checkpoint.o
 $(MODULE)-objs += spa_config.o
 $(MODULE)-objs += spa_errlog.o
 $(MODULE)-objs += spa_history.o
+$(MODULE)-objs += spa_log_spacemap.o
 $(MODULE)-objs += spa_misc.o
 $(MODULE)-objs += spa_stats.o
 $(MODULE)-objs += space_map.o


@@ -1483,7 +1483,7 @@ dmu_objset_sync_dnodes(multilist_sublist_t *list, dmu_tx_t *tx)
 		ASSERT(dn->dn_dbuf->db_data_pending);
 		/*
 		 * Initialize dn_zio outside dnode_sync() because the
-		 * meta-dnode needs to set it ouside dnode_sync().
+		 * meta-dnode needs to set it outside dnode_sync().
 		 */
 		dn->dn_zio = dn->dn_dbuf->db_data_pending->dr_zio;
 		ASSERT(dn->dn_zio);


@@ -20,7 +20,7 @@
 */
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright (c) 2013 Steven Hartland. All rights reserved.
  * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  * Copyright 2016 Nexenta Systems, Inc. All rights reserved.
@@ -757,7 +757,7 @@ dsl_pool_sync(dsl_pool_t *dp, uint64_t txg)
 		dp->dp_mos_uncompressed_delta = 0;
 	}
 
-	if (!multilist_is_empty(mos->os_dirty_dnodes[txg & TXG_MASK])) {
+	if (dmu_objset_is_dirty(mos, txg)) {
 		dsl_pool_sync_mos(dp, tx);
 	}

[File diff suppressed because it is too large]


@ -23,7 +23,7 @@
* Use is subject to license terms. * Use is subject to license terms.
*/ */
/* /*
* Copyright (c) 2013, 2017 by Delphix. All rights reserved. * Copyright (c) 2013, 2019 by Delphix. All rights reserved.
*/ */
#include <sys/zfs_context.h> #include <sys/zfs_context.h>
@ -578,11 +578,11 @@ range_tree_vacate(range_tree_t *rt, range_tree_func_t *func, void *arg)
void void
range_tree_walk(range_tree_t *rt, range_tree_func_t *func, void *arg) range_tree_walk(range_tree_t *rt, range_tree_func_t *func, void *arg)
{ {
range_seg_t *rs; for (range_seg_t *rs = avl_first(&rt->rt_root); rs;
rs = AVL_NEXT(&rt->rt_root, rs)) {
for (rs = avl_first(&rt->rt_root); rs; rs = AVL_NEXT(&rt->rt_root, rs))
func(arg, rs->rs_start, rs->rs_end - rs->rs_start); func(arg, rs->rs_start, rs->rs_end - rs->rs_start);
} }
}
range_seg_t * range_seg_t *
range_tree_first(range_tree_t *rt) range_tree_first(range_tree_t *rt)
@@ -596,6 +596,12 @@ range_tree_space(range_tree_t *rt)
 	return (rt->rt_space);
 }
 
+uint64_t
+range_tree_numsegs(range_tree_t *rt)
+{
+	return ((rt == NULL) ? 0 : avl_numnodes(&rt->rt_root));
+}
+
 boolean_t
 range_tree_is_empty(range_tree_t *rt)
 {
@@ -667,3 +673,73 @@ range_tree_span(range_tree_t *rt)
 {
 	return (range_tree_max(rt) - range_tree_min(rt));
 }
+
+/*
+ * Remove any overlapping ranges between the given segment [start, end)
+ * from removefrom. Add non-overlapping leftovers to addto.
+ */
+void
+range_tree_remove_xor_add_segment(uint64_t start, uint64_t end,
+    range_tree_t *removefrom, range_tree_t *addto)
+{
+	avl_index_t where;
+	range_seg_t starting_rs = {
+		.rs_start = start,
+		.rs_end = start + 1
+	};
+
+	range_seg_t *curr = avl_find(&removefrom->rt_root,
+	    &starting_rs, &where);
+	if (curr == NULL)
+		curr = avl_nearest(&removefrom->rt_root, where, AVL_AFTER);
+
+	range_seg_t *next;
+	for (; curr != NULL; curr = next) {
+		next = AVL_NEXT(&removefrom->rt_root, curr);
+
+		if (start == end)
+			return;
+		VERIFY3U(start, <, end);
+
+		/* there is no overlap */
+		if (end <= curr->rs_start) {
+			range_tree_add(addto, start, end - start);
+			return;
+		}
+
+		uint64_t overlap_start = MAX(curr->rs_start, start);
+		uint64_t overlap_end = MIN(curr->rs_end, end);
+		uint64_t overlap_size = overlap_end - overlap_start;
+		ASSERT3S(overlap_size, >, 0);
+		range_tree_remove(removefrom, overlap_start, overlap_size);
+
+		if (start < overlap_start)
+			range_tree_add(addto, start, overlap_start - start);
+
+		start = overlap_end;
+	}
+	VERIFY3P(curr, ==, NULL);
+
+	if (start != end) {
+		VERIFY3U(start, <, end);
+		range_tree_add(addto, start, end - start);
+	} else {
+		VERIFY3U(start, ==, end);
+	}
+}
+
+/*
+ * For each entry in rt, if it exists in removefrom, remove it
+ * from removefrom. Otherwise, add it to addto.
+ */
+void
+range_tree_remove_xor_add(range_tree_t *rt, range_tree_t *removefrom,
+    range_tree_t *addto)
+{
+	for (range_seg_t *rs = avl_first(&rt->rt_root); rs;
+	    rs = AVL_NEXT(&rt->rt_root, rs)) {
+		range_tree_remove_xor_add_segment(rs->rs_start, rs->rs_end,
+		    removefrom, addto);
+	}
+}
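
The xor-add helpers are easiest to see on a concrete case. A minimal usage
sketch (hypothetical, not part of the patch; it assumes the two-argument
range_tree_create(NULL, NULL) constructor used elsewhere in this code base):

	range_tree_t *removefrom = range_tree_create(NULL, NULL);
	range_tree_t *addto = range_tree_create(NULL, NULL);

	range_tree_add(removefrom, 0, 10);	/* removefrom = {[0, 10)} */

	/*
	 * Segment [5, 15): the overlap [5, 10) is removed from
	 * removefrom, the non-overlapping leftover [10, 15) lands
	 * in addto.
	 */
	range_tree_remove_xor_add_segment(5, 15, removefrom, addto);

	ASSERT(range_tree_contains(removefrom, 0, 5));	/* [0, 5) remains */
	ASSERT(range_tree_contains(addto, 10, 5));	/* [10, 15) added */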

---- next file ----

@@ -1420,19 +1420,88 @@ spa_config_parse(spa_t *spa, vdev_t **vdp, nvlist_t *nv, vdev_t *parent,
 	return (0);
 }
 
+static boolean_t
+spa_should_flush_logs_on_unload(spa_t *spa)
+{
+	if (!spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP))
+		return (B_FALSE);
+
+	if (!spa_writeable(spa))
+		return (B_FALSE);
+
+	if (!spa->spa_sync_on)
+		return (B_FALSE);
+
+	if (spa_state(spa) != POOL_STATE_EXPORTED)
+		return (B_FALSE);
+
+	if (zfs_keep_log_spacemaps_at_export)
+		return (B_FALSE);
+
+	return (B_TRUE);
+}
+
+/*
+ * Opens a transaction that will set the flag that will instruct
+ * spa_sync to attempt to flush all the metaslabs for that txg.
+ */
+static void
+spa_unload_log_sm_flush_all(spa_t *spa)
+{
+	dmu_tx_t *tx = dmu_tx_create_dd(spa_get_dsl(spa)->dp_mos_dir);
+	VERIFY0(dmu_tx_assign(tx, TXG_WAIT));
+
+	ASSERT3U(spa->spa_log_flushall_txg, ==, 0);
+	spa->spa_log_flushall_txg = dmu_tx_get_txg(tx);
+
+	dmu_tx_commit(tx);
+	txg_wait_synced(spa_get_dsl(spa), spa->spa_log_flushall_txg);
+}
+
+static void
+spa_unload_log_sm_metadata(spa_t *spa)
+{
+	void *cookie = NULL;
+	spa_log_sm_t *sls;
+	while ((sls = avl_destroy_nodes(&spa->spa_sm_logs_by_txg,
+	    &cookie)) != NULL) {
+		VERIFY0(sls->sls_mscount);
+		kmem_free(sls, sizeof (spa_log_sm_t));
+	}
+
+	for (log_summary_entry_t *e = list_head(&spa->spa_log_summary);
+	    e != NULL; e = list_head(&spa->spa_log_summary)) {
+		VERIFY0(e->lse_mscount);
+		list_remove(&spa->spa_log_summary, e);
+		kmem_free(e, sizeof (log_summary_entry_t));
+	}
+
+	spa->spa_unflushed_stats.sus_nblocks = 0;
+	spa->spa_unflushed_stats.sus_memused = 0;
+	spa->spa_unflushed_stats.sus_blocklimit = 0;
+}
+
 /*
  * Opposite of spa_load().
  */
 static void
 spa_unload(spa_t *spa)
 {
-	int i;
-
 	ASSERT(MUTEX_HELD(&spa_namespace_lock));
+	ASSERT(spa_state(spa) != POOL_STATE_UNINITIALIZED);
 
 	spa_import_progress_remove(spa_guid(spa));
 	spa_load_note(spa, "UNLOADING");
 
+	/*
+	 * If the log space map feature is enabled and the pool is getting
+	 * exported (but not destroyed), we want to spend some time flushing
+	 * as many metaslabs as we can in an attempt to destroy log space
+	 * maps and save import time.
+	 */
+	if (spa_should_flush_logs_on_unload(spa))
+		spa_unload_log_sm_flush_all(spa);
+
 	/*
 	 * Stop async tasks.
 	 */
@@ -1454,16 +1523,15 @@ spa_unload(spa_t *spa)
 	}
 
 	/*
-	 * Even though vdev_free() also calls vdev_metaslab_fini, we need
-	 * to call it earlier, before we wait for async i/o to complete.
-	 * This ensures that there is no async metaslab prefetching, by
-	 * calling taskq_wait(mg_taskq).
+	 * This ensures that there is no async metaslab prefetching
+	 * while we attempt to unload the spa.
 	 */
 	if (spa->spa_root_vdev != NULL) {
-		spa_config_enter(spa, SCL_ALL, spa, RW_WRITER);
-		for (int c = 0; c < spa->spa_root_vdev->vdev_children; c++)
-			vdev_metaslab_fini(spa->spa_root_vdev->vdev_child[c]);
-		spa_config_exit(spa, SCL_ALL, spa);
+		for (int c = 0; c < spa->spa_root_vdev->vdev_children; c++) {
+			vdev_t *vc = spa->spa_root_vdev->vdev_child[c];
+			if (vc->vdev_mg != NULL)
+				taskq_wait(vc->vdev_mg->mg_taskq);
+		}
 	}
 
 	if (spa->spa_mmp.mmp_thread)
@@ -1517,13 +1585,14 @@ spa_unload(spa_t *spa)
 	}
 
 	ddt_unload(spa);
+	spa_unload_log_sm_metadata(spa);
 
 	/*
 	 * Drop and purge level 2 cache
 	 */
 	spa_l2cache_drop(spa);
 
-	for (i = 0; i < spa->spa_spares.sav_count; i++)
+	for (int i = 0; i < spa->spa_spares.sav_count; i++)
 		vdev_free(spa->spa_spares.sav_vdevs[i]);
 	if (spa->spa_spares.sav_vdevs) {
 		kmem_free(spa->spa_spares.sav_vdevs,
@@ -1536,7 +1605,7 @@ spa_unload(spa_t *spa)
 	}
 	spa->spa_spares.sav_count = 0;
 
-	for (i = 0; i < spa->spa_l2cache.sav_count; i++) {
+	for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
 		vdev_clear_stats(spa->spa_l2cache.sav_vdevs[i]);
 		vdev_free(spa->spa_l2cache.sav_vdevs[i]);
 	}
@@ -3723,6 +3792,13 @@ spa_ld_load_vdev_metadata(spa_t *spa)
 		return (spa_vdev_err(rvd, VDEV_AUX_CORRUPT_DATA, error));
 	}
 
+	error = spa_ld_log_spacemaps(spa);
+	if (error != 0) {
+		spa_load_failed(spa, "spa_ld_log_sm_data failed [error=%d]",
+		    error);
+		return (spa_vdev_err(rvd, VDEV_AUX_CORRUPT_DATA, error));
+	}
+
 	/*
 	 * Propagate the leaf DTLs we just loaded all the way up the vdev tree.
 	 */
@@ -5864,7 +5940,7 @@ spa_reset(char *pool)
 int
 spa_vdev_add(spa_t *spa, nvlist_t *nvroot)
 {
-	uint64_t txg, id;
+	uint64_t txg;
 	int error;
 	vdev_t *rvd = spa->spa_root_vdev;
 	vdev_t *vd, *tvd;
@@ -5939,19 +6015,9 @@ spa_vdev_add(spa_t *spa, nvlist_t *nvroot)
 	}
 
 	for (int c = 0; c < vd->vdev_children; c++) {
-
-		/*
-		 * Set the vdev id to the first hole, if one exists.
-		 */
-		for (id = 0; id < rvd->vdev_children; id++) {
-			if (rvd->vdev_child[id]->vdev_ishole) {
-				vdev_free(rvd->vdev_child[id]);
-				break;
-			}
-		}
 		tvd = vd->vdev_child[c];
 		vdev_remove_child(vd, tvd);
-		tvd->vdev_id = id;
+		tvd->vdev_id = rvd->vdev_children;
 		vdev_add_child(rvd, tvd);
 		vdev_config_dirty(tvd);
 	}
@@ -7597,6 +7663,18 @@ spa_sync_deferred_frees(spa_t *spa, dmu_tx_t *tx)
 	if (spa_sync_pass(spa) != 1)
 		return;
 
+	/*
+	 * Note:
+	 * If the log space map feature is active, we stop deferring
+	 * frees to the next TXG and therefore running this function
+	 * would be considered a no-op as spa_deferred_bpobj should
+	 * not have any entries.
+	 *
+	 * That said we run this function anyway (instead of returning
+	 * immediately) for the edge-case scenario where we just
+	 * activated the log space map feature in this TXG but we have
+	 * deferred frees from the previous TXG.
+	 */
 	zio_t *zio = zio_root(spa, NULL, NULL, 0);
 	VERIFY3U(bpobj_iterate(&spa->spa_deferred_bpobj,
 	    spa_free_sync_cb, zio, tx), ==, 0);
@@ -8187,7 +8265,14 @@ spa_sync_iterate_to_convergence(spa_t *spa, dmu_tx_t *tx)
 		spa_errlog_sync(spa, txg);
 		dsl_pool_sync(dp, txg);
 
-		if (pass < zfs_sync_pass_deferred_free) {
+		if (pass < zfs_sync_pass_deferred_free ||
+		    spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP)) {
+			/*
+			 * If the log space map feature is active we don't
+			 * care about deferred frees and the deferred bpobj
+			 * as the log space map should effectively have the
+			 * same results (i.e. appending only to one object).
+			 */
 			spa_sync_frees(spa, free_bpl, tx);
 		} else {
 			/*
@@ -8204,6 +8289,8 @@ spa_sync_iterate_to_convergence(spa_t *spa, dmu_tx_t *tx)
 		svr_sync(spa, tx);
 		spa_sync_upgrades(spa, tx);
 
+		spa_flush_metaslabs(spa, tx);
+
 		vdev_t *vd = NULL;
 		while ((vd = txg_list_remove(&spa->spa_vdev_txg_list, txg))
 		    != NULL)
@@ -8453,6 +8540,7 @@ spa_sync(spa_t *spa, uint64_t txg)
 	while ((vd = txg_list_remove(&spa->spa_vdev_txg_list, TXG_CLEAN(txg)))
 	    != NULL)
 		vdev_sync_done(vd, txg);
+	spa_sync_close_syncing_log_sm(spa);
 
 	spa_update_dspace(spa);
@@ -8639,6 +8727,21 @@ spa_has_active_shared_spare(spa_t *spa)
 	return (B_FALSE);
 }
 
+uint64_t
+spa_total_metaslabs(spa_t *spa)
+{
+	vdev_t *rvd = spa->spa_root_vdev;
+
+	uint64_t m = 0;
+	for (uint64_t c = 0; c < rvd->vdev_children; c++) {
+		vdev_t *vd = rvd->vdev_child[c];
+		if (!vdev_is_concrete(vd))
+			continue;
+		m += vd->vdev_ms_count;
+	}
+	return (m);
+}
+
 sysevent_t *
 spa_event_create(spa_t *spa, vdev_t *vd, nvlist_t *hist_nvl, const char *name)
 {
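
spa_total_metaslabs() feeds the sizing of the pool-wide limit on unflushed
log blocks. A simplified sketch of that relationship (the tunables
zfs_unflushed_log_block_min/max/pct come from this project; the exact
clamping lives in spa_log_spacemap.c, so treat this as an illustration
rather than the verbatim implementation):

	/*
	 * Let the on-disk log grow to a percentage (default 400%) of
	 * the pool's metaslab count, clamped between a floor and a
	 * ceiling expressed in log blocks.
	 */
	static uint64_t
	example_log_sm_blocklimit(spa_t *spa)
	{
		uint64_t limit =
		    spa_total_metaslabs(spa) * zfs_unflushed_log_block_pct / 100;
		return (MIN(zfs_unflushed_log_block_max,
		    MAX(zfs_unflushed_log_block_min, limit)));
	}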

File diff suppressed because it is too large.

---- next file ----

@@ -20,7 +20,7 @@
  */
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2011, 2019 by Delphix. All rights reserved.
  * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
  * Copyright (c) 2014 Spectra Logic Corporation, All rights reserved.
  * Copyright 2013 Saso Kiselkov. All rights reserved.
@@ -64,7 +64,7 @@
 /*
  * SPA locking
  *
- * There are four basic locks for managing spa_t structures:
+ * There are three basic locks for managing spa_t structures:
  *
  * spa_namespace_lock (global mutex)
 *
@@ -613,6 +613,15 @@ spa_deadman(void *arg)
 	    MSEC_TO_TICK(zfs_deadman_checktime_ms));
 }
 
+int
+spa_log_sm_sort_by_txg(const void *va, const void *vb)
+{
+	const spa_log_sm_t *a = va;
+	const spa_log_sm_t *b = vb;
+
+	return (AVL_CMP(a->sls_txg, b->sls_txg));
+}
+
 /*
  * Create an uninitialized spa_t with the given name. Requires
 * spa_namespace_lock. The caller must ensure that the spa_t doesn't already
@@ -640,6 +649,7 @@ spa_add(const char *name, nvlist_t *config, const char *altroot)
 	mutex_init(&spa->spa_suspend_lock, NULL, MUTEX_DEFAULT, NULL);
 	mutex_init(&spa->spa_vdev_top_lock, NULL, MUTEX_DEFAULT, NULL);
 	mutex_init(&spa->spa_feat_stats_lock, NULL, MUTEX_DEFAULT, NULL);
+	mutex_init(&spa->spa_flushed_ms_lock, NULL, MUTEX_DEFAULT, NULL);
 
 	cv_init(&spa->spa_async_cv, NULL, CV_DEFAULT, NULL);
 	cv_init(&spa->spa_evicting_os_cv, NULL, CV_DEFAULT, NULL);
@@ -685,6 +695,12 @@ spa_add(const char *name, nvlist_t *config, const char *altroot)
 		avl_create(&spa->spa_alloc_trees[i], zio_bookmark_compare,
 		    sizeof (zio_t), offsetof(zio_t, io_alloc_node));
 	}
+	avl_create(&spa->spa_metaslabs_by_flushed, metaslab_sort_by_flushed,
+	    sizeof (metaslab_t), offsetof(metaslab_t, ms_spa_txg_node));
+	avl_create(&spa->spa_sm_logs_by_txg, spa_log_sm_sort_by_txg,
+	    sizeof (spa_log_sm_t), offsetof(spa_log_sm_t, sls_node));
+	list_create(&spa->spa_log_summary, sizeof (log_summary_entry_t),
+	    offsetof(log_summary_entry_t, lse_node));
 
 	/*
 	 * Every pool starts with the default cachefile
@@ -748,7 +764,7 @@ spa_remove(spa_t *spa)
 	spa_config_dirent_t *dp;
 
 	ASSERT(MUTEX_HELD(&spa_namespace_lock));
-	ASSERT(spa->spa_state == POOL_STATE_UNINITIALIZED);
+	ASSERT(spa_state(spa) == POOL_STATE_UNINITIALIZED);
 	ASSERT3U(zfs_refcount_count(&spa->spa_refcount), ==, 0);
 
 	nvlist_free(spa->spa_config_splitting);
@@ -775,6 +791,9 @@ spa_remove(spa_t *spa)
 	kmem_free(spa->spa_alloc_trees, spa->spa_alloc_count *
 	    sizeof (avl_tree_t));
 
+	avl_destroy(&spa->spa_metaslabs_by_flushed);
+	avl_destroy(&spa->spa_sm_logs_by_txg);
+	list_destroy(&spa->spa_log_summary);
 	list_destroy(&spa->spa_config_list);
 	list_destroy(&spa->spa_leaf_list);
@@ -799,6 +818,7 @@ spa_remove(spa_t *spa)
 	cv_destroy(&spa->spa_scrub_io_cv);
 	cv_destroy(&spa->spa_suspend_cv);
 
+	mutex_destroy(&spa->spa_flushed_ms_lock);
 	mutex_destroy(&spa->spa_async_lock);
 	mutex_destroy(&spa->spa_errlist_lock);
 	mutex_destroy(&spa->spa_errlog_lock);
@@ -2570,6 +2590,12 @@ spa_missing_tvds_allowed(spa_t *spa)
 	return (spa->spa_missing_tvds_allowed);
 }
 
+space_map_t *
+spa_syncing_log_sm(spa_t *spa)
+{
+	return (spa->spa_syncing_log_sm);
+}
+
 void
 spa_set_missing_tvds(spa_t *spa, uint64_t missing)
 {

---- next file ----

@@ -23,7 +23,7 @@
  * Use is subject to license terms.
  */
 /*
- * Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+ * Copyright (c) 2012, 2019 by Delphix. All rights reserved.
 */
 
 #include <sys/zfs_context.h>
@@ -1067,3 +1067,11 @@ space_map_length(space_map_t *sm)
 {
 	return (sm != NULL ? sm->sm_phys->smp_length : 0);
 }
+
+uint64_t
+space_map_nblocks(space_map_t *sm)
+{
+	if (sm == NULL)
+		return (0);
+	return (DIV_ROUND_UP(space_map_length(sm), sm->sm_blksz));
+}
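
space_map_nblocks() just rounds the on-disk length up to whole blocks. A
worked example with made-up numbers:

	/* Hypothetical: a space map of 1 MiB written in 4K blocks. */
	uint64_t length = 1048576;	/* space_map_length(sm) */
	uint64_t blksz = 4096;		/* sm->sm_blksz */
	uint64_t nblocks = (length + blksz - 1) / blksz;	/* 256 */
	/* One extra byte (length = 1048577) would round up to 257. */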

---- next file ----

@@ -21,7 +21,7 @@
 /*
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  * Portions Copyright 2011 Martin Matuska
- * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
+ * Copyright (c) 2012, 2019 by Delphix. All rights reserved.
 */
 
 #include <sys/zfs_context.h>
@@ -272,7 +272,7 @@ txg_sync_stop(dsl_pool_t *dp)
 	ASSERT3U(tx->tx_threads, ==, 2);
 
 	/*
-	 * We need to ensure that we've vacated the deferred space_maps.
+	 * We need to ensure that we've vacated the deferred metaslab trees.
 	 */
 	txg_wait_synced(dp, tx->tx_open_txg + TXG_DEFER_SIZE);

---- next file ----

@@ -76,7 +76,7 @@ int vdev_validate_skip = B_FALSE;
  * Since the DTL space map of a vdev is not expected to have a lot of
  * entries, we default its block size to 4K.
  */
-int vdev_dtl_sm_blksz = (1 << 12);
+int zfs_vdev_dtl_sm_blksz = (1 << 12);
 
 /*
  * Rate limit slow IO (delay) events to this many per second.
@@ -99,7 +99,7 @@ int zfs_scan_ignore_errors = 0;
  * the end of each transaction can benefit from a higher I/O bandwidth
  * (e.g. vdev_obsolete_sm), thus we default their block size to 128K.
 */
-int vdev_standard_sm_blksz = (1 << 17);
+int zfs_vdev_standard_sm_blksz = (1 << 17);
 
 /*
  * Tunable parameter for debugging or performance analysis. Setting this
@@ -924,6 +924,7 @@ vdev_free(vdev_t *vd)
 	if (vd->vdev_mg != NULL) {
 		vdev_metaslab_fini(vd);
 		metaslab_group_destroy(vd->vdev_mg);
+		vd->vdev_mg = NULL;
 	}
 
 	ASSERT0(vd->vdev_stat.vs_space);
@@ -1353,6 +1354,13 @@ vdev_metaslab_init(vdev_t *vd, uint64_t txg)
 	if (txg == 0)
 		spa_config_exit(spa, SCL_ALLOC, FTAG);
 
+	/*
+	 * Regardless whether this vdev was just added or it is being
+	 * expanded, the metaslab count has changed. Recalculate the
+	 * block limit.
+	 */
+	spa_log_sm_set_blocklimit(spa);
+
 	return (0);
 }
@@ -2867,7 +2875,7 @@ vdev_dtl_sync(vdev_t *vd, uint64_t txg)
 	if (vd->vdev_dtl_sm == NULL) {
 		uint64_t new_object;
 
-		new_object = space_map_alloc(mos, vdev_dtl_sm_blksz, tx);
+		new_object = space_map_alloc(mos, zfs_vdev_dtl_sm_blksz, tx);
 		VERIFY3U(new_object, !=, 0);
 
 		VERIFY0(space_map_open(&vd->vdev_dtl_sm, mos, new_object,
@@ -2881,7 +2889,7 @@ vdev_dtl_sync(vdev_t *vd, uint64_t txg)
 	range_tree_walk(rt, range_tree_add, rtsync);
 	mutex_exit(&vd->vdev_dtl_lock);
 
-	space_map_truncate(vd->vdev_dtl_sm, vdev_dtl_sm_blksz, tx);
+	space_map_truncate(vd->vdev_dtl_sm, zfs_vdev_dtl_sm_blksz, tx);
 	space_map_write(vd->vdev_dtl_sm, rtsync, SM_ALLOC, SM_NO_VDEVID, tx);
 
 	range_tree_vacate(rtsync, NULL, NULL);
@@ -3172,6 +3180,25 @@ vdev_validate_aux(vdev_t *vd)
 	return (0);
 }
 
+static void
+vdev_destroy_ms_flush_data(vdev_t *vd, dmu_tx_t *tx)
+{
+	objset_t *mos = spa_meta_objset(vd->vdev_spa);
+
+	if (vd->vdev_top_zap == 0)
+		return;
+
+	uint64_t object = 0;
+	int err = zap_lookup(mos, vd->vdev_top_zap,
+	    VDEV_TOP_ZAP_MS_UNFLUSHED_PHYS_TXGS, sizeof (uint64_t), 1, &object);
+	if (err == ENOENT)
+		return;
+
+	VERIFY0(dmu_object_free(mos, object, tx));
+	VERIFY0(zap_remove(mos, vd->vdev_top_zap,
+	    VDEV_TOP_ZAP_MS_UNFLUSHED_PHYS_TXGS, tx));
+}
+
 /*
  * Free the objects used to store this vdev's spacemaps, and the array
  * that points to them.
 */
@@ -3199,6 +3226,7 @@ vdev_destroy_spacemaps(vdev_t *vd, dmu_tx_t *tx)
 	kmem_free(smobj_array, array_bytes);
 	VERIFY0(dmu_object_free(mos, vd->vdev_ms_array, tx));
+	vdev_destroy_ms_flush_data(vd, tx);
 	vd->vdev_ms_array = 0;
 }
@@ -4762,6 +4790,10 @@ module_param(zfs_vdev_default_ms_count, int, 0644);
 MODULE_PARM_DESC(zfs_vdev_default_ms_count,
 	"Target number of metaslabs per top-level vdev");
 
+module_param(zfs_vdev_default_ms_shift, int, 0644);
+MODULE_PARM_DESC(zfs_vdev_default_ms_shift,
+	"Default limit for metaslab size");
+
 module_param(zfs_vdev_min_ms_count, int, 0644);
 MODULE_PARM_DESC(zfs_vdev_min_ms_count,
 	"Minimum number of metaslabs per top-level vdev");

---- next file ----

@@ -16,6 +16,7 @@
 /*
  * Copyright (c) 2014, 2017 by Delphix. All rights reserved.
  * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
+ * Copyright (c) 2014, 2019 by Delphix. All rights reserved.
 */
 
 #include <sys/zfs_context.h>
@@ -825,7 +826,7 @@ vdev_indirect_sync_obsolete(vdev_t *vd, dmu_tx_t *tx)
 	VERIFY0(vdev_obsolete_sm_object(vd, &obsolete_sm_object));
 	if (obsolete_sm_object == 0) {
 		obsolete_sm_object = space_map_alloc(spa->spa_meta_objset,
-		    vdev_standard_sm_blksz, tx);
+		    zfs_vdev_standard_sm_blksz, tx);
 
 		ASSERT(vd->vdev_top_zap != 0);
 		VERIFY0(zap_add(vd->vdev_spa->spa_meta_objset, vd->vdev_top_zap,

---- next file ----

@@ -1203,6 +1203,7 @@ vdev_remove_complete(spa_t *spa)
 		vdev_metaslab_fini(vd);
 		metaslab_group_destroy(vd->vdev_mg);
 		vd->vdev_mg = NULL;
+		spa_log_sm_set_blocklimit(spa);
 	}
 	ASSERT0(vd->vdev_stat.vs_space);
 	ASSERT0(vd->vdev_stat.vs_dspace);
@@ -1461,6 +1462,10 @@ spa_vdev_remove_thread(void *arg)
 			VERIFY0(space_map_load(msp->ms_sm,
 			    svr->svr_allocd_segs, SM_ALLOC));
 
+			range_tree_walk(msp->ms_unflushed_allocs,
+			    range_tree_add, svr->svr_allocd_segs);
+			range_tree_walk(msp->ms_unflushed_frees,
+			    range_tree_remove, svr->svr_allocd_segs);
 			range_tree_walk(msp->ms_freeing,
 			    range_tree_remove, svr->svr_allocd_segs);
@@ -1685,6 +1690,11 @@ spa_vdev_remove_cancel_sync(void *arg, dmu_tx_t *tx)
 			mutex_enter(&svr->svr_lock);
 			VERIFY0(space_map_load(msp->ms_sm,
 			    svr->svr_allocd_segs, SM_ALLOC));
+
+			range_tree_walk(msp->ms_unflushed_allocs,
+			    range_tree_add, svr->svr_allocd_segs);
+			range_tree_walk(msp->ms_unflushed_frees,
+			    range_tree_remove, svr->svr_allocd_segs);
 			range_tree_walk(msp->ms_freeing,
 			    range_tree_remove, svr->svr_allocd_segs);
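
The two walks added in both hunks encode how a metaslab's precise set of
allocated segments is rebuilt when the log space map feature is in use:
the flushed on-disk space map contents, plus unflushed allocations, minus
unflushed frees. A sketch with a hypothetical helper name (the calls
themselves are the same ones used in the hunks above):

	/*
	 * Hypothetical sketch: rebuild the allocated-segments view of
	 * a metaslab into "segs" when some changes are still only in
	 * the pool-wide log.
	 */
	static void
	example_load_allocd_segs(metaslab_t *msp, range_tree_t *segs)
	{
		/* Start from what has been flushed to the space map... */
		VERIFY0(space_map_load(msp->ms_sm, segs, SM_ALLOC));

		/* ...then replay the changes still sitting in the log. */
		range_tree_walk(msp->ms_unflushed_allocs,
		    range_tree_add, segs);
		range_tree_walk(msp->ms_unflushed_frees,
		    range_tree_remove, segs);
	}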
@@ -1813,19 +1823,14 @@ vdev_remove_make_hole_and_free(vdev_t *vd)
 	uint64_t id = vd->vdev_id;
 	spa_t *spa = vd->vdev_spa;
 	vdev_t *rvd = spa->spa_root_vdev;
-	boolean_t last_vdev = (id == (rvd->vdev_children - 1));
 
 	ASSERT(MUTEX_HELD(&spa_namespace_lock));
 	ASSERT(spa_config_held(spa, SCL_ALL, RW_WRITER) == SCL_ALL);
 
 	vdev_free(vd);
 
-	if (last_vdev) {
-		vdev_compact_children(rvd);
-	} else {
-		vd = vdev_alloc_common(spa, id, 0, &vdev_hole_ops);
-		vdev_add_child(rvd, vd);
-	}
+	vd = vdev_alloc_common(spa, id, 0, &vdev_hole_ops);
+	vdev_add_child(rvd, vd);
 
 	vdev_config_dirty(rvd);
 
 	/*
@@ -1887,7 +1892,28 @@ spa_vdev_remove_log(vdev_t *vd, uint64_t *txg)
 	vdev_dirty_leaves(vd, VDD_DTL, *txg);
 	vdev_config_dirty(vd);
 
+	/*
+	 * When the log space map feature is enabled we look at
+	 * the vdev's top_zap to find the on-disk flush data of
+	 * the metaslab we just flushed. Thus, while removing a
+	 * log vdev we make sure to call vdev_metaslab_fini()
+	 * first, which removes all metaslabs of this vdev from
+	 * spa_metaslabs_by_flushed before vdev_remove_empty()
+	 * destroys the top_zap of this log vdev.
+	 *
+	 * This avoids the scenario where we flush a metaslab
+	 * from the log vdev being removed that doesn't have a
+	 * top_zap and end up failing to lookup its on-disk flush
+	 * data.
+	 *
+	 * We don't call metaslab_group_destroy() right away
+	 * though (it will be called in vdev_free() later) as
+	 * during metaslab_sync() of metaslabs from other vdevs
+	 * we may touch the metaslab group of this vdev through
+	 * metaslab_class_histogram_verify()
+	 */
 	vdev_metaslab_fini(vd);
+	spa_log_sm_set_blocklimit(spa);
 
 	spa_vdev_config_exit(spa, NULL, *txg, 0, FTAG);

---- next file ----

@@ -1117,10 +1117,16 @@ zio_free(spa_t *spa, uint64_t txg, const blkptr_t *bp)
 	 * deferred, and which will not need to do a read (i.e. not GANG or
 	 * DEDUP), can be processed immediately. Otherwise, put them on the
 	 * in-memory list for later processing.
+	 *
+	 * Note that we only defer frees after zfs_sync_pass_deferred_free
+	 * when the log space map feature is disabled. [see relevant comment
+	 * in spa_sync_iterate_to_convergence()]
 	 */
-	if (BP_IS_GANG(bp) || BP_GET_DEDUP(bp) ||
+	if (BP_IS_GANG(bp) ||
+	    BP_GET_DEDUP(bp) ||
 	    txg != spa->spa_syncing_txg ||
-	    spa_sync_pass(spa) >= zfs_sync_pass_deferred_free) {
+	    (spa_sync_pass(spa) >= zfs_sync_pass_deferred_free &&
+	    !spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP))) {
 		bplist_append(&spa->spa_free_bplist[txg & TXG_MASK], bp);
 	} else {
 		VERIFY0(zio_wait(zio_free_sync(NULL, spa, txg, bp, 0)));
@@ -1136,7 +1142,6 @@ zio_free_sync(zio_t *pio, spa_t *spa, uint64_t txg, const blkptr_t *bp,
 	ASSERT(!BP_IS_HOLE(bp));
 	ASSERT(spa_syncing_txg(spa) == txg);
-	ASSERT(spa_sync_pass(spa) < zfs_sync_pass_deferred_free);
 
 	if (BP_IS_EMBEDDED(bp))
 		return (zio_null(pio, spa, NULL, NULL, NULL, 0));
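
To make the new control flow easier to scan, the deferral test in
zio_free() can be restated as a predicate. The helper below is a
hypothetical restatement for illustration, not code from the patch:

	/*
	 * A free is deferred to the in-memory bplist when it needs a
	 * read (gang/dedup), belongs to a different txg, or when we
	 * are past the deferral pass and the log space map feature is
	 * not active. With the feature active, late-pass frees are
	 * still processed immediately.
	 */
	static boolean_t
	example_free_is_deferred(spa_t *spa, uint64_t txg, const blkptr_t *bp)
	{
		return (BP_IS_GANG(bp) || BP_GET_DEDUP(bp) ||
		    txg != spa->spa_syncing_txg ||
		    (spa_sync_pass(spa) >= zfs_sync_pass_deferred_free &&
		    !spa_feature_is_active(spa, SPA_FEATURE_LOG_SPACEMAP)));
	}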

---- next file ----

@@ -927,3 +927,9 @@ tags = ['functional', 'zvol', 'zvol_swap']
 [tests/functional/libzfs]
 tests = ['many_fds', 'libzfs_input']
 tags = ['functional', 'libzfs']
+
+[tests/functional/log_spacemap]
+tests = ['log_spacemap_import_logs']
+pre =
+post =
+tags = ['functional', 'log_spacemap']

---- next file ----

@@ -33,8 +33,8 @@ SUBDIRS = \
 	largest_pool \
 	libzfs \
 	limits \
-	pyzfs \
 	link_count \
+	log_spacemap \
 	migration \
 	mmap \
 	mmp \
@@ -50,6 +50,7 @@ SUBDIRS = \
 	privilege \
 	procfs \
 	projectquota \
+	pyzfs \
 	quota \
 	raidz \
 	redacted_send \

---- next file ----

@@ -58,7 +58,7 @@ function testdbufstat # stat_name dbufstat_filter
 	from_dbufstat=$(grep -w "$name" "$DBUFSTATS_FILE" | awk '{ print $3 }')
 	from_dbufs=$(dbufstat -bxn -i "$DBUFS_FILE" "$filter" | wc -l)
 
-	within_tolerance $from_dbufstat $from_dbufs 9 \
+	within_tolerance $from_dbufstat $from_dbufs 15 \
 		|| log_fail "Stat $name exceeded tolerance"
 }

---- next file ----

@@ -79,6 +79,7 @@ typeset -a properties=(
     "feature@redaction_bookmarks"
     "feature@redacted_datasets"
     "feature@bookmark_written"
+    "feature@log_spacemap"
 )
 
 # Additional properties added for Linux.

---- next file ----

@@ -0,0 +1,2 @@
+pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/log_spacemap
+dist_pkgdata_SCRIPTS = log_spacemap_import_logs.ksh

---- next file ----

@@ -0,0 +1,81 @@
+#! /bin/ksh -p
+#
+# CDDL HEADER START
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2019 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+
+#
+# DESCRIPTION:
+#	Log spacemaps are generally destroyed at export in order to
+#	not induce performance overheads at import time. As a result,
+#	the log spacemap codepaths that read the logs in import times
+#	are not tested outside of ztest and pools with DEBUG bits doing
+#	many imports/exports while running the test suite.
+#
+#	This test uses an internal tunable and forces ZFS to keep the
+#	log spacemaps at export, and then re-imports the pool, thus
+#	providing explicit testing of those codepaths. It also uses
+#	another tunable to load all the metaslabs when the pool is
+#	re-imported so more assertions and verifications will be hit.
+#
+# STRATEGY:
+#	1. Create pool.
+#	2. Do a couple of writes to generate some data for spacemap logs.
+#	3. Set tunable to keep logs after export.
+#	4. Export pool and verify that there are logs with zdb.
+#	5. Set tunable to load all metaslabs at import.
+#	6. Import pool.
+#	7. Reset tunables.
+#
+
+verify_runnable "global"
+
+function cleanup
+{
+	log_must set_tunable64 zfs_keep_log_spacemaps_at_export 0
+	log_must set_tunable64 metaslab_debug_load 0
+	if poolexists $LOGSM_POOL; then
+		log_must zpool destroy -f $LOGSM_POOL
+	fi
+}
+log_onexit cleanup
+
+LOGSM_POOL="logsm_import"
+TESTDISK="$(echo $DISKS | cut -d' ' -f1)"
+
+log_must zpool create -o cachefile=none -f $LOGSM_POOL $TESTDISK
+log_must zfs create $LOGSM_POOL/fs
+
+log_must dd if=/dev/urandom of=/$LOGSM_POOL/fs/00 bs=128k count=10
+log_must sync
+log_must dd if=/dev/urandom of=/$LOGSM_POOL/fs/00 bs=128k count=10
+log_must sync
+
+log_must set_tunable64 zfs_keep_log_spacemaps_at_export 1
+log_must zpool export $LOGSM_POOL
+
+LOGSM_COUNT=$(zdb -m -e $LOGSM_POOL | grep "Log Spacemap object" | wc -l)
+if (( LOGSM_COUNT == 0 )); then
+	log_fail "Pool does not have any log spacemaps after being exported"
+fi
+
+log_must set_tunable64 metaslab_debug_load 1
+log_must zpool import $LOGSM_POOL
+
+log_pass "Log spacemaps imported with no errors"

---- next file ----

@@ -30,7 +30,7 @@ function reset
 	default_setup_noexit "$DISKS" "true"
 	log_onexit reset
-	log_must set_tunable64 zfs_condense_indirect_commit_entry_delay_ms 1000
+	log_must set_tunable64 zfs_condense_indirect_commit_entry_delay_ms 5000
 	log_must set_tunable64 zfs_condense_min_mapping_bytes 1
 
 	log_must zfs set recordsize=512 $TESTPOOL/$TESTFS
@@ -82,7 +82,7 @@ log_mustnot vdevs_in_pool $TESTPOOL $REMOVEDISK
 log_must stride_dd -i /dev/urandom -o $TESTDIR/file -b 512 -c 20 -s 1024
 
 sync_pool $TESTPOOL
-sleep 5
+sleep 4
 sync_pool $TESTPOOL
 
 log_must zpool export $TESTPOOL
 zdb -e -p $REMOVEDISKPATH $TESTPOOL | grep 'Condensing indirect vdev' || \