commit 93e28d661e
= Motivation

At Delphix we've seen a lot of customer systems where fragmentation is over 75% and random writes take a performance hit because a lot of time is spent on I/Os that update on-disk space accounting metadata. Specifically, we've seen cases where 20% to 40% of sync time is spent after sync pass 1 and ~30% of the I/Os on the system are spent updating spacemaps.

The problem is that these pools have existed long enough that we've touched almost every metaslab at least once, and random writes scatter frees across all metaslabs every TXG, thus appending to their spacemaps and resulting in many I/Os. To give an example, assuming that every VDEV has 200 metaslabs and our writes fit within a single spacemap block (generally 4K), we have 200 I/Os. Then if we assume 2 levels of indirection, we need 400 additional I/Os, and since we are talking about metadata for which we keep 2 extra copies for redundancy we need to triple that number, leading to a total of 1800 I/Os per VDEV every TXG.

We could try to decrease the number of metaslabs so we have fewer I/Os per TXG, but then each metaslab would cover a wider range on disk and thus would take more time to be loaded in memory from disk. In addition, after it's loaded, its range tree would consume more memory.

Another idea would be to just increase the spacemap block size, which would allow us to fit more entries within an I/O block, resulting in fewer I/Os per metaslab and a speedup in loading time. The problem is still that we don't deal with the number of I/Os going up as the number of metaslabs increases, and the fact is that we generally write a lot to a few metaslabs and a little to the rest of them. Thus, just increasing the block size would actually waste bandwidth because we won't be utilizing our bigger block size.

= About this patch

This patch introduces the Log Spacemap project, which provides the solution to the above problem while taking into account all the aforementioned tradeoffs. The details on how it achieves that can be found in the references sections below and in the code (see Big Theory Statement in spa_log_spacemap.c).

Even though the change is fairly constrained within the metaslab and lower-level SPA codepaths, there is a side-change that is user-facing: VDEV IDs from VDEV holes will no longer be reused. To give some background and reasoning for this, when a log device is removed and its VDEV structure was replaced with a hole (or was compacted, if at the end of the vdev array), its vdev_id could be reused by devices added after that. Now that the pool-wide space maps record the vdev ID, this behavior can cause problems (e.g. is this entry referring to a segment in the new vdev or the removed log?). Thus, to simplify things, the ID-reuse behavior is gone and vdev IDs for top-level vdevs are now truly unique within a pool.

= Testing

The illumos implementation of this feature has been used internally for a year and has been in production for ~6 months. For this patch specifically there don't seem to be any regressions introduced to ZTS, and I have been running zloop for a week without any related problems.

= Performance Analysis (Linux Specific)

All performance results and analysis for illumos can be found in the links of the references. Redoing the same experiments in Linux gave similar results. Below are the specifics of the Linux run.

After the pool reached stable state, the percentage of the time spent in pass 1 per TXG was 64% on average for the stock bits, while the log spacemap bits stayed at 95% during the experiment (graph: sdimitro.github.io/img/linux-lsm/PercOfSyncInPassOne.png).

Sync times per TXG were 37.6 seconds on average for the stock bits and 22.7 seconds for the log spacemap bits (related graph: sdimitro.github.io/img/linux-lsm/SyncTimePerTXG.png). As a result the log spacemap bits were able to push more TXGs, which is also the reason why all graphs quantified per TXG have more entries for the log spacemap bits.

Another interesting aspect in terms of TXG syncs is that the stock bits had 22% of their TXGs reach sync pass 7, 55% reach sync pass 8, and 20% reach sync pass 9. The log space map bits reached sync pass 4 in 79% of their TXGs, sync pass 7 in 19%, and sync pass 8 in 1%. This emphasizes the fact that not only do we spend less time on metadata, but we also iterate fewer times to convergence in spa_sync() dirtying objects.
[related graphs:
stock - sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGStock.png
lsm - sdimitro.github.io/img/linux-lsm/NumberOfPassesPerTXGLSM.png]

Finally, the improvement in IOPS that userland gains from the change is approximately 40%. There is a consistent win in IOPS, as you can see from the graphs below, but the absolute amount of improvement that the log spacemap gives varies within each minute interval.
sdimitro.github.io/img/linux-lsm/StockVsLog3Days.png
sdimitro.github.io/img/linux-lsm/StockVsLog10Hours.png

= Porting to Other Platforms

For people that want to port this commit to other platforms, below is a list of ZoL commits that this patch depends on:

Make zdb results for checkpoint tests consistent
db587941c5

Update vdev_is_spacemap_addressable() for new spacemap encoding
419ba59145

Simplify spa_sync by breaking it up to smaller functions
8dc2197b7b

Factor metaslab_load_wait() in metaslab_load()
b194fab0fb

Rename range_tree_verify to range_tree_verify_not_present
df72b8bebe

Change target size of metaslabs from 256GB to 16GB
c853f382db

zdb -L should skip leak detection altogether
21e7cf5da8

vs_alloc can underflow in L2ARC vdevs
7558997d2f

Simplify log vdev removal code
6c926f426a

Get rid of space_map_update() for ms_synced_length
425d3237ee

Introduce auxiliary metaslab histograms
928e8ad47d

Error path in metaslab_load_impl() forgets to drop ms_sync_lock
8eef997679

= References

Background, Motivation, and Internals of the Feature
- OpenZFS 2017 Presentation: youtu.be/jj2IxRkl5bQ
- Slides: slideshare.net/SerapheimNikolaosDim/zfs-log-spacemaps-project

Flushing Algorithm Internals & Performance Results (Illumos Specific)
- Blogpost: sdimitro.github.io/post/zfs-lsm-flushing/
- OpenZFS 2018 Presentation: youtu.be/x6D2dHRjkxw
- Slides: slideshare.net/SerapheimNikolaosDim/zfs-log-spacemap-flushing-algorithm

Upstream Delphix Issues:
DLPX-51539, DLPX-59659, DLPX-57783, DLPX-61438, DLPX-41227, DLPX-59320, DLPX-63385

Reviewed-by: Sean Eric Fagan <sef@ixsystems.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Closes #8442
sys/range_tree.h (133 lines, 4.8 KiB, C)
/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Copyright (c) 2013, 2019 by Delphix. All rights reserved.
 */

#ifndef _SYS_RANGE_TREE_H
#define	_SYS_RANGE_TREE_H

#include <sys/avl.h>
#include <sys/dmu.h>

#ifdef	__cplusplus
extern "C" {
#endif

#define	RANGE_TREE_HISTOGRAM_SIZE	64

typedef struct range_tree_ops range_tree_ops_t;

/*
 * Note: the range_tree may not be accessed concurrently; consumers
 * must provide external locking if required.
 */
typedef struct range_tree {
	avl_tree_t	rt_root;	/* offset-ordered segment AVL tree */
	uint64_t	rt_space;	/* sum of all segments in the map */
	uint64_t	rt_gap;		/* allowable inter-segment gap */
	range_tree_ops_t *rt_ops;

	/* rt_avl_compare should only be set if rt_arg is an AVL tree */
	void		*rt_arg;
	int (*rt_avl_compare)(const void *, const void *);

	/*
	 * The rt_histogram maintains a histogram of ranges. Each bucket,
	 * rt_histogram[i], contains the number of ranges whose size is:
	 * 2^i <= size of range in bytes < 2^(i+1)
	 */
	uint64_t	rt_histogram[RANGE_TREE_HISTOGRAM_SIZE];
} range_tree_t;
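/*
 * Illustrative sketch (an assumption for clarity, not part of the
 * original header): given the bucket definition above, a segment's
 * bucket index is the position of the highest set bit of its size,
 * e.g. using highbit64() from <sys/sysmacros.h>:
 *
 *	uint64_t size = rs->rs_end - rs->rs_start;
 *	int idx = highbit64(size) - 1;	-- 2^idx <= size < 2^(idx+1)
 *	rt->rt_histogram[idx]++;
 *
 * A 4K (2^12-byte) segment therefore lands in rt_histogram[12].
 */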

typedef struct range_seg {
	avl_node_t	rs_node;	/* AVL node */
	avl_node_t	rs_pp_node;	/* AVL picker-private node */
	uint64_t	rs_start;	/* starting offset of this segment */
	uint64_t	rs_end;		/* ending offset (non-inclusive) */
	uint64_t	rs_fill;	/* actual fill if gap mode is on */
} range_seg_t;
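/*
 * Illustrative note (an assumption based on the fields above, not part
 * of the original header): a segment spans [rs_start, rs_end), so its
 * size is rs_end - rs_start. When the tree was created with a nonzero
 * rt_gap, nearby segments may be bridged into one, with rs_fill
 * tracking the bytes actually added. For example, with rt_gap = 0x1000,
 * adding [0x0, 0x800) and then [0x1000, 0x1800) could leave a single
 * segment [0x0, 0x1800) whose rs_fill is 0x1000.
 */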

struct range_tree_ops {
	void	(*rtop_create)(range_tree_t *rt, void *arg);
	void	(*rtop_destroy)(range_tree_t *rt, void *arg);
	void	(*rtop_add)(range_tree_t *rt, range_seg_t *rs, void *arg);
	void	(*rtop_remove)(range_tree_t *rt, range_seg_t *rs, void *arg);
	void	(*rtop_vacate)(range_tree_t *rt, void *arg);
};
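/*
 * Illustrative sketch (an assumption, not part of the original header):
 * the ops vector lets a consumer shadow the tree's contents in its own
 * structure. rtop_add()/rtop_remove() fire on every segment insertion
 * and removal, and rtop_vacate() fires when the tree is emptied
 * wholesale. A consumer keeping a secondary AVL view in 'arg' might
 * supply callbacks such as:
 *
 *	static void
 *	my_rtop_add(range_tree_t *rt, range_seg_t *rs, void *arg)
 *	{
 *		avl_add((avl_tree_t *)arg, rs);
 *	}
 *
 * The rt_avl_ops vector declared at the bottom of this header is such
 * an implementation, keyed off rt_avl_compare.
 */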

typedef void range_tree_func_t(void *arg, uint64_t start, uint64_t size);

void range_tree_init(void);
void range_tree_fini(void);
range_tree_t *range_tree_create_impl(range_tree_ops_t *ops, void *arg,
    int (*avl_compare) (const void *, const void *), uint64_t gap);
range_tree_t *range_tree_create(range_tree_ops_t *ops, void *arg);
void range_tree_destroy(range_tree_t *rt);
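/*
 * Illustrative lifecycle sketch (an assumption, not part of the
 * original header). Note that range_tree_add()/range_tree_remove()
 * take the tree through a void * first argument, so their signatures
 * match range_tree_func_t and they can double as walk callbacks:
 *
 *	range_tree_t *rt = range_tree_create(NULL, NULL);
 *	range_tree_add(rt, 0x1000, 0x2000);	-- tracks [0x1000, 0x3000)
 *	range_tree_remove(rt, 0x2000, 0x1000);	-- tracks [0x1000, 0x2000)
 *	range_tree_vacate(rt, NULL, NULL);	-- empty before destroying
 *	range_tree_destroy(rt);
 */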

boolean_t range_tree_contains(range_tree_t *rt, uint64_t start, uint64_t size);
void range_tree_verify_not_present(range_tree_t *rt,
    uint64_t start, uint64_t size);
range_seg_t *range_tree_find(range_tree_t *rt, uint64_t start, uint64_t size);
void range_tree_resize_segment(range_tree_t *rt, range_seg_t *rs,
    uint64_t newstart, uint64_t newsize);
uint64_t range_tree_space(range_tree_t *rt);
uint64_t range_tree_numsegs(range_tree_t *rt);
boolean_t range_tree_is_empty(range_tree_t *rt);
void range_tree_swap(range_tree_t **rtsrc, range_tree_t **rtdst);
void range_tree_stat_verify(range_tree_t *rt);
uint64_t range_tree_min(range_tree_t *rt);
uint64_t range_tree_max(range_tree_t *rt);
uint64_t range_tree_span(range_tree_t *rt);

void range_tree_add(void *arg, uint64_t start, uint64_t size);
void range_tree_remove(void *arg, uint64_t start, uint64_t size);
void range_tree_remove_fill(range_tree_t *rt, uint64_t start, uint64_t size);
void range_tree_adjust_fill(range_tree_t *rt, range_seg_t *rs, int64_t delta);
void range_tree_clear(range_tree_t *rt, uint64_t start, uint64_t size);

void range_tree_vacate(range_tree_t *rt, range_tree_func_t *func, void *arg);
void range_tree_walk(range_tree_t *rt, range_tree_func_t *func, void *arg);
range_seg_t *range_tree_first(range_tree_t *rt);
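/*
 * Illustrative walk sketch (an assumption, not part of the original
 * header): range_tree_walk() invokes the callback once per segment
 * with (arg, rs_start, rs_end - rs_start), e.g.:
 *
 *	static void
 *	count_segs(void *arg, uint64_t start, uint64_t size)
 *	{
 *		(*(uint64_t *)arg)++;
 *	}
 *
 *	uint64_t nsegs = 0;
 *	range_tree_walk(rt, count_segs, &nsegs);
 *
 * range_tree_vacate() makes the same per-segment callbacks (when the
 * callback is non-NULL) and additionally empties the tree.
 */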

void range_tree_remove_xor_add_segment(uint64_t start, uint64_t end,
    range_tree_t *removefrom, range_tree_t *addto);
void range_tree_remove_xor_add(range_tree_t *rt, range_tree_t *removefrom,
    range_tree_t *addto);

void rt_avl_create(range_tree_t *rt, void *arg);
void rt_avl_destroy(range_tree_t *rt, void *arg);
void rt_avl_add(range_tree_t *rt, range_seg_t *rs, void *arg);
void rt_avl_remove(range_tree_t *rt, range_seg_t *rs, void *arg);
void rt_avl_vacate(range_tree_t *rt, void *arg);
extern struct range_tree_ops rt_avl_ops;

#ifdef	__cplusplus
}
#endif

#endif	/* _SYS_RANGE_TREE_H */