Matthew Ahrens · aa755b3549 · 2021-01-21 15:12:54 -08:00
Set aside a metaslab for ZIL blocks
Mixing ZIL and normal allocations has several problems:

1. ZIL blocks are allocated, written to disk, and then freed a few
seconds later.  This leaves behind holes (free segments) where the
ZIL blocks used to be, which increases fragmentation and in turn
degrades performance.

2. When under moderate load, ZIL allocations are 128KB in size.  If the pool
is fairly fragmented, there may not be many free chunks of that size.
This causes ZFS to load more metaslabs to locate free segments of 128KB
or more.  The loading happens synchronously (from zil_commit()), and can
take around a second even if the metaslab's spacemap is cached in the
ARC.  All concurrent synchronous operations on this filesystem must wait
while the metaslab is loading.  This can cause a significant performance
impact.

3. If the pool is very fragmented, there may be zero free chunks of
128KB or more.  In this case, the ZIL falls back to txg_wait_synced(),
which has an enormous performance impact.

These problems can be eliminated by using a dedicated log device
("slog"), even one with the same performance characteristics as the
normal devices.

This change sets aside one metaslab from each top-level vdev to be
used preferentially for ZIL allocations (vdev_log_mg,
spa_embedded_log_class).  From an allocation perspective, this is
similar to having a dedicated log device, and it eliminates the
above-mentioned performance problems.

Log (ZIL) blocks can be allocated from the following locations (a
code sketch follows the list).  Each one is tried in order until the
allocation succeeds:
1. dedicated log vdevs, aka "slog" (spa_log_class)
2. embedded slog metaslabs (spa_embedded_log_class)
3. other metaslabs in normal vdevs (spa_normal_class)
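In code, the fallback looks roughly like the sketch below, modeled on
zio_alloc_zil().  The function name is hypothetical and the
metaslab_alloc() calls are abbreviated: the real interface also takes
hint block pointers, an allocation list, allocator state, and
ZIL-specific allocation flags.

/*
 * Simplified sketch of ZIL block allocation, modeled on
 * zio_alloc_zil().  The metaslab_alloc() argument lists are
 * abbreviated relative to the real interface.
 */
static int
zil_alloc_block_sketch(spa_t *spa, uint64_t txg, blkptr_t *bp,
    uint64_t size)
{
	int error;

	/* 1. Dedicated log vdevs ("slog"), if the pool has any. */
	error = metaslab_alloc(spa, spa_log_class(spa), size, bp, 1, txg);

	/* 2. The embedded slog metaslabs set aside by this change. */
	if (error != 0) {
		error = metaslab_alloc(spa, spa_embedded_log_class(spa),
		    size, bp, 1, txg);
	}

	/* 3. Last resort: any other metaslab in the normal class. */
	if (error != 0) {
		error = metaslab_alloc(spa, spa_normal_class(spa),
		    size, bp, 1, txg);
	}

	/*
	 * If all three classes fail, the ZIL gives up on allocating a
	 * log block and falls back to txg_wait_synced(), as described
	 * in problem #3 above.
	 */
	return (error);
}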

The space required for the embedded slog metaslabs is usually between
0.5% and 1.0% of the pool (a top-level vdev is divided into roughly
100-200 metaslabs by default, so a single metaslab represents 0.5-1%
of the vdev), and comes out of the existing 3.2% of "slop" space that
is not available for user data.

On an all-SSD system with 4TB of storage, 87% fragmentation, 60% capacity,
and recordsize=8k, testing shows a ~50% performance increase on random
8k sync writes.  On even more fragmented systems (which hit problem #3
above and call txg_wait_synced()), the performance improvement can be
arbitrarily large (>100x).

Reviewed-by: Serapheim Dimitropoulos <serapheim@delphix.com>
Reviewed-by: George Wilson <gwilson@delphix.com>
Reviewed-by: Don Brady <don.brady@delphix.com>
Reviewed-by: Mark Maybee <mark.maybee@delphix.com>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Closes #11389