Mirror of https://git.proxmox.com/git/mirror_zfs.git
Synced 2024-11-18 02:20:59 +03:00

Commit 1b50749ce9
Remove mc_lock use from metaslab_class_throttle_*(). The math there is based on refcounts and is therefore atomic, so the only possible race is between zfs_refcount_count() and zfs_refcount_add(). But in most cases metaslab_class_throttle_reserve() is called with the allocator lock held, which covers the race. In cases where the lock is not held, GANG_ALLOCATION() or METASLAB_MUST_RESERVE is set, and so we do not use zfs_refcount_count(). Even assuming some other, non-existent scenario, the worst that can happen from this race is that a few more I/Os reach allocation earlier, which is not a problem.

Move the locks and data of different allocators into different cache lines to avoid false sharing. Group the spa_alloc_* arrays together into a single array of aligned struct spa_alloc, spa_allocs. Align struct metaslab_class_allocator.

Reviewed-by: Paul Dagnelie <pcd@delphix.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Don Brady <don.brady@delphix.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #12314
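The commit message describes two independent ideas, so here are two minimal C sketches of them (not the actual OpenZFS code). The first illustrates a lock-free reservation built on an atomic counter, where the only race is between the read and the add, the benign race described above; the type `alloc_throttle_t`, the function `throttle_reserve()`, and its fields are hypothetical stand-ins for the ZFS internals.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct alloc_throttle {
	atomic_uint_fast64_t	reserved;	/* outstanding reservations */
	uint64_t		max;		/* throttle limit */
} alloc_throttle_t;

static bool
throttle_reserve(alloc_throttle_t *t, uint64_t n, bool must)
{
	/*
	 * The load and the add are two separate atomic operations, so
	 * another thread may reserve in between them.  The worst case is
	 * that a few extra I/Os get reserved early, which is the benign
	 * race the commit message accepts in exchange for dropping the
	 * class-wide lock.
	 */
	if (must || atomic_load(&t->reserved) + n <= t->max) {
		atomic_fetch_add(&t->reserved, n);
		return (true);
	}
	return (false);
}
```

The second sketches the false-sharing fix: per-allocator lock and state grouped into one cache-line-aligned struct, so threads working on different allocators never bounce the same cache line. Only the names `spa_alloc` and `spa_allocs` come from the commit message; the fields shown, the pthread mutex, the allocator count, and the 64-byte line size are assumptions for a self-contained example.

```c
#include <pthread.h>
#include <stdint.h>

#define	CACHE_LINE_SIZE	64	/* assumed cache line size */

typedef struct spa_alloc {
	pthread_mutex_t	spaa_lock;	/* per-allocator allocation lock */
	uint64_t	spaa_inflight;	/* hypothetical per-allocator state */
} __attribute__((aligned(CACHE_LINE_SIZE))) spa_alloc_t;

/*
 * One aligned element per allocator, replacing separate parallel
 * spa_alloc_* arrays, so each allocator's lock and data occupy their
 * own cache line.
 */
static spa_alloc_t spa_allocs[4];
```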
os/
sys/
.gitignore
cityhash.h
libnvpair.h
libuutil_common.h
libuutil_impl.h
libuutil.h
libzfs_core.h
libzfs.h
libzfsbootenv.h
libzutil.h
Makefile.am
thread_pool.h
zfeature_common.h
zfs_comutil.h
zfs_deleg.h
zfs_fletcher.h
zfs_namecheck.h
zfs_prop.h