Fix long_free_dirty accounting for small files (#16264)

For files smaller than the recordsize, it's most likely that they don't
have L1 blocks. However, the current calculation will always return at
least 1 L1 block.

In this change, we check the dnode's level to determine whether it has
any L1 blocks, and return 0 if it doesn't. This reduces the chance of
unnecessary throttling when deleting a large number of small files.

Signed-off-by: Chunwei Chen <david.chen@nutanix.com>
Co-authored-by: Chunwei Chen <david.chen@nutanix.com>
Reviewed-by: Alexander Motin <mav@FreeBSD.org>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Author: Chunwei Chen, 2024-07-23 11:34:19 -07:00 (committed by GitHub)
Parent: 37275fd109
Commit: 9dfc5c4a0c

@@ -815,6 +815,13 @@ get_next_chunk(dnode_t *dn, uint64_t *start, uint64_t minimum, uint64_t *l1blks)
 	ASSERT3U(minimum, <=, *start);
 
+	/* dn_nlevels == 1 means we don't have any L1 blocks */
+	if (dn->dn_nlevels <= 1) {
+		*l1blks = 0;
+		*start = minimum;
+		return (0);
+	}
+
 	/*
 	 * Check if we can free the entire range assuming that all of the
 	 * L1 blocks in this range have data. If we can, we use this
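
For context on why a spurious L1 count matters: the long-free path
(dmu_free_long_range()) charges each chunk's freed L1 blocks against a
per-txg dirty-frees budget and throttles the delete once that budget is
exceeded, so counting one phantom L1 block per small file can stall bulk
deletes. The sketch below is illustrative only, not the actual OpenZFS
caller code; the helper name charge_free_chunk(), the global counters,
and the simplified budget values are assumptions made for the example.

	/*
	 * Illustrative sketch (not the real OpenZFS caller): how an
	 * l1blks count returned by get_next_chunk() might be charged
	 * against a per-txg "dirty frees" budget.  Before this change,
	 * a file with dn_nlevels == 1 (no L1 indirect blocks at all)
	 * was still charged one full L1 block per chunk, so deleting
	 * many small files could trip the throttle for no real work.
	 */
	#include <stdbool.h>
	#include <stdint.h>

	#define EXAMPLE_INDBLKSHIFT	17	/* assume 128 KiB indirect blocks */

	static uint64_t dirty_frees_charged;			/* bytes charged this txg */
	static uint64_t dirty_frees_threshold = 64ULL << 20;	/* example budget: 64 MiB */

	/*
	 * Charge one freed chunk's worth of L1 blocks; return true if
	 * the caller should back off before freeing the next chunk.
	 * With l1blks == 0 for small files, nothing is charged and no
	 * throttling is triggered.
	 */
	static bool
	charge_free_chunk(uint64_t l1blks)
	{
		dirty_frees_charged += l1blks << EXAMPLE_INDBLKSHIFT;
		return (dirty_frees_charged >= dirty_frees_threshold);
	}

Under this simplified accounting, deleting a large batch of
single-block files used to add 128 KiB of phantom "dirty frees" per
file and could hit the budget quickly; with l1blks reported as 0, the
charge is zero and the throttle is left for files that actually have
indirect blocks to free.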