Avoid some spa_has_pending_synctask() calls.

Since 8c4fb36a24 (PR #7795) spa_has_pending_synctask() takes two more
locks per write inside txg_all_lists_empty().  I am surprised those
pool-wide locks are not contended, but their operations are still
visible in CPU profiles under the contended vdev lock.

This commit slightly changes the vdev_queue_max_async_writes() flow so
that the function is not called when we are going to return max_active
anyway due to a high amount of dirty data.  This saves some CPU time
exactly when the pool is busy.
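
After the change the flow looks roughly like the sketch below (a
simplified version of the resulting function, assuming the usual
async-write tunables; not the verbatim source):

    /* Simplified sketch of the reordered flow. */
    static uint32_t
    vdev_queue_max_async_writes(spa_t *spa)
    {
            dsl_pool_t *dp = spa_get_dsl(spa);
            uint64_t min_bytes = zfs_dirty_data_max *
                zfs_vdev_async_write_active_min_dirty_percent / 100;
            uint64_t max_bytes = zfs_dirty_data_max *
                zfs_vdev_async_write_active_max_dirty_percent / 100;
            uint64_t dirty;

            if (dp == NULL)
                    return (zfs_vdev_async_write_max_active);

            /*
             * Check the dirty-data threshold first, so that
             * spa_has_pending_synctask() is only called when the amount
             * of dirty data alone does not already force max_active.
             */
            dirty = dp->dp_dirty_total;
            if (dirty > max_bytes || spa_has_pending_synctask(spa))
                    return (zfs_vdev_async_write_max_active);
            if (dirty < min_bytes)
                    return (zfs_vdev_async_write_min_active);

            /* Linearly interpolate between min_active and max_active. */
            return ((dirty - min_bytes) *
                (zfs_vdev_async_write_max_active -
                zfs_vdev_async_write_min_active) / (max_bytes - min_bytes) +
                zfs_vdev_async_write_min_active);
    }

Thanks to the short-circuit evaluation of ||, spa_has_pending_synctask()
is evaluated only when the dirty data alone does not decide the answer.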

Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-By: Tom Caputi <caputit1@tcnj.edu>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Closes #11280
Alexander Motin, 2020-12-06 12:55:02 -05:00 (committed by GitHub)
parent 6366ef2240
commit 8136b9d73b

@@ -338,14 +338,12 @@ vdev_queue_max_async_writes(spa_t *spa)
 	 * Sync tasks correspond to interactive user actions. To reduce the
 	 * execution time of those actions we push data out as fast as possible.
 	 */
-	if (spa_has_pending_synctask(spa))
+	dirty = dp->dp_dirty_total;
+	if (dirty > max_bytes || spa_has_pending_synctask(spa))
 		return (zfs_vdev_async_write_max_active);
 
-	dirty = dp->dp_dirty_total;
 	if (dirty < min_bytes)
 		return (zfs_vdev_async_write_min_active);
-	if (dirty > max_bytes)
-		return (zfs_vdev_async_write_max_active);
 
 	/*
 	 * linear interpolation: