Alexander Motin 6f5aac3ca0
Reduce latency effects of non-interactive I/O
Investigating the influence of scrub (especially sequential scrub) on
random read latency, I've noticed that on some HDDs a single 4KB read
may take up to 4 seconds!  Deeper investigation showed that many HDDs
heavily prioritize sequential reads even when those are submitted with
a queue depth of 1.

This patch addresses the latency from two sides:
 - by using _min_active queue depths for non-interactive requests while
   interactive request(s) are active, and for a few requests afterwards;
 - by throttling them further if no interactive requests have completed
   while a configured number of non-interactive ones did.
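
A minimal C sketch of that two-sided logic, assuming identifiers in the
spirit of vdev_queue.c; the names vq_ia_active, vq_nia_credit,
zfs_vdev_nia_delay and zfs_vdev_nia_credit, their defaults, and the
exact arithmetic are illustrative here, not necessarily the patch
verbatim:

```c
#include <stdint.h>

#define	MIN(a, b)	((a) < (b) ? (a) : (b))
#define	MAX(a, b)	((a) > (b) ? (a) : (b))

/* Assumed tunables; the defaults are illustrative. */
static uint32_t zfs_vdev_scrub_min_active = 1;
static uint32_t zfs_vdev_nia_delay = 5;	 /* ramp length after a burst */
static uint32_t zfs_vdev_nia_credit = 5; /* burst between interactive I/Os */

typedef struct vdev_queue {
	uint32_t vq_ia_active;	/* interactive I/Os now in flight */
	uint32_t vq_nia_credit;	/* non-interactive I/Os still allowed */
} vdev_queue_t;

/*
 * Minimum depth for a non-interactive class such as scrub.  While
 * interactive I/O is in flight the class is held to the remaining
 * credit; for a while after the last interactive I/O it runs at a
 * reduced minimum; otherwise it gets its configured minimum.
 */
static uint32_t
scrub_min_active(const vdev_queue_t *vq)
{
	if (vq->vq_ia_active > 0)
		return (MIN(vq->vq_nia_credit, zfs_vdev_scrub_min_active));
	else if (vq->vq_nia_credit < zfs_vdev_nia_delay)
		return (MAX(1, zfs_vdev_scrub_min_active / 2));
	return (zfs_vdev_scrub_min_active);
}

/*
 * Completion bookkeeping (simplified): an interactive completion
 * refills the credit, or restarts the ramp if it was the last one in
 * flight; a non-interactive completion burns credit while interactive
 * I/O is active, and advances the ramp once it is not.
 */
static void
io_done(vdev_queue_t *vq, int interactive)
{
	if (interactive) {
		if (--vq->vq_ia_active == 0)
			vq->vq_nia_credit = 0;
		else
			vq->vq_nia_credit = zfs_vdev_nia_credit;
	} else if (vq->vq_ia_active > 0) {
		if (vq->vq_nia_credit > 0)
			vq->vq_nia_credit--;
	} else if (vq->vq_nia_credit < zfs_vdev_nia_delay) {
		vq->vq_nia_credit++;
	}
}
```

After zfs_vdev_nia_credit non-interactive completions with no
interactive completion in between, the credit reaches zero and the
class stalls entirely until an interactive request finishes, which is
the second bullet above.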

While there, I've also modified vdev_queue_class_to_issue() to give
the lowest priorities more chances to schedule at least _min_active
requests.  This should reduce starvation when several non-interactive
processes run at the same time as interactive ones, and I think it
should make it possible to set zfs_vdev_max_active as low as 1.
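
And a sketch of one way to realize the vdev_queue_class_to_issue()
change; io_class_t, last_prio and the round-robin placement are my
illustrative reading of "more chances for the lowest priorities", not
the patch verbatim:

```c
#include <stdint.h>

#define	NCLASSES	6	/* illustrative number of priority classes */

/* Illustrative per-class bookkeeping; stands in for vq_class[]. */
typedef struct io_class {
	uint32_t pending;	/* I/Os queued in this class */
	uint32_t active;	/* I/Os issued, not yet completed */
	uint32_t min_active;	/* the class's _min_active depth */
	uint32_t max_active;	/* the class's _max_active depth */
} io_class_t;

typedef struct queue {
	io_class_t cls[NCLASSES];
	uint32_t active;	/* total I/Os in flight */
	uint32_t last_prio;	/* class that issued most recently */
} queue_t;

static uint32_t global_max_active = 1000;	/* zfs_vdev_max_active */

/*
 * Return the class to issue from next, or -1 for none.  Pass 1 serves
 * classes still below their minimum depth, scanning from just after
 * the class that issued last, so even with a tiny global budget every
 * class eventually gets a turn at its minimum rather than being
 * starved by higher ones.  Pass 2 is strict priority up to each
 * class's maximum.
 */
static int
class_to_issue(queue_t *q)
{
	uint32_t i, p;

	if (q->active >= global_max_active)
		return (-1);

	/* Pass 1: round-robin over classes below their minimum. */
	for (i = 1; i <= NCLASSES; i++) {
		p = (q->last_prio + i) % NCLASSES;
		if (q->cls[p].pending > 0 &&
		    q->cls[p].active < q->cls[p].min_active) {
			q->last_prio = p;
			return ((int)p);
		}
	}

	/* Pass 2: strict priority, up to each class's maximum. */
	for (p = 0; p < NCLASSES; p++) {
		if (q->cls[p].pending > 0 &&
		    q->cls[p].active < q->cls[p].max_active) {
			q->last_prio = p;
			return ((int)p);
		}
	}
	return (-1);
}
```

With global_max_active at 1, this rotation is what keeps the single
slot cycling through every class's minimum instead of pinning it to
the highest priority.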

I've benchmarked this change with 4KB random reads from a ZVOL with
16KB block size on a newly written, non-fragmented pool.  On a
fragmented pool I also saw improvements, but not as dramatic.  Below
are log2 histograms of the random read latency in milliseconds (bucket
N, counting from zero, holds I/Os that completed in under 2^N ms) for
different devices:

4 2x mirror vdevs of SATA HDD WDC WD20EFRX-68EUZN0 before:
0, 0, 2,  1,  12,  21,  19,  18, 10, 15, 17, 21
after:
0, 0, 0, 24, 101, 195, 419, 250, 47,  4,  0,  0
, i.e. from 2s to 500ms.

4 2x mirror vdevs of SATA HDD WDC WD80EFZX-68UW8N0 before:
0, 0,  2,  31,  38,  28,  18,  12, 17, 20, 24, 10, 3
after:
0, 0, 55, 247, 455, 470, 412, 181, 36,  0,  0,  0, 0
, i.e. from 4s to 250ms.

1 SAS HDD SEAGATE ST14000NM0048 before:
0,  0,  29,   70, 107,   45,  27, 1, 0, 0, 1, 4, 19
after:
1, 29, 681, 1261, 676, 1633,  67, 1, 0, 0, 0, 0,  0
, i.e. from 4s to 125ms.

1 SAS SSD SEAGATE XS3840TE70014 before (microseconds):
0, 0, 0, 0, 0, 0, 0, 0,  70, 18343, 82548, 618
after:
0, 0, 0, 0, 0, 0, 0, 0, 283, 92351, 34844,  90

I've also measured scrub time during the test and on idle pools.  On
an idle fragmented pool I measured scrub getting a few percent faster
due to the use of QD3 instead of the previous QD2.  On an idle
non-fragmented pool I measured no difference.  On a busy
non-fragmented pool I measured a scrub time increase of about
1.5-1.7x, while the IOPS increase reached 5-9x.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Alexander Motin <mav@FreeBSD.org>
Sponsored-By: iXsystems, Inc.
Closes #11166
2020-11-24 09:26:42 -08:00

OpenZFS is an advanced file system and volume manager which was originally developed for Solaris and is now maintained by the OpenZFS community. This repository contains the code for running OpenZFS on Linux and FreeBSD.

Official Resources

Installation

Full documentation for installing OpenZFS on your favorite operating system can be found at the Getting Started Page.

Contribute & Develop

We have a separate document with contribution guidelines.

We have a Code of Conduct.

Release

OpenZFS is released under a CDDL license. For more details see the NOTICE, LICENSE and COPYRIGHT files; UCRL-CODE-235197

Supported Kernels

  • The META file contains the officially recognized supported Linux kernel versions.
  • Supported FreeBSD versions are 12-STABLE and 13-CURRENT.