Brian Behlendorf 0ad85ed91e Merge branch 'arc-changes'
This stack of patches has been empirically shown to drastically improve
the hit rate of the ARC for certain workloads. As a result, fewer reads
to disk are required, which is generally a good thing and can
drastically improve performance if the workload is disk limited.

For the impatient, I'll summarize the results of the tests performed:

    * Test 1 - Creating many empty directories. This test saw 99.9%
               fewer reads and 12.8% more inodes created when running
               *with* these changes.

    * Test 2 - Creating many empty files. This test saw 4% fewer reads
               and 0% more inodes created when running *with* these
               changes.

    * Test 3 - Creating many 4 KiB files. This test saw 96.7% fewer
               reads and 4.9% more inodes created when running *with*
               these changes.

    * Test 4 - Creating many 4096 KiB files. This test saw 99.4% fewer
               reads and 0% more inodes created (but took 6.9% fewer
               seconds to complete) when running *with* these changes.

    * Test 5 - Rsync'ing a dataset with many empty directories. This
               test saw 36.2% fewer reads and 66.2% more inodes created
               when running *with* these changes.

    * Test 6 - Rsync'ing a dataset with many empty files. This test saw
               30.9% fewer reads and 0% more inodes created (but took
               24.3% fewer seconds to complete) when running *with*
               these changes.

    * Test 7 - Rsync'ing a dataset with many 4 KiB files. This test saw
               30.8% fewer reads and 173.3% more inodes created when
               running *with* these changes.

For the patient, what follows is a more detailed description of the
tests performed and the results gathered.

All the tests were run on identical machines, each with a pool
consisting of 5 mirror pairs built from 2 TB 7200 RPM disks. Each test
was run twice, once *without* this set of patches and again *with*
them, to highlight the performance changes introduced.
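
For reference, a pool of that shape might be created roughly as follows
(the pool name "tank" is taken from the rsync tests below; the device
names are hypothetical, as this commit doesn't record the exact layout
commands used):

    zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf \
        mirror sdg sdh mirror sdi sdj

The first four workloads tested were as follows (a note on how the
listed counts follow from the fdtree parameters appears after the
list):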

    ** NOTE: None of these tests ran to completion. Each ran for a set
             amount of time and was then terminated, or it hit ENOSPC.

    1. Creating many empty directories:

       * fdtree -d 10 -l 8 -s 0 -f 0 -C
         -> 111,111,111 Directories
         ->           0 Files
         ->           0 KiB File Data

    2. Creating many empty files:

       * fdtree -d 10 -l 5 -s 0 -f 10000 -C
         ->       111,111 Directories
         -> 1,111,110,000 Files
         ->             0 KiB File Data

    3. Creating many 4 KiB files:

       * fdtree -d 10 -l 5 -s 1 -f 10000 -C
         ->       111,111 Directories
         -> 1,111,110,000 Files
         -> 4,444,440,000 KiB File Data

    4. Creating many 4096 KiB files:

       * fdtree -d 10 -l 5 -s 1024 -f 10000 -C
         ->           111,111 Directories
         ->     1,111,110,000 Files
         -> 4,551,106,560,000 KiB File Data
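
For reference (and assuming fdtree's usual semantics: -d subdirectories
per directory, -l levels of depth, -f files per directory, and -s file
size in 4 KiB blocks), the counts listed above follow directly from the
parameters:

    Directories = 1 + 10^1 + 10^2 + ... + 10^l
    Files       = Directories * f
    File Data   = Files * s * 4 KiB

    e.g. Test 3: 1 + 10 + ... + 10^5             =       111,111 dirs
                 111,111 dirs * 10,000 files/dir = 1,111,110,000 files
                 1,111,110,000 files * 1 * 4 KiB = 4,444,440,000 KiB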

Results for these first four tests are below:

                  | Time (s) |   inodes |  reads |    writes |
                --+----------+----------+--------+-----------+
    Test 1 Before |    65069 | 37845363 | 831975 |   3214646 |
    Test 1 After  |    65069 | 42703608 |    778 |   3327674 |
                --+----------+----------+--------+-----------+
    Test 2 Before |    65073 | 54257583 | 208647 |   2413056 |
    Test 2 After  |    65069 | 54255782 | 200038 |   2533759 |
                --+----------+----------+--------+-----------+
    Test 3 Before |    65068 | 49857744 | 487130 |   5533348 |
    Test 3 After  |    65071 | 52294311 |  16078 |   5648354 |
                --+----------+----------+--------+-----------+
    Test 4 Before |    34854 |  2448329 | 385870 | 162116572 |
    Test 4 After  |    32419 |  2448329 |   2339 | 162175706 |
                --+----------+----------+--------+-----------+

    * "Time (s)" - The run time of the test in seconds
    * "inodes"   - The number of inodes created by the test
    * "reads"    - The number of reads performed by the test
    * "writes"   - The number of writes performed by the test

As you can see from the table above, running with this patch stack
*significantly* reduced the number of reads performed in 3 out of the 4
tests (due to an improved ARC hit rate).
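
The hit rate itself can be checked against the ARC kstats; a minimal
sketch, assuming the standard ZoL /proc interface (the exact
verification method isn't shown in this commit):

    # Compute the overall ARC hit rate from the cumulative counters:
    awk '$1 == "hits"   { h = $3 }
         $1 == "misses" { m = $3 }
         END { printf "ARC hit rate: %.2f%%\n", 100 * h / (h + m) }' \
        /proc/spl/kstat/zfs/arcstats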

In addition to the tests described above, which specifically targeted
creates only, three other workloads were tested. These additional tests
targeted rsync performance against the datasets created in the previous
tests. A brief description of these workloads, along with their
results, is below:

    ** NOTE: Aside from Test 6, these tests didn't run to completion.
             They ran for a set amount of time and then were terminated
             (one way to bound a run like this is sketched after the
             list below).

    5. Rsync the dataset created in Test 1 to a new dataset:

       * rsync -a /tank/test-1 /tank/test-5

    6. Rsync the dataset created in Test 2 to a new dataset:

       * rsync -a /tank/test-2 /tank/test-6

    7. Rsync the dataset created in Test 3 to a new dataset:

       * rsync -a /tank/test-3 /tank/test-7
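
The commit doesn't say how these time-limited runs were bounded; one
simple way to impose such a limit is with timeout(1), e.g. (the
duration here is hypothetical):

    timeout 93000 rsync -a /tank/test-1 /tank/test-5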

Results for Tests 5, 6, and 7 are below:

                  | Time (s) |   inodes |    reads |  writes |
                --+----------+----------+----------+---------+
    Test 5 Before |    93041 | 17921014 | 47632823 | 4094848 |
    Test 5 After  |    93029 | 29785847 | 30376206 | 4484459 |
                --+----------+----------+----------+---------+
    Test 6 Before |    15290 | 54264474 |  6018331 |  733087 |
    Test 6 After  |    11573 | 54260826 |  4155661 |  617285 |
                --+----------+----------+----------+---------+
    Test 7 Before |    93057 | 10093749 | 41561635 | 3659098 |
    Test 7 After  |    93045 | 27587043 | 28773151 | 5612234 |
                --+----------+----------+----------+---------+

    * "Time (s)" - The run time of the test in seconds
    * "inodes"   - The number of inodes created by the test
    * "reads"    - The number of reads performed by the test
    * "writes"   - The number of writes performed by the test

Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2110
2014-02-21 16:14:57 -08:00

Native ZFS for Linux!

ZFS is an advanced file system and volume manager which was originally developed for Solaris and is now maintained by the Illumos community.

ZFS on Linux, which is also known as ZoL, is currently feature complete. It includes fully functional and stable SPA, DMU, ZVOL, and ZPL layers.

Full documentation for installing ZoL on your favorite Linux distribution can be found at: http://zfsonlinux.org