Ying Zhu 6e1d7276c9 Fix inaccurate arcstat_l2_hdr_size calculations
Based on the comments in arc.c we know that buffers can exist both
in arc and l2arc; in that case both an arc_buf_hdr_t and an
l2arc_buf_hdr_t are allocated. However, the current logic only
accounts for the memory the l2arc_buf_hdr takes up when the buffer's
state transfers from or to arc_l2c_only. This causes obvious
deviations, especially for illumos's zfs version, since its
sizeof(l2arc_buf_hdr) is larger than ZoL's. We can implement the
calculation in the following simple way (a sketch in arc.c terms
follows the list):

1. When we allocate an l2arc_buf_hdr_t, add its memory consumption
   immediately and subtract it again when we free or evict the
   l2arc buf.
2. According to l2arc_hdr_stat_add and l2arc_hdr_stat_remove, if
   the buffer only stays in l2arc we should also add the memory
   its arc_buf_hdr_t consumes, so we only need to add HDR_SIZE to
   arcstat_l2_hdr_size since L2HDR_SIZE was already accounted for
   in step 1; the same applies when transferring arc bufs out of
   the l2arc-only state.

The testbox has two 4-core Intel Xeon CPUs (2.13 GHz) and 16 GB of
memory, and the tests were set up in the following way:

1. Fdisked a SATA disk into two partitions, one used for zpool
   storage and the other used as the cache device.
2. Generated files occupying 14 GB altogether in the zpool prepared
   in step 1, using iozone.
3. Read them all back using md5sum and watched the l2arc-related
   statistics in /proc/spl/kstat/zfs/arcstats. After the reading
   ended, l2_hdr_size and l2_size were shown like this:

      l2_size             4       4403780608
      l2_hdr_size         4       0

   which was clearly wrong: over 4 GB cached in l2arc cannot take
   zero bytes of header memory.

4. After applying this patch and rerunning steps 1-3, the results
   were as follows:

      l2_size             4       4306443264
      l2_hdr_size         4       535600

   these numbers make sense: on 64-bit systems the
   sizeof(l2arc_buf_hdr_t) is 16 bytes. Assuming every block cached
   by l2arc is 128 KB, we get 535600/16*128*1024 = 4387635200.
   Since not all cached blocks actually reach 128 KB, the
   theoretical figure comes out a little bigger than the observed
   l2_size, as the breakdown below shows.
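
   A quick breakdown of that estimate, using the numbers above:

      535600 bytes / 16 bytes per l2arc_buf_hdr_t = 33475 headers
      33475 headers * 131072 bytes (128 KB)       = 4387635200 bytes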

Since I'm familiar with the systemtap instrumentation tool, I used
it to examine what had happened. The script looked like this:

probe module("zfs").function("arc_chage_state")
{
	if ($new_state == $arc_l2_only)
		printf("change arc buf to arc_l2_only\n")
}
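
(A script like this is run with systemtap's stap command-line tool
while the workload is in progress.)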

It prints a line each time the function arc_change_state is called
with the argument new_state set to arc_l2c_only. I gathered the
trace logs and found that none of the arc bufs entered the
arc_l2c_only state while the tests were running, which is why
l2_hdr_size in step 3 stayed at 0. The arc bufs only fell into
arc_l2c_only when the pool or the filesystem was taken offline.

Signed-off-by: Ying Zhu <casualfisher@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-07-29 22:05:26 -07:00
cmd Fix 'zpool list -H' error code 2013-07-23 12:39:05 -07:00
config Fix arc_adapt() spinning in iterate_supers_type() 2013-07-17 09:28:06 -07:00
dracut Refresh dracut module setup 2013-03-21 12:51:06 -07:00
etc Improve OpenRC init script 2013-06-18 17:03:25 -07:00
include Add new kstat for monitoring time in dmu_tx_assign 2013-07-11 13:53:44 -07:00
lib Fix zpool_read_label() 2013-07-09 16:02:04 -07:00
man Add documentation for -T and interval to "zpool list" 2013-07-22 13:38:51 -07:00
module Fix inaccurate arcstat_l2_hdr_size calculations 2013-07-29 22:05:26 -07:00
patches Adding grub2 mkconfig support patch 2012-07-30 16:17:23 -07:00
rpm Add dkms_version conditional 2013-07-11 15:39:25 -07:00
scripts Open pools asynchronously after module load 2013-07-03 09:24:38 -07:00
udev Open pools asynchronously after module load 2013-07-03 09:24:38 -07:00
.gitignore Ignore *.{deb,rpm,tar.gz} files in the top directory. 2013-04-24 16:18:59 -07:00
AUTHORS Fix minor typos and update marketing copy. 2013-03-21 12:51:06 -07:00
autogen.sh build: do not call boilerplate ourself 2013-04-02 10:55:20 -07:00
configure.ac Modified arcstat.py to run on linux 2013-06-18 15:43:15 -07:00
copy-builtin Consistent menuconfig name 2012-08-26 13:49:37 -07:00
COPYRIGHT Refresh links to web site 2013-03-06 15:46:41 -08:00
DISCLAIMER Fix minor typos and update marketing copy. 2013-03-21 12:51:06 -07:00
Makefile.am build: do not call boilerplate ourself 2013-04-02 10:55:20 -07:00
META Tag zfs-0.6.1 2013-03-26 08:50:29 -07:00
OPENSOLARIS.LICENSE Add CDDL license file 2008-12-01 14:49:34 -08:00
README.markdown Fix minor typos and update marketing copy. 2013-03-21 12:51:06 -07:00
zfs-script-config.sh.in Retire zpool_id infrastructure 2013-01-29 12:23:17 -08:00
zfs.release.in Move zfs.release generation to configure step 2012-07-12 12:22:51 -07:00

Native ZFS for Linux!

ZFS is an advanced file system and volume manager which was originally developed for Solaris and is now maintained by the Illumos community.

ZFS on Linux, which is also known as ZoL, is currently feature complete. It includes fully functional and stable SPA, DMU, ZVOL, and ZPL layers.

Full documentation for installing ZoL on your favorite Linux distribution can be found at: http://zfsonlinux.org