mirror_zfs/cmd/zpool/Makefile.am


include $(top_srcdir)/config/Rules.am

Support custom build directories and move includes (committed 2010-09-05 00:26:23 +04:00)

One of the neat tricks an autoconf-style project is capable of is allowing configuring and building in a directory other than the source directory. The major advantage of this is that you can build the project several different ways while making changes in a single source tree. For example, this project is designed to work on various Linux distributions, each of which behaves slightly differently, so changes need to be verified on each supported distribution, preferably before the change is committed to the public git repo. Using NFS and custom build directories makes this much easier. I now have a single source tree on NFS mounted on several different systems, each running a supported distribution. When I make a change to the source base that I suspect may break things, I can concurrently build from the same source on all the systems, each in their own subdirectory (a small per-host helper script is sketched after the DEFAULT_INCLUDES block below):

    wget -c http://github.com/downloads/behlendorf/zfs/zfs-x.y.z.tar.gz
    tar -xzf zfs-x.y.z.tar.gz
    cd zfs-x-y-z

    ------------------------- run concurrently ----------------------
    <ubuntu system>  <fedora system>  <debian system>  <rhel6 system>
    mkdir ubuntu     mkdir fedora     mkdir debian     mkdir rhel6
    cd ubuntu        cd fedora        cd debian        cd rhel6
    ../configure     ../configure     ../configure     ../configure
    make             make             make             make
    make check       make check       make check       make check

This change also moves many of the include headers from the individual include/sys directories under the modules directory into a single top-level include directory. This has the advantage of making the build rules cleaner, and logically it makes a bit more sense.

DEFAULT_INCLUDES += \
-I$(top_srcdir)/include \
-I$(top_srcdir)/lib/libspl/include
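
For a repeatable version of the workflow above, the per-host steps can be wrapped in a small script. This is only a sketch, not something shipped with the project; the /nfs/zfs mount point, the build-$(hostname) directory naming, and the name build-here.sh are illustrative assumptions:

    #!/bin/sh
    # build-here.sh (hypothetical): run an out-of-tree build of the shared,
    # NFS-mounted source tree in a subdirectory named after this host.
    set -e
    SRC=/nfs/zfs                        # assumed location of the shared source tree
    BUILD="$SRC/build-$(hostname -s)"   # e.g. build-ubuntu, build-fedora, ...
    mkdir -p "$BUILD"
    cd "$BUILD"
    "$SRC/configure"                    # configure out of tree, as in the example above
    make
    make check
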
sbin_PROGRAMS = zpool
zpool_SOURCES = \
zpool_iter.c \
zpool_main.c \
zpool_util.c \
zpool_util.h \
zpool_vdev.c

Properly link zpool command to libblkid (committed 2014-01-12 01:07:27 +04:00)

31fc19399e597e3391f19f1392ab120f1de0d5f2 incorrectly removed $(LIBBLKID) from cmd/zpool/Makefile.am. This meant that the toolchain was not given -lblkid, which resulted in the following build failure on Ubuntu 13.10:

    /usr/bin/ld: zpool_vdev.o: undefined reference to symbol 'blkid_put_cache@@BLKID_1.0'
    /lib/x86_64-linux-gnu/libblkid.so.1: error adding symbols: DSO missing from command line
    collect2: error: ld returned 1 exit status

That commit reworked various Makefile.am files to follow best practices, so we reintroduce $(LIBBLKID) in a manner consistent with that rather than explicitly reverting the change.

Reproduction of this issue was done on a Gentoo Linux system by executing the following commands:

    zfs create -o mountpoint=/mnt/ubuntu-13.10 rpool/ROOT/ubuntu-13.10
    debootstrap --variant=buildd --arch amd64 saucy /mnt/ubuntu-13.10 http://archive.ubuntu.com/ubuntu/
    mount -o bind /dev /mnt/ubuntu-13.10/dev/
    mount -o bind /proc/ /mnt/ubuntu-13.10/proc/
    mount -o bind /sys/ /mnt/ubuntu-13.10/sys/
    cp /etc/resolv.conf /mnt/ubuntu-13.10/etc/
    (cd /mnt/ubuntu-13.10/root/ && git clone git://github.com/zfsonlinux/zfs.git)
    chroot /mnt/ubuntu-13.10/
    apt-get install git autoconf libtool zlib1g-dev uuid-dev libblkid-dev
    #apt-get install alien fakeroot vim
    cd /root/zfs
    ./autogen.sh
    ./configure --with-config=user --prefix=/usr
    make

That will create an Ubuntu 13.10 chroot, fetch the sources, and build-test them. At this point, cmd/zpool/Makefile.am was modified and the following commands were run to verify that the build issue was resolved:

    git clean -xdf
    ./autogen.sh
    ./configure --with-config=user --prefix=/usr
    make

Although it is not shown here, the absence of libblkid-dev allows ZFS to build successfully even without the patch, which could explain how this escaped detection until recently. A test without libblkid-dev was therefore done to verify that the patch does not cause a regression in the absence of libblkid:

    apt-get remove libblkid-dev
    git clean -xdf
    ./autogen.sh
    ./configure --with-config=user --prefix=/usr
    make

Additionally, the commands themselves were tested against my live system from within the chroot to ensure basic functionality. My live system had the corresponding kernel modules already installed, and basic commands such as `zpool list` and `zfs list` worked without incident. Lastly, this patch was also build-tested on Gentoo Linux, where it caused no problems. (A quick way to check the resulting linkage is sketched after the zpool_LDADD block below.)

At the time of writing, these steps can be used to reproduce these results on any modern Linux system that has debootstrap installed. On Gentoo, debootstrap can be installed with `emerge dev-util/debootstrap`. The current ZFSOnLinux HEAD revision as of writing is fd23720ae14dca926800ae70e6c8f4b4f82efc08. Once this is fixed in HEAD, either that revision or another one after 31fc19399e597e3391f19f1392ab120f1de0d5f2 and before this fix will be needed to reproduce the issue.

Finally, it remains to be seen why the toolchains on the systems performing regression tests did not catch this. This is not a ZFS-specific issue, but it is something that we will want to explore in the future.

Signed-off-by: Richard Yao <ryao@gentoo.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2038

Add -lhHpw options to "zpool iostat" for avg latency, histograms, & queues (committed 2016-02-29 21:05:23 +03:00)

Update the zfs module to collect statistics on average latencies and queue sizes, and keep an internal histogram of all IO latencies. Along with this, update "zpool iostat" with some new options to print out the stats (sample invocations follow the zpool_LDADD block below):

-l: Include average IO latency stats:

      total_wait    disk_wait    syncq_wait   asyncq_wait  scrub
      read  write   read  write   read  write   read  write  wait
     -----  -----  -----  -----  -----  -----  -----  -----  -----
         -   41ms      -    2ms      -   46ms      -    4ms      -
         -    5ms      -    1ms      -    1us      -    4ms      -
         -    5ms      -    1ms      -    1us      -    4ms      -
         -      -      -      -      -      -      -      -      -
         -   49ms      -    2ms      -   47ms      -      -      -
         -      -      -      -      -      -      -      -      -
         -    2ms      -    1ms      -      -      -    1ms      -
     -----  -----  -----  -----  -----  -----  -----  -----  -----
       1ms    1ms    1ms  413us   16us   25us      -    5ms      -
       1ms    1ms    1ms  413us   16us   25us      -    5ms      -
       2ms    1ms    2ms  412us   26us   25us      -    5ms      -
         -    1ms      -  413us      -   25us      -    5ms      -
         -    1ms      -  460us      -   29us      -    5ms      -
     196us    1ms  196us  370us    7us   23us      -    5ms      -
     -----  -----  -----  -----  -----  -----  -----  -----  -----

-w: Print out latency histograms:

    sdb          total           disk          sync_queue      async_queue
    latency    read   write    read   write    read   write    read   write   scrub
    -------  ------  ------  ------  ------  ------  ------  ------  ------  ------
    1ns           0       0       0       0       0       0       0       0       0
    ...
    33us          0       0       0       0       0       0       0       0       0
    66us          0       0     107    2486       2     788      12      12       0
    131us         2     797     359    4499      10     558     184     184       6
    262us        22     801     264    1563      10     286     287     287      24
    524us        87     575      71   52086      15    1063     136     136      92
    1ms         152    1190       5   41292       4    1693     252     252     141
    2ms         245    2018       0   50007       0    2322     371     371     220
    4ms         189    7455      22  162957       0    3912    6726    6726     199
    8ms         108    9461       0  102320       0    5775    2526    2526      86
    17ms         23   11287       0   37142       0    8043    1813    1813      19
    34ms          0   14725       0   24015       0   11732    3071    3071       0
    67ms          0   23597       0    7914       0   18113    5025    5025       0
    134ms         0   33798       0     254       0   25755    7326    7326       0
    268ms         0   51780       0      12       0   41593   10002   10002       0
    537ms         0   77808       0       0       0   64255   13120   13120       0
    1s            0  105281       0       0       0   83805   20841   20841       0
    2s            0   88248       0       0       0   73772   14006   14006       0
    4s            0   47266       0       0       0   29783   17176   17176       0
    9s            0   10460       0       0       0    4130    6295    6295       0
    17s           0       0       0       0       0       0       0       0       0
    34s           0       0       0       0       0       0       0       0       0
    69s           0       0       0       0       0       0       0       0       0
    137s          0       0       0       0       0       0       0       0       0
    -------------------------------------------------------------------------------

-h: Help

-H: Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space.

-q: Include the current number of entries in the sync and async read/write queues, and the scrub queue:

     syncq_read   syncq_write   asyncq_read  asyncq_write   scrubq_read
     pend  activ   pend  activ   pend  activ   pend  activ   pend  activ
    -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
        0      0      0      0     78     29      0      0      0      0
        0      0      0      0     78     29      0      0      0      0
        0      0      0      0      0      0      0      0      0      0
        -      -      -      -      -      -      -      -      -      -
        0      0      0      0      0      0      0      0      0      0
        -      -      -      -      -      -      -      -      -      -
        0      0      0      0      0      0      0      0      0      0
    -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
        0      0    227    394      0     19      0      0      0      0
        0      0    227    394      0     19      0      0      0      0
        0      0    108     98      0     19      0      0      0      0
        0      0     19     98      0      0      0      0      0      0
        0      0     78     98      0      0      0      0      0      0
        0      0     19     88      0      0      0      0      0      0
    -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

-p: Display numbers in parseable (exact) values.

Also, update the iostat syntax to allow the user to specify the specific vdevs to show statistics for. The three options for choosing pools/vdevs are:

    Display a list of pools:                       zpool iostat ... [pool ...]
    Display a list of vdevs from a specific pool:  zpool iostat ... [pool vdev ...]
    Display a list of vdevs from any pools:        zpool iostat ... [vdev ...]

Lastly, allow the zpool command "interval" value to be floating point:

    zpool iostat -v 0.5

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #4433

zpool_LDADD = \
$(top_builddir)/lib/libnvpair/libnvpair.la \
$(top_builddir)/lib/libuutil/libuutil.la \
$(top_builddir)/lib/libzpool/libzpool.la \
$(top_builddir)/lib/libzfs/libzfs.la \
$(top_builddir)/lib/libzfs_core/libzfs_core.la \
-lm $(LIBBLKID)
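
The libblkid fix above is easiest to confirm on the final binary. A minimal sanity check, not part of the commit, assuming zpool has already been installed into the PATH and that the libblkid development headers were present when configure ran:

    # If configure detected libblkid, the installed zpool binary should list it
    # among its shared library dependencies.
    ldd "$(command -v zpool)" | grep -i libblkid \
        || echo "zpool is not linked against libblkid"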
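For reference, a few illustrative invocations of the "zpool iostat" options described above; the pool name tank and the vdev names sdb and sdc are placeholders:

    zpool iostat -l tank 5       # average latency columns, 5-second interval
    zpool iostat -w tank         # latency histograms
    zpool iostat -q -H tank      # queue depths, scripted (tab-separated, no headers)
    zpool iostat -p tank         # exact (parseable) numbers
    zpool iostat -v 0.5          # per-vdev statistics at a floating-point interval
    zpool iostat tank sdb sdc    # statistics for the listed vdevs of pool "tank"
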
zpoolconfdir = $(sysconfdir)/zfs/zpool.d
zpoolexecdir = $(libexecdir)/zfs/zpool.d
EXTRA_DIST = zpool.d/README
dist_zpoolexec_SCRIPTS = \
zpool.d/enc \
zpool.d/encdev \
zpool.d/fault_led \
zpool.d/iostat \
zpool.d/iostat-1s \
zpool.d/iostat-10s \
zpool.d/label \
zpool.d/locate_led \
zpool.d/lsblk \
zpool.d/media \
zpool.d/model \
zpool.d/serial \
zpool.d/ses \
zpool.d/size \
zpool.d/slaves \
zpool.d/slot \
zpool.d/smart \
zpool.d/smartx \
zpool.d/temp \
zpool.d/health \
zpool.d/r_proc \
zpool.d/w_proc \
zpool.d/r_ucor \
zpool.d/w_ucor \
zpool.d/nonmed \
zpool.d/defect \
zpool.d/hours_on \
zpool.d/realloc \
zpool.d/rep_ucor \
zpool.d/cmd_to \
zpool.d/pend_sec \
zpool.d/off_ucor \
zpool.d/ata_err \
zpool.d/pwr_cyc \
zpool.d/upath \
zpool.d/vendor
zpoolconfdefaults = \
enc \
encdev \
fault_led \
iostat \
iostat-1s \
iostat-10s \
label \
locate_led \
lsblk \
media \
model \
serial \
ses \
size \
slaves \
slot \
smart \
smartx \
temp \
health \
r_proc \
w_proc \
r_ucor \
w_ucor \
nonmed \
defect \
hours_on \
realloc \
rep_ucor \
cmd_to \
pend_sec \
off_ucor \
ata_err \
pwr_cyc \
upath \
vendor
# Install the default set of zpool.d scripts as symlinks into sysconfdir,
# but leave any existing file or symlink (e.g. an admin override) untouched.
install-data-hook:
	$(MKDIR_P) "$(DESTDIR)$(zpoolconfdir)"
	for f in $(zpoolconfdefaults); do \
	  test -f "$(DESTDIR)$(zpoolconfdir)/$${f}" -o \
	       -L "$(DESTDIR)$(zpoolconfdir)/$${f}" || \
	    ln -s "$(zpoolexecdir)/$${f}" "$(DESTDIR)$(zpoolconfdir)"; \
	done
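
Because the hook above creates a symlink only when no file or symlink of that name already exists in $(zpoolconfdir), an administrator can override one of the default scripts and the override will survive a later "make install". An illustrative sequence, assuming sysconfdir resolves to /etc/zfs and using a hypothetical local script my-smart.sh:

    rm /etc/zfs/zpool.d/smart                           # drop the default symlink
    install -m 755 my-smart.sh /etc/zfs/zpool.d/smart   # install the local replacement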