dnl #
dnl # mirror_zfs/config/zfs-build.m4
dnl #
AC_DEFUN([ZFS_AC_LICENSE], [
AC_MSG_CHECKING([zfs author])
AC_MSG_RESULT([$ZFS_META_AUTHOR])
AC_MSG_CHECKING([zfs license])
AC_MSG_RESULT([$ZFS_META_LICENSE])
])
AC_DEFUN([ZFS_AC_DEBUG], [
AC_MSG_CHECKING([whether debugging is enabled])
AC_ARG_ENABLE([debug],
[AS_HELP_STRING([--enable-debug],
[Enable generic debug support @<:@default=no@:>@])],
[],
[enable_debug=no])
AS_IF([test "x$enable_debug" = xyes],
[
KERNELCPPFLAGS="${KERNELCPPFLAGS} -DDEBUG -Werror"
HOSTCFLAGS="${HOSTCFLAGS} -DDEBUG -Werror"
DEBUG_CFLAGS="-DDEBUG -Werror"
DEBUG_STACKFLAGS="-fstack-check"
DEBUG_ZFS="_with_debug"
AC_DEFINE(ZFS_DEBUG, 1, [zfs debugging enabled])
],
[
KERNELCPPFLAGS="${KERNELCPPFLAGS} -DNDEBUG "
HOSTCFLAGS="${HOSTCFLAGS} -DNDEBUG "
DEBUG_CFLAGS="-DNDEBUG"
DEBUG_STACKFLAGS=""
DEBUG_ZFS="_without_debug"
])
AC_SUBST(DEBUG_CFLAGS)
AC_SUBST(DEBUG_STACKFLAGS)
AC_SUBST(DEBUG_ZFS)
AC_MSG_RESULT([$enable_debug])
])
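dnl #
dnl # Example usage (illustrative): a debug build adds -DDEBUG -Werror
dnl # to the kernel and user compile flags and defines ZFS_DEBUG in the
dnl # generated config header:
dnl #
dnl #   ./configure --enable-debug
dnl #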
AC_DEFUN([ZFS_AC_DEBUG_DMU_TX], [
AC_ARG_ENABLE([debug-dmu-tx],
[AS_HELP_STRING([--enable-debug-dmu-tx],
[Enable dmu tx validation @<:@default=no@:>@])],
[],
[enable_debug_dmu_tx=no])
AS_IF([test "x$enable_debug_dmu_tx" = xyes],
[
KERNELCPPFLAGS="${KERNELCPPFLAGS} -DDEBUG_DMU_TX"
DEBUG_DMU_TX="_with_debug_dmu_tx"
AC_DEFINE([DEBUG_DMU_TX], [1],
[Define to 1 to enable dmu tx validation])
],
[
DEBUG_DMU_TX="_without_debug_dmu_tx"
])
AC_SUBST(DEBUG_DMU_TX)
AC_MSG_CHECKING([whether dmu tx validation is enabled])
AC_MSG_RESULT([$enable_debug_dmu_tx])
])
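dnl #
dnl # Example usage (illustrative): dmu tx validation is controlled
dnl # independently of --enable-debug, so the two may be combined:
dnl #
dnl #   ./configure --enable-debug --enable-debug-dmu-tx
dnl #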
AC_DEFUN([ZFS_AC_CONFIG_ALWAYS], [
ZFS_AC_CONFIG_ALWAYS_NO_UNUSED_BUT_SET_VARIABLE
ZFS_AC_CONFIG_ALWAYS_NO_BOOL_COMPARE
ZFS_AC_CONFIG_ALWAYS_TOOLCHAIN_SIMD
])
AC_DEFUN([ZFS_AC_CONFIG], [
TARGET_ASM_DIR=asm-generic
AC_SUBST(TARGET_ASM_DIR)
ZFS_CONFIG=all
AC_ARG_WITH([config],
AS_HELP_STRING([--with-config=CONFIG],
[Build configuration 'kernel|user|all|srpm']),
[ZFS_CONFIG="$withval"])
AC_ARG_ENABLE([linux-builtin],
[AS_HELP_STRING([--enable-linux-builtin],
[Configure for builtin in-tree kernel modules @<:@default=no@:>@])],
[],
[enable_linux_builtin=no])
AC_MSG_CHECKING([zfs config])
AC_MSG_RESULT([$ZFS_CONFIG])
AC_SUBST(ZFS_CONFIG)
ZFS_AC_CONFIG_ALWAYS
case "$ZFS_CONFIG" in
kernel) ZFS_AC_CONFIG_KERNEL ;;
user) ZFS_AC_CONFIG_USER ;;
all) ZFS_AC_CONFIG_USER
ZFS_AC_CONFIG_KERNEL ;;
srpm) ;;
*)
AC_MSG_RESULT([Error!])
AC_MSG_ERROR([Bad value "$ZFS_CONFIG" for --with-config,
use kernel|user|all|srpm]) ;;
esac
AM_CONDITIONAL([CONFIG_USER],
[test "$ZFS_CONFIG" = user -o "$ZFS_CONFIG" = all])
AM_CONDITIONAL([CONFIG_KERNEL],
[test "$ZFS_CONFIG" = kernel -o "$ZFS_CONFIG" = all] &&
[test "x$enable_linux_builtin" != xyes ])
AM_CONDITIONAL([WANT_DEVNAME2DEVID],
[test "x$user_libudev" = xyes ])
])
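dnl #
dnl # Example usage (illustrative): configure only the kernel modules,
dnl # e.g. when preparing them to be built in-tree:
dnl #
dnl #   ./configure --with-config=kernel --enable-linux-builtin
dnl #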
dnl #
dnl # Check for rpm+rpmbuild to build RPM packages. If these tools
dnl # are missing it is non-fatal, but you will not be able to build
dnl # RPM packages and will be warned if you try to.
dnl #
dnl # By default the generic spec file will be used because it requires
dnl # minimal dependencies. Distribution specific spec files can be
dnl # placed under the 'rpm/<distribution>' directory and enabled using
dnl # the --with-spec=<distribution> configure option.
dnl #
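dnl #
dnl # For example (illustrative), to use the distribution specific
dnl # spec files from rpm/redhat:
dnl #
dnl #   ./configure --with-spec=redhat
dnl #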
AC_DEFUN([ZFS_AC_RPM], [
RPM=rpm
RPMBUILD=rpmbuild
AC_MSG_CHECKING([whether $RPM is available])
AS_IF([tmp=$($RPM --version 2>/dev/null)], [
RPM_VERSION=$(echo $tmp | $AWK '/RPM/ { print $[3] }')
HAVE_RPM=yes
AC_MSG_RESULT([$HAVE_RPM ($RPM_VERSION)])
],[
HAVE_RPM=no
AC_MSG_RESULT([$HAVE_RPM])
])
AC_MSG_CHECKING([whether $RPMBUILD is available])
AS_IF([tmp=$($RPMBUILD --version 2>/dev/null)], [
RPMBUILD_VERSION=$(echo $tmp | $AWK '/RPM/ { print $[3] }')
HAVE_RPMBUILD=yes
AC_MSG_RESULT([$HAVE_RPMBUILD ($RPMBUILD_VERSION)])
],[
HAVE_RPMBUILD=no
AC_MSG_RESULT([$HAVE_RPMBUILD])
])
RPM_DEFINE_COMMON='--define "$(DEBUG_ZFS) 1" --define "$(DEBUG_DMU_TX) 1"'
RPM_DEFINE_UTIL='--define "_dracutdir $(dracutdir)" --define "_udevdir $(udevdir)" --define "_udevruledir $(udevruledir)" --define "_initconfdir $(DEFAULT_INITCONF_DIR)" $(DEFINE_INITRAMFS)'
RPM_DEFINE_KMOD='--define "kernels $(LINUX_VERSION)" --define "require_spldir $(SPL)" --define "require_splobj $(SPL_OBJ)" --define "ksrc $(LINUX)" --define "kobj $(LINUX_OBJ)"'
RPM_DEFINE_DKMS=
SRPM_DEFINE_COMMON='--define "build_src_rpm 1"'
SRPM_DEFINE_UTIL=
SRPM_DEFINE_KMOD=
SRPM_DEFINE_DKMS=
RPM_SPEC_DIR="rpm/generic"
AC_ARG_WITH([spec],
AS_HELP_STRING([--with-spec=SPEC],
[Spec files 'generic|redhat']),
[RPM_SPEC_DIR="rpm/$withval"])
AC_MSG_CHECKING([whether spec files are available])
AC_MSG_RESULT([yes ($RPM_SPEC_DIR/*.spec.in)])
AC_SUBST(HAVE_RPM)
AC_SUBST(RPM)
AC_SUBST(RPM_VERSION)
AC_SUBST(HAVE_RPMBUILD)
AC_SUBST(RPMBUILD)
AC_SUBST(RPMBUILD_VERSION)
AC_SUBST(RPM_SPEC_DIR)
AC_SUBST(RPM_DEFINE_UTIL)
AC_SUBST(RPM_DEFINE_KMOD)
AC_SUBST(RPM_DEFINE_DKMS)
AC_SUBST(RPM_DEFINE_COMMON)
AC_SUBST(SRPM_DEFINE_UTIL)
AC_SUBST(SRPM_DEFINE_KMOD)
AC_SUBST(SRPM_DEFINE_DKMS)
AC_SUBST(SRPM_DEFINE_COMMON)
])
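dnl #
dnl # The RPM_DEFINE_* strings above contain $(...) make variables and
dnl # are expanded by the Makefiles at package-build time. A sketch
dnl # (hypothetical values) of the resulting rpmbuild command line for
dnl # a non-debug kmod build:
dnl #
dnl #   rpmbuild --define "_without_debug 1" \
dnl #       --define "kernels 4.4.0" --define "ksrc /usr/src/linux" \
dnl #       ... -ba zfs-kmod.spec
dnl #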
dnl #
dnl # Check for dpkg+dpkg-buildpackage to build DEB packages. If these
dnl # tools are missing it is non-fatal, but you will not be able to
dnl # build DEB packages and will be warned if you try to.
dnl #
AC_DEFUN([ZFS_AC_DPKG], [
DPKG=dpkg
DPKGBUILD=dpkg-buildpackage
AC_MSG_CHECKING([whether $DPKG is available])
AS_IF([tmp=$($DPKG --version 2>/dev/null)], [
DPKG_VERSION=$(echo $tmp | $AWK '/Debian/ { print $[7] }')
HAVE_DPKG=yes
AC_MSG_RESULT([$HAVE_DPKG ($DPKG_VERSION)])
],[
HAVE_DPKG=no
AC_MSG_RESULT([$HAVE_DPKG])
])
AC_MSG_CHECKING([whether $DPKGBUILD is available])
AS_IF([tmp=$($DPKGBUILD --version 2>/dev/null)], [
DPKGBUILD_VERSION=$(echo $tmp | \
$AWK '/Debian/ { print $[4] }' | cut -f-4 -d'.')
HAVE_DPKGBUILD=yes
AC_MSG_RESULT([$HAVE_DPKGBUILD ($DPKGBUILD_VERSION)])
],[
HAVE_DPKGBUILD=no
AC_MSG_RESULT([$HAVE_DPKGBUILD])
])
AC_SUBST(HAVE_DPKG)
AC_SUBST(DPKG)
AC_SUBST(DPKG_VERSION)
AC_SUBST(HAVE_DPKGBUILD)
AC_SUBST(DPKGBUILD)
AC_SUBST(DPKGBUILD_VERSION)
])
dnl #
dnl # Until native packaging for the various packaging systems can be
dnl # added, the least we can do is attempt to use alien to convert
dnl # the RPM packages to the needed package type. This is a hack, but
dnl # so far it has worked reasonably well.
dnl #
AC_DEFUN([ZFS_AC_ALIEN], [
ALIEN=alien
AC_MSG_CHECKING([whether $ALIEN is available])
AS_IF([tmp=$($ALIEN --version 2>/dev/null)], [
ALIEN_VERSION=$(echo $tmp | $AWK '{ print $[3] }')
HAVE_ALIEN=yes
AC_MSG_RESULT([$HAVE_ALIEN ($ALIEN_VERSION)])
],[
HAVE_ALIEN=no
AC_MSG_RESULT([$HAVE_ALIEN])
])
AC_SUBST(HAVE_ALIEN)
AC_SUBST(ALIEN)
AC_SUBST(ALIEN_VERSION)
])
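dnl #
dnl # A sketch of the conversion this enables (hypothetical package
dnl # file name); --scripts and --to-deb are standard alien options:
dnl #
dnl #   alien --scripts --to-deb zfs-0.6.5-1.x86_64.rpm
dnl #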
dnl #
dnl # Using the VENDOR tag from config.guess set the default
dnl # package type for 'make pkg': (rpm | deb | tgz)
dnl #
AC_DEFUN([ZFS_AC_DEFAULT_PACKAGE], [
AC_MSG_CHECKING([linux distribution])
if test -f /etc/toss-release ; then
VENDOR=toss ;
elif test -f /etc/fedora-release ; then
VENDOR=fedora ;
elif test -f /etc/redhat-release ; then
VENDOR=redhat ;
elif test -f /etc/gentoo-release ; then
VENDOR=gentoo ;
elif test -f /etc/arch-release ; then
VENDOR=arch ;
elif test -f /etc/SuSE-release ; then
VENDOR=sles ;
elif test -f /etc/slackware-version ; then
VENDOR=slackware ;
elif test -f /etc/lunar.release ; then
VENDOR=lunar ;
elif test -f /etc/lsb-release ; then
VENDOR=ubuntu ;
elif test -f /etc/debian_version ; then
VENDOR=debian ;
elif test -f /etc/alpine-release ; then
VENDOR=alpine ;
else
VENDOR= ;
fi
AC_MSG_RESULT([$VENDOR])
AC_SUBST(VENDOR)
AC_MSG_CHECKING([default package type])
case "$VENDOR" in
toss) DEFAULT_PACKAGE=rpm ;;
redhat) DEFAULT_PACKAGE=rpm ;;
fedora) DEFAULT_PACKAGE=rpm ;;
gentoo) DEFAULT_PACKAGE=tgz ;;
alpine) DEFAULT_PACKAGE=tgz ;;
arch) DEFAULT_PACKAGE=tgz ;;
sles) DEFAULT_PACKAGE=rpm ;;
slackware) DEFAULT_PACKAGE=tgz ;;
lunar) DEFAULT_PACKAGE=tgz ;;
ubuntu) DEFAULT_PACKAGE=deb ;;
debian) DEFAULT_PACKAGE=deb ;;
*) DEFAULT_PACKAGE=rpm ;;
esac
AC_MSG_RESULT([$DEFAULT_PACKAGE])
AC_SUBST(DEFAULT_PACKAGE)
DEFAULT_INIT_DIR=$sysconfdir/init.d
AC_MSG_CHECKING([default init directory])
AC_MSG_RESULT([$DEFAULT_INIT_DIR])
AC_SUBST(DEFAULT_INIT_DIR)
AC_MSG_CHECKING([default init script type])
case "$VENDOR" in
toss) DEFAULT_INIT_SCRIPT=redhat ;;
redhat) DEFAULT_INIT_SCRIPT=redhat ;;
fedora) DEFAULT_INIT_SCRIPT=fedora ;;
gentoo) DEFAULT_INIT_SCRIPT=openrc ;;
alpine) DEFAULT_INIT_SCRIPT=openrc ;;
arch) DEFAULT_INIT_SCRIPT=lsb ;;
sles) DEFAULT_INIT_SCRIPT=lsb ;;
slackware) DEFAULT_INIT_SCRIPT=lsb ;;
lunar) DEFAULT_INIT_SCRIPT=lunar ;;
ubuntu) DEFAULT_INIT_SCRIPT=lsb ;;
debian) DEFAULT_INIT_SCRIPT=lsb ;;
*) DEFAULT_INIT_SCRIPT=lsb ;;
esac
AC_MSG_RESULT([$DEFAULT_INIT_SCRIPT])
AC_SUBST(DEFAULT_INIT_SCRIPT)
AC_MSG_CHECKING([default init config directory])
case "$VENDOR" in
alpine) DEFAULT_INITCONF_DIR=/etc/conf.d ;;
gentoo) DEFAULT_INITCONF_DIR=/etc/conf.d ;;
toss) DEFAULT_INITCONF_DIR=/etc/sysconfig ;;
redhat) DEFAULT_INITCONF_DIR=/etc/sysconfig ;;
fedora) DEFAULT_INITCONF_DIR=/etc/sysconfig ;;
sles) DEFAULT_INITCONF_DIR=/etc/sysconfig ;;
ubuntu) DEFAULT_INITCONF_DIR=/etc/default ;;
debian) DEFAULT_INITCONF_DIR=/etc/default ;;
*) DEFAULT_INITCONF_DIR=/etc/default ;;
esac
AC_MSG_RESULT([$DEFAULT_INITCONF_DIR])
AC_SUBST(DEFAULT_INITCONF_DIR)
AC_MSG_CHECKING([whether initramfs-tools is available])
if test -d /usr/share/initramfs-tools ; then
DEFINE_INITRAMFS='--define "_initramfs 1"'
AC_MSG_RESULT([yes])
else
DEFINE_INITRAMFS=''
AC_MSG_RESULT([no])
fi
AC_SUBST(DEFINE_INITRAMFS)
])
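dnl #
dnl # Worked example of the tables above (illustrative): if none of
dnl # the earlier release files exist but /etc/debian_version does,
dnl # then VENDOR=debian, 'make pkg' defaults to deb packages, the
dnl # lsb init script style is used, and init config files are read
dnl # from /etc/default.
dnl #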
dnl #
dnl # Default ZFS package configuration
dnl #
AC_DEFUN([ZFS_AC_PACKAGE], [
ZFS_AC_DEFAULT_PACKAGE
ZFS_AC_RPM
ZFS_AC_DPKG
ZFS_AC_ALIEN
])
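dnl #
dnl # These macros are intended to be invoked from the top-level
dnl # configure.ac; a sketch of the expected call order (assuming the
dnl # usual autoconf boilerplate around it):
dnl #
dnl #   ZFS_AC_LICENSE
dnl #   ZFS_AC_PACKAGE
dnl #   ZFS_AC_CONFIG
dnl #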