mirror_zfs/include/linux/rwsem_compat.h

/*****************************************************************************\
* Copyright (C) 2007-2010 Lawrence Livermore National Security, LLC.
* Copyright (C) 2007 The Regents of the University of California.
* Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
* Written by Brian Behlendorf <behlendorf1@llnl.gov>.
* UCRL-CODE-235197
*
* This file is part of the SPL, Solaris Porting Layer.
* For details, see <http://zfsonlinux.org/>.
*
* The SPL is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* The SPL is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
* You should have received a copy of the GNU General Public License along
* with the SPL. If not, see <http://www.gnu.org/licenses/>.
\*****************************************************************************/

#ifndef _SPL_RWSEM_COMPAT_H
#define _SPL_RWSEM_COMPAT_H

#include <linux/rwsem.h>
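
/*
* Newer kernels (circa Linux 3.2 and later, where the build's configure
* check defines RWSEM_SPINLOCK_IS_RAW) declare the rwsem wait_lock as a
* raw_spinlock_t, which must be manipulated with the raw_spin_*()
* primitives; older kernels use a plain spinlock_t.  These wrappers hide
* that difference from the code below.
*/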
#if defined(RWSEM_SPINLOCK_IS_RAW)
#define spl_rwsem_lock_irqsave(lk, fl) raw_spin_lock_irqsave(lk, fl)
#define spl_rwsem_unlock_irqrestore(lk, fl) raw_spin_unlock_irqrestore(lk, fl)
#define spl_rwsem_trylock_irqsave(lk, fl) raw_spin_trylock_irqsave(lk, fl)
#else
#define spl_rwsem_lock_irqsave(lk, fl) spin_lock_irqsave(lk, fl)
#define spl_rwsem_unlock_irqrestore(lk, fl) spin_unlock_irqrestore(lk, fl)
#define spl_rwsem_trylock_irqsave(lk, fl) spin_trylock_irqsave(lk, fl)
#endif /* RWSEM_SPINLOCK_IS_RAW */

/*
* Prior to Linux 2.6.33 there existed a race condition in rwsem_is_locked().
* The semaphore's activity was checked outside of the wait_lock, which
* could result in some readers getting an incorrect activity value.
*
* When a kernel without this fix is detected, the SPL takes responsibility
* for acquiring the wait_lock itself to avoid this race.
*/
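
/*
* RWSEM_IS_LOCKED_TAKES_WAIT_LOCK is set by a configure check that looks
* for rwsem_is_locked() in module.symvers: the fixed version is an
* exported symbol, while the racy pre-2.6.33 version was a static inline
* and therefore leaves no symbol to find.
*/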
#if defined(RWSEM_IS_LOCKED_TAKES_WAIT_LOCK)
#define spl_rwsem_is_locked(rwsem) rwsem_is_locked(rwsem)
#else
static inline int
spl_rwsem_is_locked(struct rw_semaphore *rwsem)
{
	unsigned long flags;
	int rc = 1;

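	/*
	 * A conservative default: if the wait_lock cannot be acquired,
	 * another thread is manipulating the semaphore, so report it
	 * as locked.
	 */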
	if (spl_rwsem_trylock_irqsave(&rwsem->wait_lock, flags)) {
		rc = rwsem_is_locked(rwsem);
		spl_rwsem_unlock_irqrestore(&rwsem->wait_lock, flags);
	}

	return (rc);
}

#endif /* RWSEM_IS_LOCKED_TAKES_WAIT_LOCK */
#endif /* _SPL_RWSEM_COMPAT_H */
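
/*
* Usage sketch (illustrative only): the SPL's RW_*_HELD() checks are the
* expected consumers of this wrapper; they call spl_rwsem_is_locked()
* rather than rwsem_is_locked() directly so the correct wait_lock
* handling is selected at configure time.  Assuming a hypothetical SEM()
* accessor yielding the embedded struct rw_semaphore, such a check might
* look like:
*
*	#define RW_LOCK_HELD(rwp)	spl_rwsem_is_locked(SEM(rwp))
*/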