When the Linux 3.6 KERN_PATH_LOCKED compatibility code was added
by commit bcb1589 an entirely new vn_remove() implementation was
added. That function did not properly handle an error from
spl_kern_path_locked(), which would result in a panic. This
patch addresses the issue by returning the error to the caller.
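In rough terms the fix amounts to checking the pointer returned by
spl_kern_path_locked() with IS_ERR() and handing the error back instead
of dereferencing it; a minimal sketch (the exact error-code convention
used inside vn_remove() may differ):

    parent = spl_kern_path_locked(path, &dentry);
    if (IS_ERR(parent))
            return (-PTR_ERR(parent));  /* report the failure to the caller */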
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #187
The preferred kernel interface for creating threads has been
kthread_create() for a long time now. However, several of the
SPLAT tests still use the legacy kernel_thread() function which
has finally been dropped (mostly).
Update the condvar and rwlock SPLAT tests to use the modern
interface. Frankly this is something we should have done a
long time ago.
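For reference, a minimal sketch of the modern interface the tests were
moved to; the thread body and naming below are placeholders rather than
the actual SPLAT code:

    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/err.h>

    /* Placeholder standing in for the real condvar/rwlock test thread. */
    static int splat_test_thread(void *arg)
    {
            /* ... exercise the primitive under test ... */
            return 0;
    }

    static int splat_spawn_thread(void *arg, int id)
    {
            struct task_struct *tsk;

            /* kthread_create() replaces the legacy kernel_thread() call. */
            tsk = kthread_create(splat_test_thread, arg, "splat/%d", id);
            if (IS_ERR(tsk))
                    return PTR_ERR(tsk);

            wake_up_process(tsk);
            return 0;
    }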
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #194
Previously this check was only performed when ./configure was
attempting to autodetect your kernel source directory. But we
should also handle the case where --with-linux was provided
and is obviously wrong. This way we catch the error before
invoking make and compiling the source with incorrect
autoconf results.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #162
Allowing the spl_cache_grow_work() function to reclaim inodes
can lead to two unlikely deadlocks. Therefore, we clear __GFP_FS
for these allocations. The two deadlocks are:
* While holding the ZFS_OBJ_HOLD_ENTER(zsb, obj1) lock a function
calls kmem_cache_alloc() which happens to need to allocate a
new slab. To allocate the new slab we enter FS level reclaim
and attempt to evict several inodes. To evict these inodes we
need to take the ZFS_OBJ_HOLD_ENTER(zsb, obj2) lock and it
just happens that obj1 and obj2 use the same hashed lock.
* Similar to the first case however instead of getting blocked
on the hash lock we block in txg_wait_open() which is waiting
for the next txg which isn't coming because the txg_sync
thread is blocked in kmem_cache_alloc().
Note this isn't a 100% fix because vmalloc() won't strictly
honor __GFP_FS. However, in practice this is sufficient because
several very unlikely things must all occur concurrently.
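In outline, the change masks __GFP_FS out of the flags used for the
asynchronous slab allocation; a sketch of the idea, using the
three-argument __vmalloc() form of kernels from this era:

    #include <linux/vmalloc.h>
    #include <linux/gfp.h>

    /*
     * Allocate slab memory with __GFP_FS cleared so the allocator is
     * kept out of FS level reclaim (and from there out of a conflicting
     * ZFS_OBJ_HOLD_ENTER() hash lock or txg_wait_open()).
     */
    static void *slab_alloc_nofs(size_t size, gfp_t flags)
    {
            return __vmalloc(size, (flags & ~__GFP_FS) | __GFP_HIGHMEM,
                PAGE_KERNEL);
    }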
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#1101
This branch contains kmem cache optimizations designed to resolve
the lockups reported in zfsonlinux/zfs#922. The lockups were
largely the result of spin lock contention in the slab under low
memory conditions. Fundamentally, these changes are all designed
to minimize that contention through a variety of methods.
* Improved vmem cached deadlock detection
* Track emergency objects in rbtree
* Optimize spl_kmem_cache_free()
* Never spin in kmem_cache_alloc()
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#922
If we are reaping from the cache and a concurrent allocation
occurs then the caller must block until the reaping is complete.
This is signaled by the clearing of the KMC_BIT_REAPING bit.
Otherwise the caller will be in a tight loop which takes and
releases the skc->skc_cache lock. When there are multiple
concurrent callers the system will thrash on the lock and
appear to lock up.
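Conceptually, the wait looks like the sketch below (the real code blocks
on the cache's wait queue rather than polling; the helper name is
hypothetical):

    #include <linux/bitops.h>
    #include <linux/sched.h>

    /* Conceptual sketch: block until a concurrent reap finishes, which
     * is signaled by KMC_BIT_REAPING being cleared in skc_flags. */
    static void wait_for_reaping(unsigned long *skc_flags)
    {
            while (test_bit(KMC_BIT_REAPING, skc_flags))
                    schedule_timeout_uninterruptible(1);
    }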
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Because only virtual slabs may have emergency objects, and these
objects are guaranteed to have physical addresses, it can be
easily determined whether the passed object is a virtual slab
object or an emergency object. This allows us to completely optimize
the emergency object free case out of the common free path.
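The resulting dispatch is essentially one address check; a sketch where
cache_free_fast() and emergency_free() are hypothetical stand-ins for
the SPL's internal free paths:

    #include <linux/mm.h>   /* is_vmalloc_addr() */

    static void cache_free(struct spl_kmem_cache *skc, void *obj)
    {
            if (is_vmalloc_addr(obj))
                    cache_free_fast(skc, obj);  /* common case: virtual slab object */
            else
                    emergency_free(skc, obj);   /* rare kmalloc()'d emergency object */
    }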
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
In the initial implementation emergency objects were tracked on a
per-cache list. The assumption was that under normal operation we
would never allocate more than a handful of these objects. So the
cost of walking the list during free was expected to be negligible.
However, real-world usage has shown that emergency objects tend to
be allocated in batches. A deadlock will be detected and several
thousand emergency objects will be allocated before the original
blocked slab allocation can complete.
Therefore the original list has been replaced by a red-black tree
which is sorted by the memory address of each allocated object.
This bounds the worst case insertion and removal time to O(log n),
which minimizes contention on the associated spin lock.
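The insertion follows the standard Linux rbtree pattern, keyed on the
object's address; a sketch with a hypothetical node type:

    #include <linux/rbtree.h>

    /* Hypothetical per-emergency-object bookkeeping, keyed by address. */
    struct emergency_obj {
            struct rb_node  eo_node;
            void            *eo_addr;
    };

    /* Insert an emergency object sorted by its memory address: O(log n). */
    static int emergency_insert(struct rb_root *root, struct emergency_obj *eo)
    {
            struct rb_node **new = &root->rb_node, *parent = NULL;

            while (*new) {
                    struct emergency_obj *this =
                        rb_entry(*new, struct emergency_obj, eo_node);

                    parent = *new;
                    if (eo->eo_addr < this->eo_addr)
                            new = &(*new)->rb_left;
                    else if (eo->eo_addr > this->eo_addr)
                            new = &(*new)->rb_right;
                    else
                            return 0;   /* duplicate address: should not happen */
            }

            rb_link_node(&eo->eo_node, parent, new);
            rb_insert_color(&eo->eo_node, root);
            return 1;
    }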
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The entire goal of performing the slab allocations asynchronously
is to be able to detect when a vmalloc() deadlocks. In this case,
and only this case, do we want to start allocating emergency objects.
The trick here is to minimize false positives because the overhead
of tracking emergency objects is far higher than for normal slab objects.
With that goal in mind the code was reworked to be less sensitive
to slow allocations by increasing the wait time. Once a cache
is marked deadlocked all subsequent allocations which cannot be
satisfied with existing cache objects will immediately allocate new
emergency objects. This behavior persists until the asynchronous
allocation completes and clears the deadlocked flag.
The result of these tweaks is that far fewer emergency objects
get created which is important because this minimizes the cost of
releasing them later in kmem_cache_free().
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Additional debugging, some cleanup, and an assortment of fixes
to the SPLAT tests and infrastructure. Full details in the
individual patches.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Disable this test because it may result in an OOM event on the
system, which can cause the test infrastructure to be killed.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The Fedora 3.6 debug kernel identified the following issue where
we create a thread under a spin lock. This isn't safe because
sleeping could result in a deadlock. Therefore the lock is changed
to a mutex so it's safe to sleep.
BUG: sleeping function called from invalid context at mm/slub.c:930
in_atomic(): 1, irqs_disabled(): 0, pid: 10583, name: splat
1 lock held by splat/10583:
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The Fedora 3.6 debug kernel identified the following issue where
we call copy_to_user() under a spin lock. This used to be safe
in older kernels but no longer appears to be true so the spin
lock was changed to a mutex. None of this code is performance
critical so allowing the process to sleep is harmless.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Restructure the SPLAT headers such that each test only
includes the minimal set of headers it requires.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reference count every entry and exit from the condition variable
functions: cv_wait(), cv_wait_timeout(), cv_signal(), cv_broadcast().
This allows us to safely block in cv_destroy() until all consumers
have been scheduled and are no longer accessing the condition
variable memory.
In addition poison the magic value at the start of cv_destroy() to
ensure there are never any new callers after cv_destroy() is called.
The consumer is responsible for ensuring this never occurs.
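A simplified sketch of the scheme; the field names mirror the SPL's
kcondvar_t but the details here are illustrative:

    #include <sys/condvar.h>

    void __cv_signal(kcondvar_t *cvp)
    {
            ASSERT(cvp->cv_magic == CV_MAGIC);
            atomic_inc(&cvp->cv_refs);              /* entry reference */

            wake_up(&cvp->cv_event);

            if (atomic_dec_and_test(&cvp->cv_refs)) /* exit reference */
                    wake_up(&cvp->cv_destroy);
    }

    void __cv_destroy(kcondvar_t *cvp)
    {
            cvp->cv_magic = CV_DESTROY;             /* poison: reject new callers */

            /* Block until every in-flight consumer drops its reference. */
            wait_event(cvp->cv_destroy, atomic_read(&cvp->cv_refs) == 0);
    }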
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Bring in support for the new KSTAT_TYPE_TXG type. This allows for
additional visibility into the txg handling.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Add a new kstat type for tracking useful statistics about a TXG.
The new KSTAT_TYPE_TXG type can be used to track the following
statistics per txg.
txg          - Unique txg number
state        - State (O)pen/(Q)uiescing/(S)yncing/(C)ommitted
birth        - Creation time
nread        - Bytes read
nwritten     - Bytes written
reads        - IOPs read
writes       - IOPs write
open_time    - Length in nanoseconds the txg was open
quiesce_time - Length in nanoseconds the txg was quiescing
sync_time    - Length in nanoseconds the txg was syncing
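A sketch of what the per-txg record behind this kstat type might look
like (the exact field names and types in the SPL header may differ
slightly):

    #include <sys/types.h>
    #include <sys/time.h>

    typedef struct kstat_txg {
            u_longlong_t    kt_txg;          /* unique txg number */
            int             kt_state;        /* O/Q/S/C state */
            hrtime_t        kt_birth;        /* creation time */
            u_longlong_t    kt_nread;        /* bytes read */
            u_longlong_t    kt_nwritten;     /* bytes written */
            uint_t          kt_reads;        /* read IOPs */
            uint_t          kt_writes;       /* write IOPs */
            u_longlong_t    kt_open_time;    /* ns the txg was open */
            u_longlong_t    kt_quiesce_time; /* ns the txg was quiescing */
            u_longlong_t    kt_sync_time;    /* ns the txg was syncing */
    } kstat_txg_t;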
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Move the kstat ks_update() callback under the ks_lock. This
enables dynamically sized kstats without modification to the
kstat API.
* Create a kstat with the KSTAT_FLAG_VIRTUAL flag.
* Register a ->ks_update() callback which does:
o Free any existing ks_data buffer.
o Set ks_data_size to the kstat array size.
o Set ks_data to an allocated buffer of size ks_data_size.
o Populate the array of buffers with the required data.
The buffer allocated in the ks_update() callback is guaranteed
to remain allocated and valid while the proc sequence handler
iterates over the buffer. The lock will not be dropped until the
kstat_seq_stop() function is run, making it safe for concurrent
access. To allow the ks_update() callback to perform memory
allocations the lock was changed to a mutex.
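Putting those steps together, a hedged sketch of such a ->ks_update()
callback; my_count(), my_fill(), and my_entry_t are hypothetical:

    #include <sys/kstat.h>
    #include <sys/kmem.h>
    #include <linux/errno.h>

    static int
    my_kstat_update(kstat_t *ksp, int rw)
    {
            if (rw == KSTAT_WRITE)
                    return (EACCES);

            /* Safe: ks_lock (now a mutex) is held across this callback
             * and the proc sequence handlers that consume ks_data. */
            if (ksp->ks_data != NULL)
                    kmem_free(ksp->ks_data, ksp->ks_data_size);

            ksp->ks_ndata = my_count();
            ksp->ks_data_size = ksp->ks_ndata * sizeof (my_entry_t);
            ksp->ks_data = kmem_alloc(ksp->ks_data_size, KM_SLEEP);
            my_fill(ksp->ks_data, ksp->ks_ndata);

            return (0);
    }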
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Commit torvalds/linux@b8318b0 moved the __clear_close_on_exec()
function out of include/linux/fdtable.h and into fs/file.c,
making it unavailable to the SPL.
Now as it turns out we only used this function to tear down
some test infrastructure for the vn_getf()/vn_releasef() SPLAT
regression tests. Rather than implement even more autoconf
compatibility code to handle this, we just remove the test case.
This also allows us to drop three existing autoconf tests.
This does mean the SPLAT tests will no longer verify these
functions but historically they have never been a problem.
And if we feel we absolutely need this test coverage I'm
sure a more portable version of the test case could be added.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #183
The kern_path_parent() function was removed from Linux 3.6 because
it was observed that all the callers just want the parent dentry.
The simpler kern_path_locked() function replaces kern_path_parent()
and does the lookup while holding the ->i_mutex lock.
This is good news for the vn implementation because it removes the
need for us to handle the locking. However, it makes it harder to
implement a single readable vn_remove()/vn_rename() function which
is usually what we prefer.
Therefore, we implement a new version of vn_remove()/vn_rename()
for Linux 3.6 and newer kernels. This allows us to leave the
existing working implementation untouched, and to add a simpler
version for newer kernels.
Long term I would very much like to see all of the vn code removed
since what this code enabled is generally frowned upon in the kernel.
But that can't happen until we either abandon the zpool.cache file
or implement alternate infrastructure to update it correctly in
user space.
Signed-off-by: Yuxuan Shui <yshuiv7@gmail.com>
Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #154
In this particular instance the allocation occurred in the context
of sys_msync()->...->zpl_putpage() where we must be careful not to
initiate additional I/O.
Signed-off-by: Massimo Maggi <massimo@mmmm.it>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This adds an interface to "punch holes" (deallocate space) in VFS
files. The interface is identical to the Solaris VOP_SPACE interface.
This interface is necessary for TRIM support on file vdevs.
This is implemented using Linux fallocate(FALLOC_FL_PUNCH_HOLE), which
was introduced in 2.6.38. For a brief time before 2.6.38 this was done
using the truncate_range inode operation, which was quickly deprecated.
This patch only supports FALLOC_FL_PUNCH_HOLE.
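For context, this is what the underlying mechanism looks like from user
space (a minimal sketch; inside the SPL the equivalent in-kernel
fallocate file operation is used):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>

    /* Deallocate len bytes at offset without changing the file size. */
    static int punch_hole(int fd, off_t offset, off_t len)
    {
            return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                offset, len);
    }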
This adds support for the truncate_range() inode operation to
VOP_SPACE() for file hole punching. This API is deprecated and removed
in 3.5, so it's only useful for old kernels.
On tmpfs, the truncate_range() inode operation translates to
shmem_truncate_range(). Unfortunately, this function expects the end
offset to be inclusive and aligned to the end of a page. If it is not,
the kernel will stop with a BUG_ON().
This patch fixes the issue by adapting to the constraints set forth by
shmem_truncate_range().
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #168
Under certain circumstances the following functions may be called
in a context where KM_SLEEP is unsafe and can result in a deadlocked
system. To avoid this problem the unconditional KM_SLEEPs are
converted to KM_PUSHPAGEs. This will prevent them from attempting
to initiate any I/O during direct reclaim.
This change was originally part of cd5ca4b but was reverted by
330fe01. It always should have had its own commit for exactly
this reason.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
When the taskq code was originally written it seemed like a good
idea to simply map TQ_SLEEP to KM_SLEEP. Unfortunately, this
assumed that the TQ_* flags would never conflict with any of the
Linux GFP_* flags. When adding the TQ_PUSHPAGE support in commit
cd5ca4b this invariant was accidentally broken.
Therefore to support TQ_PUSHPAGE, which is needed for Linux, and
prevent any further confusion I have removed this direct mapping.
The TQ_SLEEP, TQ_NOSLEEP, and TQ_PUSHPAGE flags are no longer
defined in terms of their KM_* counterparts. Instead a simple
mapping function is introduced to convert TQ_* -> KM_* where needed.
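A sketch of such a conversion helper (flag names follow the SPL
headers; the real helper may differ in detail):

    /* Convert public TQ_* dispatch flags into KM_* allocation flags now
     * that the two flag namespaces are fully independent. */
    static int
    task_km_flags(uint_t flags)
    {
            if (flags & TQ_NOSLEEP)
                    return (KM_NOSLEEP);
            if (flags & TQ_PUSHPAGE)
                    return (KM_PUSHPAGE);
            return (KM_SLEEP);
    }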
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #171
This reverts commit cd5ca4b2f8
due to conflicts in the higher TQ_ bits which caused incorrect
behavior.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
spl_config.h.in is a generated file: remove and .gitignore it
Signed-off-by: Chris Dunlop <chris@onthe.net.au>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
There still appears to be a race in the condition variables where
->cv_mutex is set after we are woken from the cv_destroy wait queue.
This might be possible when cv_destroy() is called immediately after
cv_broadcast(). We had some troubles with this previously but
there may still be a small race, see commit d599e4f.
The following patch closes one small race and improves the ASSERTs
such that they log the offending value.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#943
The workspace required by zlib to perform compression is roughly
512KB (order-7). These allocations are so large that we should
never attempt to directly kmalloc an emergency object for them.
It is far preferable to asynchronously vmalloc an additional slab
in case it's needed. Then simply block waiting for an existing
object to be released or for the new slab to be allocated.
This can be accomplished by disabling emergency slab objects by
passing the KMC_NOEMERGENCY flag at slab creation time.
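In practice that means creating the workspace cache roughly like this
(a hedged sketch; the nine-argument kmem_cache_create() wrapper call
and the size variable are abbreviated and illustrative):

    #include <sys/kmem.h>

    static kmem_cache_t *zlib_workspace_cache;

    static void
    zlib_workspace_cache_init(size_t zlib_workspace_size)
    {
            /* KMC_NOEMERGENCY: never kmalloc() an emergency object for
             * these huge buffers; KMC_VMEM: back the slab with vmalloc(). */
            zlib_workspace_cache = kmem_cache_create(
                "spl_zlib_workspace_cache", zlib_workspace_size,
                0, NULL, NULL, NULL, NULL, NULL,
                KMC_VMEM | KMC_NOEMERGENCY);
    }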
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#917
Provide a flag to disable the use of emergency objects for a
specific kmem cache. There may be instances where under no
circumstances should you kmalloc() an emergency object. For
example, when your cache contains very large objects (>128k).
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
See dechamps/zfs@cc6cd40ad7 for details.
This harmless addition was merged to simplify testing the ZFS TRIM
support patches.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #167
When various kernel debugging options are enabled this allocation
may be larger than usual as shown by the following warning. It
is in no way harmful so we suppress the warning.
SPL: large kmem_alloc(40960, 0x80d0) at
tsd_hash_table_init:358 (76495/76495)
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #93
After the emergency slab objects were merged I started observing
timeout failures in the kmem:slab_overcommit test. These were
due to the inefficient way the slab_overcommit reclaim function
was implemented, and due to the additional cost of potentially
allocating tens of thousands of emergency objects and tracking
them on a single list.
This patch addresses the first concern by enhancing the test
case to track all of the allocated objects in a linked list.
This allows for a cleaner version of the reclaim function to
simply release SPLAT_KMEM_OBJ_RECLAIM objects.
Since this touches some common code all the tests which share
these data structures were also updated. After making these
changes slab_overcommit is reliably passing. However, there
is certainly additional cleanup which could be done here.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Under certain circumstances the following functions may be called
in a context where KM_SLEEP is unsafe and can result in a deadlocked
system. To avoid this problem the unconditional KM_SLEEPs are
converted to KM_PUSHPAGEs. This will prevent them from attempting
to initiate any I/O during direct reclaim.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Generate an assertion if we're going to deadlock the system by
attempting to acquire a mutex the process is already holding.
There are currently no known instances of this under normal
operation, but it _might_ be possible when using a ZVOL as a
swap device. I want to ensure we catch this immediately if it
were to occur.
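The check itself is small; a sketch (mutex_enter_checked() is a
hypothetical wrapper, the real assertion lives inside the SPL mutex
implementation):

    #include <sys/mutex.h>
    #include <sys/debug.h>

    static inline void
    mutex_enter_checked(kmutex_t *mp)
    {
            /* Acquiring a mutex we already own would deadlock. */
            ASSERT3P(mutex_owner(mp), !=, current);
            mutex_enter(mp);
    }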
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
PF_NOFS is a per-process debug flag which is set in current->flags to
detect when a process is performing an unsafe allocation. All tasks
with PF_NOFS set must strictly use KM_PUSHPAGE for allocations because
if they enter direct reclaim and initiate I/O they may deadlock.
When debugging is disabled, any incorrect usage will be detected and
a call stack with a warning will be printed to the console. The flags
will then be automatically corrected to allow for safe execution. If
debugging is enabled this will be treated as a fatal condition.
To avoid any risk of conflicting with the existing PF_ flags, the
PF_NOFS bit shadows the rarely used PF_MUTEX_TESTER bit. Only when
CONFIG_RT_MUTEX_TESTER is not set, and we know this bit is unused,
will the PF_NOFS bit be valid. Happily, most existing distributions
ship a kernel with CONFIG_RT_MUTEX_TESTER disabled.
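A sketch of the flag definition and the correction path described
above; the real SPL helper also dumps a call stack, or treats the
condition as fatal when debugging is enabled:

    #include <linux/sched.h>
    #include <linux/gfp.h>

    #ifndef CONFIG_RT_MUTEX_TESTER
    #define PF_NOFS         PF_MUTEX_TESTER /* shadow the unused bit */
    #endif

    static inline void
    sanitize_flags(struct task_struct *p, gfp_t *flags)
    {
            if (unlikely((p->flags & PF_NOFS) &&
                (*flags & (__GFP_IO | __GFP_FS)))) {
                    printk(KERN_WARNING "Fixing allocation for task %s which "
                        "used GFP flags 0x%x with PF_NOFS set\n",
                        p->comm, (unsigned int)*flags);
                    *flags &= ~(__GFP_IO | __GFP_FS);
            }
    }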
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This reverts commit 2092cf68d8. The
use of the PF_MEMALLOC flag was always a hack to work around memory
reclaim deadlocks. Those issues are believed to be resolved so this
workaround can be safely reverted.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This reverts commit b8b6e4c453. The
use of the PF_MEMALLOC flag was always a hack to work around memory
reclaim deadlocks. Those issues are believed to be resolved so this
workaround can be safely reverted.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This reverts commit 36811b4430,
which is no longer required because there is now SPL code in
place to safely handle the deadlocks the kernel patch was designed
to address. Therefore we can unconditionally use vmalloc() and
drop all the PF_MEMALLOC code.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This reverts commit 372c257233. The
use of the PF_MEMALLOC flag was always a hack to work around memory
reclaim deadlocks. Those issues are believed to be resolved so this
workaround can be safely reverted.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
This patch is designed to resolve a deadlock which can occur with
__vmalloc() based slabs. The issue is that the Linux kernel does
not honor the flags passed to __vmalloc(). This makes it unsafe
to use in a writeback context. Unfortunately, this is a use case
ZFS depends on for correct operation.
Fixing this issue in the upstream kernel was pursued and patches
are available which resolve the issue.
https://bugs.gentoo.org/show_bug.cgi?id=416685
However, these changes were rejected because upstream felt that
using __vmalloc() in the context of writeback should never be done.
Their solution was for us to rewrite parts of ZFS to accommodate
the Linux VM.
While that is probably the right long term solution, and it is
something we want to pursue, it is not a trivial task and will
likely destabilize the existing code. This work has been planned
for the 0.7.0 release but in the meanwhile we want to improve the
SPL slab implementation to accommodate this expected ZFS usage.
This is accomplished by performing the __vmalloc() asynchronously
in the context of a work queue. This doesn't prevent the possibility
of the worker thread deadlocking. However, the caller can now
safely block on a wait queue for the slab allocation to complete.
Normally this will occur in a reasonable amount of time and the
caller will be woken up when the new slab is available. The objects
will then get cached in the per-cpu magazines and everything will
proceed as usual.
However, if the __vmalloc() deadlocks for the reasons described
above, or is just very slow, then the callers on the wait queues
will time out. When this rare situation occurs they will attempt
to kmalloc() a single minimally sized object using the GFP_NOIO flags.
This allocation will not deadlock because kmalloc() will honor the
passed flags and the caller will be able to make forward progress.
As long as forward progress can be maintained then even if the
worker thread is deadlocked the critical thread will make progress.
This will eventually allow the deadlocked worker thread to complete
and normal operation will resume.
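A rough sketch of the overall strategy; slab_grow_async(),
slab_alloc_try(), and the skc_waitq field are hypothetical stand-ins
for the SPL internals:

    #include <linux/wait.h>
    #include <linux/slab.h>

    static void *
    cache_alloc_sketch(struct spl_kmem_cache *skc, size_t size)
    {
            void *obj = slab_alloc_try(skc);

            if (obj != NULL)
                    return (obj);

            /* Grow the cache with __vmalloc() from a work queue. */
            slab_grow_async(skc);

            /* Block for a bounded time waiting for the new slab. */
            wait_event_timeout(skc->skc_waitq,
                (obj = slab_alloc_try(skc)) != NULL, HZ);

            /*
             * Timed out: the worker may be stuck in writeback, so fall back
             * to a single minimally sized emergency object. GFP_NOIO is
             * honored by kmalloc() so this cannot recurse into I/O.
             */
            if (obj == NULL)
                    obj = kmalloc(size, GFP_NOIO);

            return (obj);
    }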
These emergency allocations will likely be slow since they require
contiguous pages. However, their use should be rare so the impact
is expected to be minimal. If that turns out not to be the case in
practice further optimizations are possible.
One additional concern is whether these emergency objects are long lived.
Right now they are simply tracked on a list which must be walked when
an object is freed. If they accumulate on a system and the list
grows, freeing objects will become more expensive. This could be
handled relatively easily by using a hash instead of a list, but that
optimization (if needed) is left for a follow up patch.
Additionally, these emergency objects could be repacked into existing
slabs as objects are freed if the kmem_cache_set_move() functionality
was implemented. See issue https://github.com/zfsonlinux/spl/issues/26
for full details. This work would also help reduce ZFS's memory
fragmentation problems.
The /proc/spl/kmem/slab file has had two new columns added at the
end. The 'emerg' column reports the current number of these emergency
objects in use for the cache, and the following 'max' column shows
the historical worst case. These values should give us a good idea
of how often these objects are needed. Based on these values under
real use cases we can tune the default behavior.
Lastly, as a side benefit using a single work queue for the slab
allocations should reduce cpu contention on the global virtual address
space lock. This should manifest itself as reduced cpu usage for
the system.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Since removing the check for CONFIG_PREEMPT, there are no consumers of
the SPL_LINUX_CONFIG macro. As such, there is no reason to keep it
around.
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #164
Remove all of the generated autotools products from the repository
and update the .gitignore files accordingly.
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#718
To support preempt enabled kernels in ZFS on Linux, there are a couple
places where the ZFS code needs to disable interrupts. This change adds
the Solaris preempt functions and maps them to the equivalent Linux
functions, allowing ZFS to make use of them.
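The mapping amounts to a pair of thin aliases for the Linux preemption
primitives; a minimal sketch:

    #include <linux/preempt.h>

    #define kpreempt_disable()      preempt_disable()
    #define kpreempt_enable()       preempt_enable()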
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #98
The spl_magazine_age function had the implied assumption that it would
remain on its current cpu throughout its execution. In order to support
preempt enabled kernels, this assumption had to be removed.
The spl_kmem_magazine structure now holds the cpu id of the cpu it is
local to. This allows spl_magazine_age to use this field when scheduling
work to be done by the magazine's local cpu.
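In outline (field and variable names are illustrative), the aging work
is now queued explicitly on the magazine's recorded cpu rather than on
whichever cpu happens to be running:

    #include <linux/workqueue.h>

    static void
    spl_magazine_age_sketch(spl_kmem_cache_t *skc, spl_kmem_magazine_t *skm)
    {
            /* skm_cpu was recorded when the magazine was created, so the
             * work always runs on the magazine's local cpu even if this
             * thread has since been preempted and migrated. */
            schedule_delayed_work_on(skm->skm_cpu, &skm->skm_work,
                skc->skc_delay / 3 * HZ);
    }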
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #98
When building SPL support into the kernel, ./copy-builtin will copy
non-toplevel .gitignore files. These files list /Makefile, which causes
git-archive to omit ./module/{spl,splat}/Makefile. The absence of these
files results in build failures when SPL is selected. ZFS is unaffected
because it puts Makefile in the toplevel .gitignore, which is not
copied. We fix SPL by emulating that behavior.
Reported-by: Fabio Erculiani <lxnay@gentoo.org>
Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #152
To properly support CONFIG_PREEMPT enabled kernels, we must refrain from
using a CPU index when preemption is enabled. As a result, this change
moves the trace_set_debug_header call (which calls smp_processor_id)
within trace_get_tcd and trace_put_tcd (which disable and enable
preemption respectively).
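A sketch of the pattern (the types and the trace_set_debug_header()
arguments are illustrative): the cpu index is only sampled while
preemption is disabled, and preemption stays disabled until
trace_put_tcd() releases it:

    #include <linux/preempt.h>
    #include <linux/smp.h>

    static struct trace_cpu_data *
    trace_get_tcd(void)
    {
            struct trace_cpu_data *tcd;

            preempt_disable();
            tcd = &trace_data[smp_processor_id()];
            trace_set_debug_header(tcd);    /* safe: cpu cannot change here */

            return (tcd);
    }

    static void
    trace_put_tcd(struct trace_cpu_data *tcd)
    {
            preempt_enable();
    }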
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #160