commit 436ad60faa
The kmem cache implementation always adds new slabs by dispatching a task to the spl_kmem_cache taskq to perform the allocation. This is done because large slabs must be allocated using vmalloc(), and it is possible these allocations will block on IO because the GFP_NOIO flag is not honored. This can result in a deadlock.

Therefore, a deadlock detection strategy was implemented to handle this case. When it is determined, by timeout, that the spl_kmem_cache thread has deadlocked attempting to add a new slab, all callers attempting to allocate from the cache fall back to using kmalloc(), which does honor all passed flags.

This logic was correct, but an optimization in the code still allowed for a deadlock. Because only slabs backed by vmalloc() can deadlock in the way described above, the deadlock detection code was only invoked for vmalloc() backed caches. This had the advantage of making it easy to distinguish these objects when they were freed, but it isn't strictly safe. If all the spl_kmem_cache threads end up deadlocked, then we can't grow any of the other caches either. This can once again result in a deadlock if memory needs to be allocated from one of these other caches to ensure forward progress.

The fix here is to remove the optimization which limits this fallback allocation strategy to vmalloc() backed caches. Doing this means we may need to take the cache lock in the spl_kmem_cache_free() call path, but this small cost can be mitigated by ignoring objects with virtual addresses.

For good measure the default number of spl_kmem_cache threads has been increased from 1 to 4, and made tunable. This alone wouldn't resolve the original issue since it's still possible for all the threads to be deadlocked. However, it does help responsiveness by ensuring that a single deadlocked spl_kmem_cache thread doesn't block allocations from other caches until the timeout is reached.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
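The fallback pattern described above can be summarized with a short sketch. This is not the SPL source: the example_cache type and the cache_* helpers below are hypothetical stand-ins for the internals the message refers to, and only kmalloc() and the GFP flags are real kernel APIs. The point is the shape of the logic the fix generalizes to all caches: slab growth is dispatched to a worker thread, and a timeout triggers a direct kmalloc() that honors the caller's flags.

/*
 * Illustrative sketch only; the helper names (example_cache,
 * cache_grow_async(), cache_grow_timed_out(), cache_take_object(),
 * cache_object_size()) are hypothetical and stand in for the SPL
 * internals rather than reproducing them.
 */
#include <linux/slab.h>

struct example_cache;

/* Hand slab growth off to the worker (taskq) thread. */
extern void cache_grow_async(struct example_cache *skc);

/* True when the worker has not made progress within the timeout. */
extern int cache_grow_timed_out(struct example_cache *skc);

/* Take a free object from an existing slab, or NULL if none. */
extern void *cache_take_object(struct example_cache *skc);

/* Object size, needed for the kmalloc() fallback. */
extern size_t cache_object_size(struct example_cache *skc);

static void *
cache_alloc_sketch(struct example_cache *skc, gfp_t flags)
{
	void *obj;

	for (;;) {
		/* Fast path: an existing slab already has a free object. */
		obj = cache_take_object(skc);
		if (obj != NULL)
			return (obj);

		/* Otherwise ask the worker thread to add a new slab. */
		cache_grow_async(skc);

		/*
		 * If the worker appears deadlocked (e.g. blocked on IO
		 * inside vmalloc()), satisfy this allocation directly
		 * with kmalloc(), which honors flags such as GFP_NOIO.
		 * The fix above applies this fallback to every cache,
		 * not only vmalloc() backed ones.
		 */
		if (cache_grow_timed_out(skc))
			return (kmalloc(cache_object_size(skc), flags));
	}
}

Removing the vmalloc()-only restriction is what makes the free path slightly more expensive: without the address-based shortcut, spl_kmem_cache_free() may need to take the cache lock to tell an emergency kmalloc() object from a slab object, which the message notes can be mitigated by ignoring objects with virtual addresses.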
The Solaris Porting Layer (SPL) is a Linux kernel module which provides many of the Solaris kernel APIs. This shim layer makes it possible to run Solaris kernel code in the Linux kernel with relatively minimal modification. This can be particularly useful when you want to track upstream Solaris development closely and do not want the overhead of maintaining a large patch which converts Solaris primitives to Linux primitives.
To build packages for your distribution:
$ ./configure
$ make pkg
If you are building directly from the git tree, and not from an officially released tarball, you will first need to generate the configure script. This can be done by running the autogen.sh script after installing the GNU autotools for your distribution, as shown below.
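For example, assuming the GNU autotools (autoconf, automake, libtool) are already installed, packages can be built from a git checkout as follows:
$ ./autogen.sh
$ ./configure
$ make pkg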
To copy the kernel code into your kernel source tree for built-in compilation:
$ ./configure --enable-linux-builtin --with-linux=/usr/src/linux-...
$ ./copy-builtin /usr/src/linux-...
The SPL comes with an automated test suite called SPLAT. The test suite is implemented in two parts. There is a kernel module which contains the tests and a user space utility which controls which tests are run. To run the full test suite:
$ sudo insmod ./module/splat/splat.ko
$ sudo ./cmd/splat --all
Full documentation for building, configuring, testing, and using the SPL can be found at: http://zfsonlinux.org