Commit Graph

9 Commits

Author SHA1 Message Date
Brian Behlendorf
63a93055fb Coverity 9657: Resource Leak
Accidentally leaked the list item li in an error path.  The fix is to
adjust this error path to ensure the allocated list item, which has
not yet been added to the list, gets freed.  To do this we simply add
a new goto label slightly earlier to reuse the existing cleanup logic
and minimize the number of unique return points.
2009-02-18 10:16:26 -08:00
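
A minimal, compilable sketch of the goto-based cleanup pattern this commit
describes; the list structure and helper names here are hypothetical, not
the actual SPLAT symbols:

#include <stdlib.h>
#include <errno.h>

struct list_item {
        struct list_item *next;
        int value;
};

/* Hypothetical list head and validation helper for illustration. */
static struct list_item *list_head;
static int validate(int v) { return (v < 0) ? -EINVAL : 0; }

static int
list_add_item(int value)
{
        struct list_item *li;
        int rc;

        li = malloc(sizeof(*li));       /* not yet linked into the list */
        if (li == NULL)
                return -ENOMEM;

        li->value = value;
        rc = validate(value);
        if (rc)
                goto out_free;          /* new, earlier label: li must be freed */

        li->next = list_head;           /* only now does the list own li */
        list_head = li;
        return 0;

out_free:
        free(li);                       /* error path no longer leaks li */
        return rc;
}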
Brian Behlendorf
02c7f16494 Coverity 9656: Forward NULL
This was a false positive; the call path being walked is impossible
because the splat_kmem_cache_test_kcp_alloc() function will ensure
kcp->kcp_kcd[0] is initialized to NULL.  However, there is no harm
in making this explicit for the test case, so I'm adding a line to
clearly set it and correct the analysis.
2009-02-18 10:09:01 -08:00
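
A small illustration of the idea, with a hypothetical structure standing
in for the SPLAT test state; explicitly NULL-ing every slot makes the
invariant visible to both readers and the static analyzer:

#include <stddef.h>

#define KCD_SLOTS 32                    /* illustrative slot count */

struct kcp {
        void *kcp_kcd[KCD_SLOTS];       /* object descriptors */
};

static void
kcp_init(struct kcp *kcp)
{
        int i;

        /* Explicitly NULL each descriptor so any later
         * "if (kcp->kcp_kcd[i])" check provably covers all paths. */
        for (i = 0; i < KCD_SLOTS; i++)
                kcp->kcp_kcd[i] = NULL;
}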
Brian Behlendorf
31a033ecd4 2.6.27+ portability changes
- Added SPL_AC_3ARGS_ON_EACH_CPU configure check to determine
  if the older 4 argument version of on_each_cpu() should be
  used or the new 3 argument version.  The retry argument,
  which was never used anyway, was dropped in the new API.
- Updated work queue compatibility wrappers.  The old way this
  worked was to pass a data pointer when initializing the
  workqueue.  The new API assumes the work item is embedded in a
  structure and we use container_of() to find that data pointer
  (a sketch follows this entry).
- Updated skc->skc_flags to be an unsigned long which is now
  type checked in the bit operations.  This silences the warnings.
- Updated autogen products and splat tests accordingly
2009-02-02 15:12:30 -08:00
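
A sketch of the new-style (2.6.20+) work queue pattern referenced above;
struct my_data and its members are illustrative names, not SPL symbols:

#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_data {
        int                md_value;
        struct work_struct md_work;     /* work item embedded in owner */
};

static void
my_work_handler(struct work_struct *work)
{
        /* Recover the enclosing structure from the embedded member;
         * this replaces the old explicit data pointer. */
        struct my_data *md = container_of(work, struct my_data, md_work);

        pr_info("value = %d\n", md->md_value);
}

static void
my_data_setup(struct my_data *md)
{
        INIT_WORK(&md->md_work, my_work_handler);  /* new 2-arg form */
        schedule_work(&md->md_work);
}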
Brian Behlendorf
10a4be0f03 Update thread tests to have max_time 2009-01-30 21:24:42 -08:00
Brian Behlendorf
ea3e6ca9e5 kmem_cache hardening and performance improvements
- Added a slab work queue task which gradually ages and frees slabs
  from the cache which have not been used recently.
- Optimized the slab packing algorithm to ensure each slab contains
  the maximum number of objects without creating too large a slab.
- Fix deadlock: we can never call kv_free() under the skc_lock.  We
  now unlink the objects and slabs from the cache itself and attach
  them to a private work list.  The contents of the list are then
  subsequently freed outside the spin lock (see the sketch after
  this entry).
- Move magazine create/destroy operations onto the local CPU.
- Further performance optimizations by minimizing the usage of the
  large per-cache skc_lock.  This includes the addition of the
  KMC_BIT_REAPING bit mask which is used to prevent concurrent
  reaping, and to defer new slab creation while reaping is occurring.
- Add KMC_BIT_DESTROYING bit mask which is set when the cache is
  being destroyed; this is used to catch any task accessing the
  cache while it is being destroyed.
- Add comments to all the functions and additional comments to try
  and make everything as clear as possible.
- Major cleanup and additions to the SPLAT kmem tests to more
  rigorously stress the cache implementation and look for any
  problems.  This includes correctness and performance tests.
- Updated portable work queue interfaces
2009-01-30 20:54:49 -08:00
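
A sketch of the unlink-then-free pattern behind the deadlock fix above.
The structures and the sks_ref idle check are simplified stand-ins;
kv_free() is named in the commit and assumed here to be a potentially
sleeping free routine:

#include <linux/list.h>
#include <linux/spinlock.h>

struct slab {
        struct list_head sks_list;
        unsigned int     sks_ref;       /* objects currently in use */
};

struct cache {
        spinlock_t       skc_lock;
        struct list_head skc_partial;   /* slabs with free objects */
};

/* kv_free() may sleep, so it must never run under skc_lock;
 * that is the deadlock this commit fixes. */
extern void kv_free(struct cache *skc, struct slab *sks);

static void
cache_reap(struct cache *skc)
{
        LIST_HEAD(private);             /* private, local work list */
        struct slab *sks, *tmp;

        spin_lock(&skc->skc_lock);
        list_for_each_entry_safe(sks, tmp, &skc->skc_partial, sks_list)
                if (sks->sks_ref == 0)
                        list_move(&sks->sks_list, &private); /* unlink only */
        spin_unlock(&skc->skc_lock);

        /* The actual freeing happens outside the spin lock. */
        list_for_each_entry_safe(sks, tmp, &private, sks_list) {
                list_del(&sks->sks_list);
                kv_free(skc, sks);
        }
}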
Brian Behlendorf
48e0606a52 Implement kmem cache alignment argument 2009-01-26 09:02:04 -08:00
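
A compilable toy showing the arithmetic an alignment argument implies:
each object's effective size is rounded up to the requested power-of-two
alignment so every object starts on an aligned boundary.  ALIGN_UP is a
local macro for illustration, not the SPL implementation:

#include <assert.h>
#include <stddef.h>

#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((size_t)(a) - 1))

int
main(void)
{
        size_t obj_size = 52;
        size_t align = 16;              /* must be a power of two */
        size_t stride = ALIGN_UP(obj_size, align);

        assert(stride == 64);           /* 52 rounded up to a 16-byte boundary */
        assert(stride % align == 0);
        return 0;
}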
Brian Behlendorf
3f4126739d Sleep uninterruptibly; waking up early may result in a crash 2009-01-22 09:58:48 -08:00
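
A sketch of the intent, assuming the crash came from a signal waking an
interruptible sleeper before its condition held; wait_event() sleeps in
TASK_UNINTERRUPTIBLE so signals cannot cut the wait short:

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static int done;                        /* set by the waker before wake_up(&wq) */

static void
wait_until_done(void)
{
        /* wait_event_interruptible(wq, done) can return -ERESTARTSYS
         * on a signal while done is still 0; proceeding then would
         * touch state that is not yet valid.  wait_event() cannot be
         * interrupted, so we only return once done is true. */
        wait_event(wq, done != 0);
}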
Brian Behlendorf
ae3b87f908 KMEM_TRACKING turned up a missing free in list test 6; fix the leak 2009-01-20 12:47:53 -08:00
Brian Behlendorf
617d5a673c Rename modules to module and update references 2009-01-15 10:44:54 -08:00