Hi,

From the lockdep annotation and the comment that existed before the
lockdep annotations were introduced, mm/slub.c:add_full(s, n, page)
expects to be called with n->list_lock held.
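
To make that contract (and the problem described below) concrete, here
is a minimal userspace sketch. It is illustrative only: the pthread
mutex, the list_lock_held flag and the debug parameter are stand-ins,
not the actual mm/slub.c code or the lockdep machinery.

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static bool list_lock_held;	/* crude stand-in for lockdep's bookkeeping */

/* Analogue of the lockdep annotation: add_full() must see the lock held. */
static void add_full(void)
{
	assert(list_lock_held);
	/* ... move the slab onto the node's full list ... */
}

/* Caller modelled on the deactivate_slab() path described below. */
static void deactivate_slab(bool debug)
{
	bool lock = false;

	/*
	 * Buggy pattern: the lock is only taken when debugging is
	 * enabled, yet add_full() is reached either way.  The patch
	 * below changes this condition to plain "if (!lock)".
	 */
	if (debug && !lock) {
		pthread_mutex_lock(&list_lock);
		list_lock_held = true;
		lock = true;
	}

	add_full();		/* asserts when debug == false */

	if (lock) {
		list_lock_held = false;
		pthread_mutex_unlock(&list_lock);
	}
}

int main(void)
{
	deactivate_slab(true);	/* lock held: fine */
	deactivate_slab(false);	/* lock not held: the assertion fires */
	return 0;
}

Built with -pthread and run, the second call trips the assert, which is
the userspace analogue of the lockdep warning that motivated this patch.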

However, there is a call path in deactivate_slab(), taken when

    (new.inuse || n->nr_partial <= s->min_partial) &&
    !(new.freelist) && !(kmem_cache_debug(s)),

which ends up calling add_full() without holding n->list_lock.

This was discovered while onlining/offlining cpus in 3.14-rc1, due to
the lockdep annotations added by commit
c65c1877bd6826ce0d9713d76e30a7bed8e49f38.

Fix this by taking the lock unconditionally, irrespective of the state
of kmem_cache_debug(s).

Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Pekka Enberg <penb...@kernel.org>
Signed-off-by: Gautham R. Shenoy <e...@linux.vnet.ibm.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7e3e045..1f723f7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1882,7 +1882,7 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
--
1.8.3.1