On Fri, 6 Jun 2014, Vladimir Davydov wrote:

> This patch makes SLUB's implementation of kmem_cache_free
> non-preemptable. As a result, synchronize_sched() will work as a barrier
> against kmem_cache_free's in flight, so that issuing it before cache
> destruction will protect us against the use-after-free.
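(Aside for readers outside mm/: the barrier pattern the quoted text relies on can be sketched roughly as below. This is a minimal illustration, not the actual SLUB code; example_free() and example_destroy() are made-up names. Because the free side runs with preemption disabled, synchronize_sched() on the destroy side does not return until every such non-preemptible section already executing has finished.)

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Free side: runs entirely with preemption disabled. */
static void example_free(struct kmem_cache *s, void *obj)
{
	preempt_disable();
	/* ... put obj back on a per-cpu freelist of s ... */
	preempt_enable();
}

/* Destroy side: wait out all free sections already in flight. */
static void example_destroy(struct kmem_cache *s)
{
	/*
	 * synchronize_sched() waits for every CPU to pass through a
	 * preemptible state, so any example_free() section that started
	 * before this call has completed by the time it returns.
	 */
	synchronize_sched();
	/* ... now it is safe to tear down s ... */
}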
Subject: slub: reenable preemption before the freeing of slabs from slab_free

I would prefer to call the page allocator with preemption enabled if possible.

Signed-off-by: Christoph Lameter <[email protected]>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-05-29 11:45:32.065859887 -0500
+++ linux/mm/slub.c	2014-06-06 09:45:12.822480834 -0500
@@ -1998,6 +1998,7 @@
 	if (n)
 		spin_unlock(&n->list_lock);
 
+	preempt_enable();
 	while (discard_page) {
 		page = discard_page;
 		discard_page = discard_page->next;
@@ -2006,6 +2007,7 @@
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
+	preempt_disable();
 #endif
 }
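For context, here is roughly how the discard loop reads with the patch applied, assuming (from the hunk offsets and the discard_page/discard_slab()/stat(s, FREE_SLAB) sequence) that the hunks land at the tail of unfreeze_partials() in mm/slub.c of this vintage; treat it as an annotated sketch, not verbatim source:

	if (n)
		spin_unlock(&n->list_lock);	/* node lock dropped, preemption still off */

	preempt_enable();			/* call into the page allocator with preemption on */
	while (discard_page) {
		page = discard_page;
		discard_page = discard_page->next;

		discard_slab(s, page);		/* returns the slab's pages to the buddy allocator */
		stat(s, FREE_SLAB);
	}
	preempt_disable();			/* restore the non-preemptible state the caller expects */

The stated intent is simply to keep the calls into the page allocator out of the preempt-off region, while the final preempt_disable() preserves whatever state the caller set up.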

