Current implementation of bootstrap() is not sufficient for kmem_cache and kmem_cache_node.
First, consider kmem_cache. bootstrap() calls kmem_cache_zalloc() first. When
kmem_cache_zalloc() is called, a slab of kmem_cache is moved to the cpu slab
to satisfy the allocation request for kmem_cache. The current implementation
only walks the n->partial lists, so we miss this cpu slab for kmem_cache.

Second, consider kmem_cache_node. When slab_state is PARTIAL,
create_boot_cache() is called, and a slab of kmem_cache_node is moved to the
cpu slab to satisfy the allocation request for kmem_cache_node. So we miss
this slab as well.

These omissions caused no errors previously, because we normally never free
objects that come from the first slab of kmem_cache or of kmem_cache_node.
The problem is solved by also considering the cpu slab in bootstrap(). This
patch implements that.

v2: don't loop over all processors in bootstrap().

Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>

diff --git a/mm/slub.c b/mm/slub.c
index 7204c74..8b95364 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,10 +3614,15 @@ static int slab_memory_callback(struct notifier_block *self,
 static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 {
 	int node;
+	struct kmem_cache_cpu *c;
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
 	memcpy(s, static_cache, kmem_cache->object_size);
 
+	c = this_cpu_ptr(s->cpu_slab);
+	if (c->page)
+		c->page->slab_cache = s;
+
 	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 		struct page *p;
-- 
1.7.9.5