From: Joonsoo Kim <[email protected]>

There is a bug report that SLAB causes an extreme load average due to
over 2,000 kworker threads.

https://bugzilla.kernel.org/show_bug.cgi?id=172981

This issue is caused by the kmemcg feature, which tries to create a new
set of kmem_caches for each memcg. Recently, kmem_cache creation has
been slowed down by synchronize_sched(), and further kmem_cache
creation is also delayed since kmem_cache creation is serialized by the
global slab_mutex lock. So, the number of kworkers trying to create
kmem_caches increases quickly. synchronize_sched() is needed for
lockless access to the node's shared array, but it's not needed when a
new kmem_cache is created, since in that case there is no old shared
array that a lockless reader could still hold. So, this patch rules out
that case.
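
For reference, the reasoning can be sketched as follows. This is a
simplified illustration, not the actual setup_kmem_cache_node() code;
the function name and the reduced locking are hypothetical, while
old_shared, n->shared and force_change follow the diff context below.

/*
 * Readers access n->shared locklessly with irqs disabled, which acts
 * as an RCU-sched read-side critical section. The updater must
 * therefore wait out those readers with synchronize_sched() before
 * the old array may be freed -- but only if an old array existed at
 * all. A freshly created kmem_cache has no old shared array, so no
 * reader can hold a stale pointer and the grace period can be skipped.
 */
static void setup_node_sketch(struct kmem_cache_node *n,
			      struct array_cache *new_shared,
			      bool force_change)
{
	struct array_cache *old_shared;

	spin_lock_irq(&n->list_lock);
	old_shared = n->shared;
	n->shared = new_shared;		/* published to lockless readers */
	spin_unlock_irq(&n->list_lock);

	/* The condition this patch changes. */
	if (old_shared && force_change)
		synchronize_sched();

	kfree(old_shared);		/* kfree(NULL) is a no-op */
}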

Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
Cc: [email protected]
Reported-by: Doug Smythies <[email protected]>
Tested-by: Doug Smythies <[email protected]>
Signed-off-by: Joonsoo Kim <[email protected]>
---
 mm/slab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6508b4d..3c83c29 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
         * guaranteed to be valid until irq is re-enabled, because it will be
         * freed after synchronize_sched().
         */
-       if (force_change)
+       if (old_shared && force_change)
                synchronize_sched();
 
 fail:
-- 
1.9.1