On Wed, Jan 28, 2015 at 10:33:50AM -0600, Christoph Lameter wrote:
> On Wed, 28 Jan 2015, Vladimir Davydov wrote:
> 
> > @@ -3419,6 +3420,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> >             for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
> >                     list_splice_init(promote + i, &n->partial);
> >
> > +           if (n->nr_partial || slabs_node(s, node))
> 
> The total number of slabs obtained via slabs_node always contains the
> number of partial ones. So no need to check n->nr_partial.

Yeah, right. In addition to that, I misplaced the check: it should go after
discard_slab(), where we decrement nr_slabs (see the snippet below). The
updated patch follows.
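
To make the bookkeeping concrete: slabs_node() reads the per-node nr_slabs
counter, which already accounts for partial slabs, and discard_slab() is
exactly where that counter is decremented. Roughly, from mm/slub.c (the
counters are only maintained with CONFIG_SLUB_DEBUG enabled):

/* Total number of slabs on the node, partial ones included. */
static inline unsigned long slabs_node(struct kmem_cache *s, int node)
{
	struct kmem_cache_node *n = get_node(s, node);

	return atomic_long_read(&n->nr_slabs);
}

static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
{
	struct kmem_cache_node *n = get_node(s, node);

	atomic_long_dec(&n->nr_slabs);
	atomic_long_sub(objects, &n->total_objects);
}

/* Frees an empty slab page and drops it from the per-node counters. */
static void discard_slab(struct kmem_cache *s, struct page *page)
{
	dec_slabs_node(s, page_to_nid(page), page->objects);
	free_slab(s, page);
}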

From: Vladimir Davydov <vdavy...@parallels.com>
Subject: [PATCH] slub: fix kmem_cache_shrink return value

kmem_cache_shrink() is supposed to return 0 if the cache has no remaining
objects and 1 otherwise, but currently it always returns 0. Fix it.

Signed-off-by: Vladimir Davydov <vdavy...@parallels.com>
---
 mm/slub.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index dbf9334b6a5c..5626588db884 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3379,6 +3379,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
        LIST_HEAD(discard);
        struct list_head promote[SHRINK_PROMOTE_MAX];
        unsigned long flags;
+       int ret = 0;
 
        for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
                INIT_LIST_HEAD(promote + i);
@@ -3424,9 +3425,12 @@ int __kmem_cache_shrink(struct kmem_cache *s)
                /* Release empty slabs */
                list_for_each_entry_safe(page, t, &discard, lru)
                        discard_slab(s, page);
+
+               if (slabs_node(s, node))
+                       ret = 1;
        }
 
-       return 0;
+       return ret;
 }
 
 static int slab_mem_going_offline_callback(void *arg)
-- 
1.7.10.4
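
As an illustration of the fixed semantics, a hypothetical caller could gate
cache destruction on the return value. The helper name and policy below are
made up for the example; it only assumes that kmem_cache_shrink() hands back
__kmem_cache_shrink()'s result:

#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical helper, for illustration only. */
static int example_release_cache(struct kmem_cache *cachep)
{
	/*
	 * With the fix, a nonzero return value means the cache still
	 * holds live objects; 0 means everything has been freed.
	 */
	if (kmem_cache_shrink(cachep))
		return -EBUSY;	/* still populated, keep the cache around */

	kmem_cache_destroy(cachep);
	return 0;
}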
