On Thu, 24 Jul 2014, Johannes Weiner wrote:

> > diff --git a/mm/slub.c b/mm/slub.c
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3195,12 +3195,13 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
> >  /*
> >   * Attempt to free all partial slabs on a node.
> >   * This is called from kmem_cache_close(). We must be the last thread
> > - * using the cache and therefore we do not need to lock anymore.
> > + * using the cache, but we still have to lock for lockdep's sake.
> >   */
> >  static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> >  {
> >     struct page *page, *h;
> >  
> > +   spin_lock_irq(&n->list_lock);
> >     list_for_each_entry_safe(page, h, &n->partial, lru) {
> >             if (!page->inuse) {
> >                     __remove_partial(n, page);
> 
> This already uses __remove_partial(), which does not have the lockdep
> assertion.  You even acked the patch that made this change, why add
> the spinlock now?
> 

Yup, thanks.  This was sitting in Pekka's slab/next branch but isn't 
actually needed after commit 1e4dd9461fab ("slub: do not assert not 
having lock in removing freed partial").  Good catch!
