Hi,

On (07/16/15 08:38), Minchan Kim wrote:
> > > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > > index b10a228..824c182 100644
> > > --- a/mm/zsmalloc.c
> > > +++ b/mm/zsmalloc.c
> > > @@ -1811,9 +1811,7 @@ unsigned long zs_pages_to_compact(struct zs_pool *pool)
> > >           if (class->index != i)
> > >                   continue;
> > >  
> > > -         spin_lock(&class->lock);
> > >           pages_to_free += zs_can_compact(class);
> > > -         spin_unlock(&class->lock);
> > >   }
> > >  
> > >   return pages_to_free;
> > 
> > This patch still makes sense. Agree?
> 
> There is already a race window between shrink_count and shrink_slab,
> so it would be okay to return a stale stat after removing the lock,
> as long as the difference is not huge.
> 
> Besides, we currently don't obey the shrinker's nr_to_scan in
> zs_shrinker_scan, so such accuracy would be pointless anyway.

Yeah, the automatic shrinker may run concurrently with a user-triggered
compaction, so it may be hard (time consuming) to release the exact
number of pages that we returned from _count(). We could look at
`sc->nr_to_scan' to avoid releasing more pages than the shrinker asked
us to, but I'd probably prefer to keep the existing behaviour when we
are called by the shrinker.
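
Just to illustrate what I mean, a rough, untested sketch of capping the
scan at `sc->nr_to_scan' (not a real patch; compact_one_class() is a
made-up helper standing in for per-class compaction that returns the
number of pages it released, and locking details are elided):

/*
 * Rough sketch only. compact_one_class() is hypothetical; the real
 * per-class compaction code and its locking will differ.
 */
static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
				struct shrink_control *sc)
{
	int i;
	unsigned long freed = 0;
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
				shrinker);

	for (i = zs_size_classes - 1; i >= 0; i--) {
		struct size_class *class = pool->size_class[i];

		if (!class || class->index != i)
			continue;

		freed += compact_one_class(pool, class);

		/* stop once the shrinker's request is satisfied */
		if (freed >= sc->nr_to_scan)
			break;
	}

	return freed ? freed : SHRINK_STOP;
}

But, as said above, I'd rather keep the current behaviour for the
shrinker path for now.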

OK, will resend later today.

        -ss