Hi Huang,

On Tue, Jan 03, 2017 at 01:43:43PM +0800, Huang, Ying wrote:
> Hi, Minchan,
> 
> Minchan Kim <minc...@kernel.org> writes:
> 
> > Hi Jan,
> >
> > On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
> >> Hi,
> >> 
> >> On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> >> > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> >> > >         the rate at which we have to contend for the radix tree.
> >> > 
> >> > To me, it's rather hacky. I think it might be a common problem for the
> >> > page cache too, so can we think about a more generalized way, like a
> >> > range lock? Ccing Jan.
> >> 
> >> I agree on the hackiness of the patch and that page cache would suffer with
> >> the same contention (although the files are usually smaller than swap so it
> >> would not be that visible I guess). But I don't see how range lock would
> >> help here - we need to serialize modifications of the tree structure itself
> >> and that is difficult to achieve with the range lock. So what you would
> >> need is either a different data structure for tracking swap cache entries
> >> or a finer grained locking of the radix tree.
> >
> > Thanks for the comment, Jan.
> >
> > I think there are more general options. One is to batch pages during
> > shrinking, as Mel and Tim approached:
> >
> > https://patchwork.kernel.org/patch/9008421/
> > https://patchwork.kernel.org/patch/9322793/
> 
> This helps to reduce the lock contention on the radix tree of the swap
> cache, but splitting the swap cache has much better performance.  So we
> switched from that solution to the current one.
> 
> > Or concurrent page cache by peter.
> >
> > https://www.kernel.org/doc/ols/2007/ols2007v2-pages-311-318.pdf
> 
> I think this is good; it helps both the swap cache and the file cache.
> But I don't know whether other people want to go this way, or how much
> effort would be needed.
> 
> In contrast, splitting the swap cache is quite simple to implement and
> review.  And the effect is good.

I think the general approach is better, but I don't want to be a party pooper
if everyone is okay with this. I just wanted to point out that we need to
consider a more general approach, and I did my best.

Decision depends on you guys.

Thanks.
