Hi,
On Thu, Jan 05, 2017 at 09:33:55AM +0800, Huang, Ying wrote:
> Hi, Minchan,
>
> Minchan Kim writes:
> [snip]
> >
> > The patchset has used several techniques to reduce lock contention, for
> > example, batching alloc/free, fine-grained locking, and cluster
> > distribution to avoid cache false-sharing.
Minchan Kim writes:
> Hi,
>
> On Thu, Jan 05, 2017 at 09:33:55AM +0800, Huang, Ying wrote:
>> Hi, Minchan,
>>
>> Minchan Kim writes:
>> [snip]
>> >
>> > The patchset has used several techniques to reduce lock contention, for
>> > example, batching alloc/free, fine-grained locking, and cluster
>> > distribution to avoid cache false-sharing.
Hi Huang,
On Tue, Jan 03, 2017 at 01:43:43PM +0800, Huang, Ying wrote:
> Hi, Minchan,
>
> Minchan Kim writes:
>
> > Hi Jan,
> >
> > On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
> >> Hi,
> >>
> >> On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> >> > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> >> > > the rate that we have to contend for the radix tree.
Hi, Minchan,
Minchan Kim writes:
[snip]
>
> The patchset has used several techniques to reduce lock contention, for
> example, batching alloc/free, fine-grained locking, and cluster distribution
> to avoid cache false-sharing. Each item has different complexity and
> benefits, so could you show
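To ground the techniques named above, here is a minimal user-space sketch in C
with pthreads, not the patchset's code: each fixed-size cluster of swap slots
gets its own spinlock, and a caller takes a small batch of slots per lock
acquisition instead of locking once per slot. All identifiers here
(SLOTS_PER_CLUSTER, SWAP_BATCH, alloc_batch_from_cluster, ...) are invented for
illustration.

/* Illustrative model only: per-cluster fine-grained locking plus batched
 * slot allocation; not the kernel code. */
#include <pthread.h>
#include <stddef.h>

#define SLOTS_PER_CLUSTER 256
#define NR_CLUSTERS       1024
#define SWAP_BATCH        64        /* slots handed out per lock hold */

struct cluster {
    pthread_spinlock_t lock;                /* protects only this cluster */
    unsigned char map[SLOTS_PER_CLUSTER];   /* 0 = free, 1 = in use */
};

static struct cluster clusters[NR_CLUSTERS];

void clusters_init(void)
{
    for (size_t i = 0; i < NR_CLUSTERS; i++)
        pthread_spin_init(&clusters[i].lock, PTHREAD_PROCESS_PRIVATE);
}

/* Grab up to 'want' free slots from one cluster under a single lock
 * acquisition; callers working in different clusters take different locks
 * and touch disjoint regions of the map. */
size_t alloc_batch_from_cluster(size_t ci, long *out, size_t want)
{
    struct cluster *c = &clusters[ci];
    size_t got = 0;

    if (want > SWAP_BATCH)
        want = SWAP_BATCH;

    pthread_spin_lock(&c->lock);
    for (size_t i = 0; i < SLOTS_PER_CLUSTER && got < want; i++) {
        if (c->map[i] == 0) {
            c->map[i] = 1;
            out[got++] = (long)(ci * SLOTS_PER_CLUSTER + i);
        }
    }
    pthread_spin_unlock(&c->lock);
    return got;
}

The "cluster distribution" named in the quote is the policy of keeping
concurrent allocators on different clusters; the sketch leaves the choice of
cluster index to the caller.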
On Tue, 2017-01-03 at 13:34 +0900, Minchan Kim wrote:
> Hi Jan,
>
> On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
> >
> > Hi,
> >
> > On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> > >
> > > >
> > > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> > > > the rate that we have to contend for the radix tree.
Hi, Minchan,
Minchan Kim writes:
> Hi Jan,
>
> On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
>> Hi,
>>
>> On Tue 27-12-16 16:45:03, Minchan Kim wrote:
>> > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
>> > > the rate that we have to contend for the radix tree.
Hi Jan,
On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
> Hi,
>
> On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> > > the rate that we have to contend for the radix tree.
> >
> > To me, it's rather hacky.
Hi,
On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> > the rate that we have to contend for the radix tree.
>
> To me, it's rather hacky. I think it might be a common problem for the page
> cache, so can we think of another generic
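Since patch 3 is what the disagreement above is about, here is a toy C sketch
of the range-partitioning idea it describes, with nothing taken from the actual
implementation: page offsets are grouped into 64MB chunks (2^14 entries of 4KB
pages), and each chunk owns its own tree root and lock, so lookups for offsets
in different chunks serialize on different locks. swap_chunks[], chunk_for(),
and with_chunk_locked() are hypothetical names.

/* Toy model of splitting one big swap-cache index into per-64MB-chunk
 * trees, each with its own lock; not the kernel implementation. */
#include <pthread.h>
#include <stddef.h>

#define CHUNK_PAGES_SHIFT 14          /* 64MB / 4KB pages = 2^14 entries */
#define NR_CHUNKS         64          /* arbitrary for the sketch */

struct swap_chunk {
    pthread_mutex_t lock;             /* guards only this chunk's tree */
    void *tree_root;                  /* stand-in for a per-chunk radix tree */
};

static struct swap_chunk swap_chunks[NR_CHUNKS];

void swap_chunks_init(void)
{
    for (size_t i = 0; i < NR_CHUNKS; i++)
        pthread_mutex_init(&swap_chunks[i].lock, NULL);
}

/* Pick the chunk that owns a given page offset. */
static struct swap_chunk *chunk_for(unsigned long page_offset)
{
    return &swap_chunks[(page_offset >> CHUNK_PAGES_SHIFT) % NR_CHUNKS];
}

/* Callers lock only the owning chunk, so offsets that fall in different
 * chunks never serialize on the same lock. */
void with_chunk_locked(unsigned long page_offset,
                       void (*op)(struct swap_chunk *))
{
    struct swap_chunk *c = chunk_for(page_offset);

    pthread_mutex_lock(&c->lock);
    op(c);                            /* operate on c->tree_root */
    pthread_mutex_unlock(&c->lock);
}

Partitioning by offset keeps the change local to the swap cache; Jan's point
above is that a more generic fix at the page cache level might serve both.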
Minchan Kim writes:
> On Wed, Dec 28, 2016 at 11:31:06AM +0800, Huang, Ying wrote:
>
> < snip >
>
>> >>> > Frankly speaking, although I'm a huge user of bit_spin_lock (zram/zsmalloc
>> >>> > have used it heavily), I don't like the swap subsystem using it.
>> >>> > During zram development, it really hurt debugging due to losing
On Wed, Dec 28, 2016 at 11:31:06AM +0800, Huang, Ying wrote:
< snip >
> >>> > Frankly speaking, although I'm a huge user of bit_spin_lock (zram/zsmalloc
> >>> > have used it heavily), I don't like the swap subsystem using it.
> >>> > During zram development, it really hurt debugging due to losing
> >>>
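For anyone following along, bit_spin_lock() is the primitive being objected to
here: it spins on a single bit of an existing word, which avoids embedding a
full spinlock_t but leaves the lock without lockdep coverage or any owner/debug
state. Below is a hedged kernel-style sketch of the usual pattern; struct
packed_entry and ENTRY_LOCK_BIT are invented, and nothing here is taken from
zram, zsmalloc, or this patchset.

/* Kernel-style sketch of the bit_spin_lock() usage pattern; illustration
 * only. */
#include <linux/bit_spinlock.h>

#define ENTRY_LOCK_BIT  0   /* hypothetical: bit 0 of ->flags is the lock */

struct packed_entry {
    unsigned long flags;    /* the lock bit lives inside this word */
    unsigned long value;
};

static void packed_entry_set(struct packed_entry *e, unsigned long v)
{
    /*
     * Serializes writers without a separate spinlock_t, which is why it
     * is attractive when per-object memory is tight, but the lock is
     * invisible to lockdep and carries no owner information for
     * debugging.
     */
    bit_spin_lock(ENTRY_LOCK_BIT, &e->flags);
    e->value = v;
    bit_spin_unlock(ENTRY_LOCK_BIT, &e->flags);
}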
"Huang, Ying" writes:
> Minchan Kim writes:
>
>> Hi Huang,
>>
>> On Wed, Dec 28, 2016 at 09:54:27AM +0800, Huang, Ying wrote:
>>
>> < snip >
>>
>>> > The patchset has used several techniques to reduce lock contention, for
>>> > example, batching alloc/free, fine-grained locking, and cluster
>>> > distribution to avoid cache false-sharing.
Minchan Kim writes:
> Hi Huang,
>
> On Wed, Dec 28, 2016 at 09:54:27AM +0800, Huang, Ying wrote:
>
> < snip >
>
>> > The patchset has used several techniques to reduce lock contention, for
>> > example, batching alloc/free, fine-grained locking, and cluster
>> > distribution to avoid cache false-sharing.
>>
Hi Huang,
On Wed, Dec 28, 2016 at 09:54:27AM +0800, Huang, Ying wrote:
< snip >
> > The patchset has used several techniques to reduce lock contention, for
> > example, batching alloc/free, fine-grained locking, and cluster distribution
> > to avoid cache false-sharing. Each item has different complexity and
> > benefits, so could you show
Hi, Minchan,
Minchan Kim writes:
> Hi,
>
> On Fri, Dec 09, 2016 at 01:09:13PM -0800, Tim Chen wrote:
>> Change Log:
>> v4:
>> 1. Fix a bug when unlocking a cluster in add_swap_count_continuation(). We
>> should use unlock_cluster() instead of unlock_cluster_or_swap_info().
>> 2. During swap off, handle
Hi,
On Fri, Dec 09, 2016 at 01:09:13PM -0800, Tim Chen wrote:
> Change Log:
> v4:
> 1. Fix a bug when unlocking a cluster in add_swap_count_continuation(). We
> should use unlock_cluster() instead of unlock_cluster_or_swap_info().
> 2. During swap off, handle the race when a swap slot is marked unused but alloc
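For context on item 1 of the changelog, here is a hedged sketch of the pairing
rule the fix describes, written roughly as it might look inside mm/swapfile.c.
The helper names (lock_cluster(), unlock_cluster(),
unlock_cluster_or_swap_info()) come from the patchset under discussion; the
function body, the swap_map access, and the assumption that the _or_swap_info
variants fall back to the per-device swap_info lock when a device has no
cluster info are mine, for illustration only.

#include <linux/swap.h>     /* struct swap_info_struct, swap_cluster_info */

/* Sketch only: the unlock helper must match the lock helper used on the
 * way in. */
static void adjust_count_under_cluster_lock(struct swap_info_struct *si,
                                            unsigned long offset)
{
    struct swap_cluster_info *ci;

    ci = lock_cluster(si, offset);  /* takes just the per-cluster lock */

    si->swap_map[offset]++;         /* placeholder for the real update */

    /*
     * The v4 fix: release with unlock_cluster() to pair with
     * lock_cluster() above; unlock_cluster_or_swap_info() may instead
     * drop si->lock on devices without cluster info, unbalancing a lock
     * this path did not take through that helper.
     */
    unlock_cluster(ci);
}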
15 matches