On Fri, May 29, 2020 at 08:50:25AM -0700, Andi Kleen wrote:
> 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
> -	if (ret == 0 && write)
> +	if (ret == 0 && write) {
> +		if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> +			schedule_on_each_cpu(sync_overcommit_as);
The schedule is not atomic.
On Thu, May 28, 2020 at 11:21:36PM +0800, Kleen, Andi wrote:
>
>
> >If it's true, then there could be 2 solutions, one is to skip the WARN_ONCE
> >as it has no practical value, as the real check is the following code, the
> >other is to rectify the percpu counter when the policy is changing to
> >OVERCOMMIT_NEVER.
On Thu 28-05-20 23:10:20, Feng Tang wrote:
[...]
> If it's true, then there could be 2 solutions, one is to
> skip the WARN_ONCE as it has no practical value, as the real
> check is the following code, the other is to rectify the
> percpu counter when the policy is changing to OVERCOMMIT_NEVER.
I
>If it's true, then there could be 2 solutions, one is to skip the WARN_ONCE as
>it has no practical value, as the real check is the following code, the other
>is to rectify the percpu counter when the policy is changing to
>OVERCOMMIT_NEVER.
I think it's better to fix it up when the policy is changing to OVERCOMMIT_NEVER.
On Thu, May 28, 2020 at 10:18:02AM -0400, Qian Cai wrote:
> > > I have reproduced this on both AMD and Intel. The test is just
> > > allocating memory and swapping.
> > >
> > > https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/oom/oom01.c
> > > https://github.com/linux-t
On Wed, May 27, 2020 at 06:46:06PM +0800, Feng Tang wrote:
> Hi Qian,
>
> On Tue, May 26, 2020 at 10:25:39PM -0400, Qian Cai wrote:
> > > > > > [1] https://lkml.org/lkml/2020/3/5/57
> > > > >
> > > > > Reverting this series fixed a warning under memory pressure.
> > > >
> > > > Andrew, Stephen, can you drop this series?
On Wed, May 27, 2020 at 09:33:32PM +0800, Feng Tang wrote:
> Hi Qian,
>
> On Wed, May 27, 2020 at 08:05:49AM -0400, Qian Cai wrote:
Hi Qian,
On Tue, May 26, 2020 at 10:25:39PM -0400, Qian Cai wrote:
> > > > > [1] https://lkml.org/lkml/2020/3/5/57
> > > >
> > > > Reverting this series fixed a warning under memory pressure.
> > >
> > > Andrew, Stephen, can you drop this series?
> > >
> > > >
> > > > [ 3319.257898] LTP: starting
On Wed, May 27, 2020 at 09:46:47AM +0800, Feng Tang wrote:
> Hi Qian,
>
> On Tue, May 26, 2020 at 02:14:59PM -0400, Qian Cai wrote:
> > On Thu, May 21, 2020 at 05:27:26PM -0400, Qian Cai wrote:
> > > On Fri, May 08, 2020 at 03:25:14PM +0800, Feng Tang wrote:
> > > > When checking a performance change
On Tue, May 26, 2020 at 06:14:13PM -0700, Andi Kleen wrote:
> On Tue, May 26, 2020 at 02:14:59PM -0400, Qian Cai wrote:
> > On Thu, May 21, 2020 at 05:27:26PM -0400, Qian Cai wrote:
> > > On Fri, May 08, 2020 at 03:25:14PM +0800, Feng Tang wrote:
> > > > When checking a performance change for will-it-scale
On Thu, May 21, 2020 at 05:27:26PM -0400, Qian Cai wrote:
> On Fri, May 08, 2020 at 03:25:14PM +0800, Feng Tang wrote:
> > When checking a performance change for will-it-scale scalability
> > mmap test [1], we found very high lock contention for spinlock of
> > percpu counter 'vm_committed_as':
> >
On Fri, May 08, 2020 at 03:25:14PM +0800, Feng Tang wrote:
> When checking a performance change for will-it-scale scalability
> mmap test [1], we found very high lock contention for spinlock of
> percpu counter 'vm_committed_as':
>
> 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock