We tried to comment out those lines and it does not seem to get rid of the
performance regression we are seeing.

Can you elaborate a bit more on the test you are performing, and on what kind
of resources it uses?

I am running 1 and 2 Oracle DB instances, each running a TPC-C workload. The
clients driving
On Fri, Mar 29, 2019 at 03:23:14PM -0700, Subhra Mazumdar wrote:
>
> On 3/29/19 6:35 AM, Julien Desfossez wrote:
> > On Fri, Mar 22, 2019 at 8:09 PM Subhra Mazumdar wrote:
> > > Is the core wide lock primarily responsible for the regression? I ran
> > > up to patch 12, which also has the core wide lock for tagged cgroups
> > > and also calls newidle_balance() from pick_next_task(). I don't see
> > > any regression. Of course the core sched version of pick_next_task()
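As a concrete reference for the path being compared above, here is a rough
sketch, not the actual patch 12, of the structure being referenced; the
pick_task() helper name is an assumption, not something from this thread.
The point is that the whole pick path, including the idle-balance fallback,
runs under whatever lock rq_lockp() resolves to, which is the core wide lock
when a cgroup is tagged:

static struct task_struct *
pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
        struct task_struct *p;

        lockdep_assert_held(rq_lockp(rq));      /* core wide when tagged */

        p = pick_task(rq);                      /* assumed per-class pick helper */
        if (!p) {
                /*
                 * Nothing runnable: try to pull work before going idle,
                 * then pick again. All of this holds the (possibly
                 * shared) lock returned by rq_lockp().
                 */
                newidle_balance(rq, rf);
                p = pick_task(rq);
        }
        return p;
}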
On 3/22/19 5:06 PM, Subhra Mazumdar wrote:
> On 3/21/19 2:20 PM, Julien Desfossez wrote:
> > On further investigation, we could see that the contention is mostly in
> > the way rq locks are taken. With this patchset, we lock the whole core
> > if cpu.tag is set for at least one cgroup.
On 3/22/19 4:28 PM, Tim Chen wrote:
> On 3/19/19 7:29 PM, Subhra Mazumdar wrote:
> >
> > On 3/18/19 8:41 AM, Julien Desfossez wrote:
> > > The case where we try to acquire the lock on 2 runqueues belonging to
> > > 2 different cores requires the rq_lockp wrapper as well, otherwise we
> > > frequently deadlock in there.
> > >
> > > This fixes the crash reported in
> > > 1552577311-8218-1-git-send-email-jdesfos...@digitalocean.com
On Fri, Mar 22, 2019 at 9:34 AM Peter Zijlstra wrote:
> On Thu, Mar 21, 2019 at 05:20:17PM -0400, Julien Desfossez wrote:
> > On further investigation, we could see that the contention is mostly in
> > the way rq locks are taken. With this patchset, we lock the whole core
> > if cpu.tag is set for at least one cgroup. Due to this, __schedule() is
> > more or less serialized for the whole core.
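To illustrate why the contention shows up in how rq locks are taken, here is
a minimal sketch of the core wide locking idea; the field names (__lock,
core, core_enabled) are assumptions and may differ from the actual patchset.
Every SMT sibling's runqueue resolves to one shared lock whenever a tagged
cgroup is active, so __schedule() on either sibling contends on the same
lock:

struct rq {
        raw_spinlock_t  __lock;         /* this runqueue's own lock */
        struct rq       *core;          /* leader rq of the SMT core */
        bool            core_enabled;   /* core scheduling (cpu.tag) active */
        /* ... */
};

static inline raw_spinlock_t *rq_lockp(struct rq *rq)
{
        if (rq->core_enabled)
                return &rq->core->__lock;       /* one lock for the core */
        return &rq->__lock;                     /* normal per-cpu lock */
}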
On 3/18/19 8:41 AM, Julien Desfossez wrote:
The case where we try to acquire the lock on 2 runqueues belonging to 2
different cores requires the rq_lockp wrapper as well, otherwise we
frequently deadlock in there.

This fixes the crash reported in
1552577311-8218-1-git-send-email-jdesfos...@digitalocean.com

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
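A hedged sketch of the kind of fix being described here; the function name
follows the kernel's double_rq_lock(), but this is not the exact patch.
Because two runqueues can now share one underlying lock, the code must
compare rq_lockp() results, take a shared lock only once, and acquire
distinct locks in a fixed address order so two CPUs can never lock the pair
in opposite orders and deadlock:

static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
        if (rq_lockp(rq1) == rq_lockp(rq2)) {
                raw_spin_lock(rq_lockp(rq1));   /* same core: one shared lock */
                return;
        }
        if (rq_lockp(rq1) < rq_lockp(rq2)) {    /* fixed order: lower address first */
                raw_spin_lock(rq_lockp(rq1));
                raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
        } else {
                raw_spin_lock(rq_lockp(rq2));
                raw_spin_lock_nested(rq_lockp(rq1), SINGLE_DEPTH_NESTING);
        }
}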
On Tue, Feb 19, 2019 at 05:22:50PM +0100 Peter Zijlstra wrote:
> On Tue, Feb 19, 2019 at 11:13:43AM -0500, Phil Auld wrote:
> > On Mon, Feb 18, 2019 at 05:56:23PM +0100 Peter Zijlstra wrote:
> > > In preparation of playing games with rq->lock, abstract the thing
> > > using an accessor.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel)
> >
> > Hi Peter,
> >
> > Sorry... what tree are these for? They don't apply to mainline.
> > Some branch on tip, I guess.
In preparation of playing games with rq->lock, abstract the thing
using an accessor.

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/sched/core.c     | 44 ++--
 kernel/sched/deadline.c | 18
 kernel/sched/debug.c    |  4 -
 kernel/sched/fair.c     | 41 +-
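For readers not following the tree in question, a minimal sketch of what the
accessor abstraction amounts to at this stage, assuming the lock field keeps
its current rq->lock name; the example_caller() function is hypothetical:

static inline raw_spinlock_t *rq_lockp(struct rq *rq)
{
        return &rq->lock;       /* identity mapping, no behavior change yet */
}

static void example_caller(struct rq *rq)
{
        raw_spin_lock(rq_lockp(rq));    /* was: raw_spin_lock(&rq->lock) */
        /* ... play games with rq state ... */
        raw_spin_unlock(rq_lockp(rq));
}

Once every call site goes through rq_lockp(), a later patch can redirect the
accessor, for example to a core wide lock, without touching the callers.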