Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-09-15 Thread Joel Fernandes
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote: > From: Peter Zijlstra > > Instead of only selecting a local task, select a task for all SMT > siblings for every reschedule on the core (irrespective of which logical > CPU does the reschedule). > > During a CPU hotplug event,

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-09-01 Thread Joel Fernandes
On Tue, Sep 1, 2020 at 5:23 PM Vineeth Pillai wrote: > > Also, Peter said pick_seq is for core-wide picking. If you want to add > > another semantic, then maybe add another counter which has a separate > > meaning and justify why you are adding it. > I think just one counter is enough. Unless,

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-09-01 Thread Vineeth Pillai
Hi Joel, On 9/1/20 1:30 PM, Joel Fernandes wrote: I think we can come here when the hotplug thread is scheduled during onlining, but the mask is not yet updated. We can probably add it with this comment as well. I don't see how that is possible, because the cpuhp threads run during the CPU onlining

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-09-01 Thread Joel Fernandes
Hi Vineeth, On Tue, Sep 01, 2020 at 08:34:23AM -0400, Vineeth Pillai wrote: > Hi Joel, > > On 9/1/20 1:10 AM, Joel Fernandes wrote: > > 3. The 'Rescheduling siblings' loop of pick_next_task() is quite fragile. It > > calls various functions on rq->core_pick which could very well be NULL > >

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-09-01 Thread Vineeth Pillai
Hi Joel, On 9/1/20 1:10 AM, Joel Fernandes wrote: 3. The 'Rescheduling siblings' loop of pick_next_task() is quite fragile. It calls various functions on rq->core_pick which could very well be NULL because: An online sibling might have gone offline before a task could be picked for it, or it

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-31 Thread Joel Fernandes
On Sat, Aug 29, 2020 at 09:47:19AM +0200, pet...@infradead.org wrote: > On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote: > > On 8/28/20 4:51 PM, Peter Zijlstra wrote: > > > > So where do things go side-ways? > > > During hotplug stress test, we have noticed that while a sibling is

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-31 Thread Joel Fernandes
Hi Peter, On Sat, Aug 29, 2020 at 09:47:19AM +0200, pet...@infradead.org wrote: > On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote: > > On 8/28/20 4:51 PM, Peter Zijlstra wrote: > > > > So where do things go side-ways? > > > During hotplug stress test, we have noticed that while a

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-31 Thread Vineeth Pillai
On 8/29/20 3:47 AM, pet...@infradead.org wrote: During hotplug stress test, we have noticed that while a sibling is in pick_next_task, another sibling can go offline or come online. What we have observed is that smt_mask gets updated underneath us even if we hold the lock. From reading the code,

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-29 Thread peterz
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote: > On 8/28/20 4:51 PM, Peter Zijlstra wrote: > > So where do things go side-ways? > During hotplug stress test, we have noticed that while a sibling is in > pick_next_task, another sibling can go offline or come online. What > we

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Joel Fernandes
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote: [...] > > Can we please split out this hotplug 'fix' into a separate patch with a > > coherent changelog. > Sorry about this. I had posted this as separate patches in v6 list, > but merged it for v7. Will split it and have details

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Vineeth Pillai
On 8/28/20 4:55 PM, Peter Zijlstra wrote:
> +	if (is_idle_task(rq_i->core_pick) && rq_i->nr_running)
> +		rq_i->core_forceidle = true;
Did you mean: rq_i->core_pick == rq_i->idle ?

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Vineeth Pillai
On 8/28/20 4:51 PM, Peter Zijlstra wrote: cpumask_weight() is fairly expensive, esp. for something that should 'never' happen. What exactly is the race here? We'll update the cpu_smt_mask() fairly early in secondary bringup, but where does it become a problem? The moment the new thread

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Peter Zijlstra
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> +	if (is_idle_task(rq_i->core_pick) && rq_i->nr_running)
> +		rq_i->core_forceidle = true;
Did you mean: rq_i->core_pick == rq_i->idle ? is_idle_task() will also match idle-injection threads, which

Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Peter Zijlstra
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> +	smt_weight = cpumask_weight(smt_mask);
> +	for_each_cpu_wrap_or(i, smt_mask, cpumask_of(cpu), cpu) {
> +		struct rq *rq_i = cpu_rq(i);
> +		struct task_struct *p;
> +

[RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

2020-08-28 Thread Julien Desfossez
From: Peter Zijlstra Instead of only selecting a local task, select a task for all SMT siblings for every reschedule on the core (irrespective of which logical CPU does the reschedule). During a CPU hotplug event, schedule would be called with the hotplugged CPU not in the cpumask. So use