On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> From: Peter Zijlstra
>
> Instead of only selecting a local task, select a task for all SMT
> siblings for every reschedule on the core (irrespective of which logical
> CPU does the reschedule).
>
> During a CPU hotplug event, schedule would be called with the hotplugged
> CPU not in the cpumask.
On Tue, Sep 1, 2020 at 5:23 PM Vineeth Pillai
wrote:
> > Also, Peter said pick_seq is for core-wide picking. If you want to add
> > another semantic, then maybe add another counter which has a separate
> > meaning and justify why you are adding it.
> I think just one counter is enough. Unless,
Hi Joel,
On 9/1/20 1:30 PM, Joel Fernandes wrote:
I think we can come here when the hotplug thread is scheduled during onlining, but
the mask is not yet updated. We can probably add it with this comment as well.
I don't see how that is possible. Because the cpuhp threads run during the
CPU onlining
Hi Vineeth,
On Tue, Sep 01, 2020 at 08:34:23AM -0400, Vineeth Pillai wrote:
> Hi Joel,
>
> On 9/1/20 1:10 AM, Joel Fernandes wrote:
> > 3. The 'Rescheduling siblings' loop of pick_next_task() is quite fragile. It
> > calls various functions on rq->core_pick which could very well be NULL
> >
Hi Joel,
On 9/1/20 1:10 AM, Joel Fernandes wrote:
3. The 'Rescheduling siblings' loop of pick_next_task() is quite fragile. It
calls various functions on rq->core_pick which could very well be NULL because:
An online sibling might have gone offline before a task could be picked for it,
or it
On Sat, Aug 29, 2020 at 09:47:19AM +0200, pet...@infradead.org wrote:
> On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote:
> > On 8/28/20 4:51 PM, Peter Zijlstra wrote:
>
> > > So where do things go side-ways?
>
> > During hotplug stress test, we have noticed that while a sibling is
Hi Peter,
On 8/29/20 3:47 AM, pet...@infradead.org wrote:
During hotplug stress test, we have noticed that while a sibling is in
pick_next_task, another sibling can go offline or come online. What
we have observed is that smt_mask gets updated underneath us even while
we hold the lock. From reading the code,
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote:
> On 8/28/20 4:51 PM, Peter Zijlstra wrote:
> > So where do things go side-ways?
> During hotplug stress test, we have noticed that while a sibling is in
> pick_next_task, another sibling can go offline or come online. What
> we
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote:
[...]
> > Can we please split out this hotplug 'fix' into a separate patch with a
> > coherent changelog.
> Sorry about this. I had posted these as separate patches in the v6 series,
> but merged them for v7. Will split it and have details
On 8/28/20 4:55 PM, Peter Zijlstra wrote:
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
+ if (is_idle_task(rq_i->core_pick) && rq_i->nr_running)
+ rq_i->core_forceidle = true;
Did you mean: rq_i->core_pick == rq_i->idle ?
On 8/28/20 4:51 PM, Peter Zijlstra wrote:
cpumask_weight() is fairly expensive, esp. for something that should
'never' happen.
What exactly is the race here?
We'll update the cpu_smt_mask() fairly early in secondary bringup, but
where does it become a problem?
The moment the new thread
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> + if (is_idle_task(rq_i->core_pick) && rq_i->nr_running)
> + rq_i->core_forceidle = true;
Did you mean: rq_i->core_pick == rq_i->idle ?
is_idle_task() will also match idle-injection threads, which
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> + smt_weight = cpumask_weight(smt_mask);
> + for_each_cpu_wrap_or(i, smt_mask, cpumask_of(cpu), cpu) {
> + struct rq *rq_i = cpu_rq(i);
> + struct task_struct *p;
> +
> +
From: Peter Zijlstra
Instead of only selecting a local task, select a task for all SMT
siblings for every reschedule on the core (irrespective of which logical
CPU does the reschedule).
During a CPU hotplug event, schedule would be called with the hotplugged
CPU not in the cpumask. So use