On Fri, 24 Jul 2020 at 01:39, Jiang Biao <humjb_1...@163.com> wrote:
>
> From: Jiang Biao <benbji...@tencent.com>
>
> Sched-idle CPU has been considered in select_idle_cpu and
> select_idle_smt, it also needs to be considered in select_idle_core to
> be consistent and keep the same *idle* policy.
In the case of select_idle_core, we are looking for a core that is
fully idle, but if one CPU of the core is running a sched_idle task,
the core will not be idle, and we might end up having the wakeup task
on one CPU and a sched_idle task on another CPU of the same core,
which is not what we want.

> Signed-off-by: Jiang Biao <benbji...@tencent.com>
> ---
>  kernel/sched/fair.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 04fa8dbcfa4d..f430a9820d08 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6014,7 +6014,7 @@ void __update_idle_core(struct rq *rq)
>  		if (cpu == core)
>  			continue;
>
> -		if (!available_idle_cpu(cpu))
> +		if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
>  			goto unlock;
>  	}
>
> @@ -6045,7 +6045,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>  		bool idle = true;
>
>  		for_each_cpu(cpu, cpu_smt_mask(core)) {
> -			if (!available_idle_cpu(cpu)) {
> +			if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu)) {
>  				idle = false;
>  				break;
>  			}
> --
> 2.21.0