Hello Mel, Peter,

On Tue, Jan 19, 2021 at 11:22:11AM +0000, Mel Gorman wrote:
> From: Peter Zijlstra (Intel) <pet...@infradead.org>
>
> Both select_idle_core() and select_idle_cpu() do a loop over the same
> cpumask. Observe that by clearing the already visited CPUs, we can
> fold the iteration and iterate a core at a time.
>
> All we need to do is remember any idle CPU we encountered while
> scanning for an idle core. This way we'll only iterate every CPU once.
>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> ---
>  kernel/sched/fair.c | 101 ++++++++++++++++++++++++++------------------
>  1 file changed, 61 insertions(+), 40 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 12e08da90024..822851b39b65 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
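For my own understanding of the folded iteration described above, here
is a minimal standalone sketch of the idea (toy C, not the kernel code:
NR_CPUS, SMT_SIBLINGS, cpu_idle() and scan_for_idle() are made-up
stand-ins, and CPUs are just bits in a 64-bit mask). Please correct me
if I have the semantics wrong:

	#include <stdint.h>
	#include <stdio.h>

	#define NR_CPUS		8
	#define SMT_SIBLINGS	2
	#define NO_CPU		-1

	/* Which CPUs are currently idle (bit set == idle). */
	static uint64_t idle_mask;

	static int cpu_idle(int cpu)
	{
		return (int)((idle_mask >> cpu) & 1);
	}

	/*
	 * One pass over 'cpus': visit a core at a time, clear every
	 * sibling as it is inspected, and remember one idle CPU seen in
	 * a core that was not fully idle. Each CPU is looked at exactly
	 * once.
	 */
	static int scan_for_idle(uint64_t cpus)
	{
		int idle_cpu = NO_CPU;

		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!((cpus >> cpu) & 1))
				continue;	/* already visited */

			int core = cpu - (cpu % SMT_SIBLINGS);
			int core_idle = 1;

			for (int s = core; s < core + SMT_SIBLINGS; s++) {
				cpus &= ~(1ULL << s);	/* mark visited */
				if (cpu_idle(s))
					idle_cpu = s;	/* fallback */
				else
					core_idle = 0;
			}

			if (core_idle)
				return core;	/* fully idle core wins */
		}

		return idle_cpu;	/* an idle CPU, or NO_CPU */
	}

	int main(void)
	{
		/* CPUs 2 and 5 idle: neither core 2-3 nor core 4-5 is
		 * fully idle, so the scan falls back to a lone idle CPU. */
		idle_mask = (1ULL << 2) | (1ULL << 5);
		printf("picked cpu %d\n",
		       scan_for_idle((1ULL << NR_CPUS) - 1));
		return 0;
	}

(The actual patch threads the remembered CPU through select_idle_core()
via the &idle_cpu argument rather than a global, of course.)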
[..snip..]

> @@ -6157,18 +6169,31 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  	}
>
>  	for_each_cpu_wrap(cpu, cpus, target) {
> -		if (!--nr)
> -			return -1;
> -		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> -			break;
> +		if (smt) {
> +			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> +			if ((unsigned int)i < nr_cpumask_bits)
> +				return i;
> +
> +		} else {
> +			if (!--nr)
> +				return -1;
> +			i = __select_idle_cpu(cpu);
> +			if ((unsigned int)i < nr_cpumask_bits) {
> +				idle_cpu = i;
> +				break;
> +			}
> +		}
>  	}
>
> -	if (sched_feat(SIS_PROP)) {
> +	if (smt)
> +		set_idle_cores(this, false);

Shouldn't we set_idle_cores(false) only if this was the last idle core
in the LLC?

--
Thanks and Regards
gautham.
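P.S. To make the question concrete, here is how I read the patched
control flow around the hint (toy C with made-up names; 'hint' stands
for the per-LLC has_idle_cores flag that set_idle_cores() updates, and
find_idle_core() stands in for the core scan):

	#include <stdbool.h>
	#include <stdio.h>

	static bool hint = true;	/* per-LLC "has idle cores" flag */

	static int find_idle_core(void)	/* stand-in for the core scan */
	{
		return -1;		/* pretend: no idle core left */
	}

	static int scan(bool smt)
	{
		if (smt) {
			int core = find_idle_core();
			if (core >= 0)
				return core;	/* hint stays set here */
			hint = false;	/* full scan failed: clear hint */
		}
		return -1;
	}

	int main(void)
	{
		/*
		 * The hint is cleared only after an unsuccessful full
		 * scan; if scan() hands out the last idle core, the hint
		 * stays true until some later scan fails -- hence the
		 * question above.
		 */
		int cpu = scan(true);

		printf("scan=%d hint=%d\n", cpu, hint);
		return 0;
	}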