find_idlest_group() returns NULL when the local group is idlest. The caller then continues the find_idlest_group() search at a lower level of the current CPU's sched_domain hierarchy. find_idlest_group_cpu() is not consulted and, crucially, @new_cpu is not updated. This means the search is pointless and we return @prev_cpu from select_task_rq_fair().
This is fixed by initialising @new_cpu to @cpu instead of @prev_cpu.

Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Reviewed-by: Josef Bacik <jba...@fb.com>
Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4a14ebca4d79..82a8e206657f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5955,7 +5955,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
 				  int cpu, int prev_cpu, int sd_flag)
 {
-	int new_cpu = prev_cpu;
+	int new_cpu = cpu;
 
 	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
 		return prev_cpu;
-- 
2.14.1