When p is allowed on none of the CPUs in the sched_domain, we currently return NULL from find_idlest_group, and pointlessly continue the search on lower sched_domain levels (where p is also not allowed) before returning prev_cpu regardless (as we have not updated new_cpu).
Add an explicit check for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.

Signed-off-by: Brendan Jackman <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Morten Rasmussen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
 kernel/sched/fair.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ce75bbcde45..26080917ff8d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5380,6 +5380,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
  */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
@@ -5567,6 +5569,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 {
 	int new_cpu = prev_cpu;
 
+	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
+		return prev_cpu;
+
 	while (sd) {
 		struct sched_group *group;
 		struct sched_domain *tmp;
--
2.14.1
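
For readers unfamiliar with this path, here is a minimal user-space sketch of
the descent described above, assuming a toy two-level topology. The structs,
the p_allowed_in_sd flag and the function names idlest_group()/idlest_cpu()
are illustrative stand-ins for the kernel's find_idlest_group()/
find_idlest_cpu(), not the real API; only the shape of the loop and the new
early return mirror the patched code.

/* Minimal sketch of the descent in find_idlest_cpu(); toy topology and
 * the p_allowed_in_sd flag are stand-ins for the real cpumask checks. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct group  { int idlest_cpu; };
struct domain { struct domain *child; struct group *candidate; };

/* Stand-in for find_idlest_group(): returns NULL both when the local group
 * is idlest and (before the fix) when p is allowed nowhere in sd. */
static struct group *idlest_group(struct domain *sd, bool p_allowed_in_sd)
{
	if (!p_allowed_in_sd)
		return NULL;
	return sd->candidate;
}

static int idlest_cpu(struct domain *sd, bool p_allowed_in_sd, int prev_cpu)
{
	int new_cpu = prev_cpu;

	/* The added check: skip the walk entirely, new_cpu could never change. */
	if (!p_allowed_in_sd)
		return prev_cpu;

	while (sd) {
		struct group *group = idlest_group(sd, p_allowed_in_sd);

		if (!group) {		/* local group is allowed and idlest */
			sd = sd->child;
			continue;
		}
		new_cpu = group->idlest_cpu;
		sd = sd->child;		/* simplified: the kernel re-picks sd for new_cpu */
	}
	return new_cpu;
}

int main(void)
{
	struct group g = { .idlest_cpu = 3 };
	struct domain mc  = { .child = NULL, .candidate = &g };
	struct domain die = { .child = &mc,  .candidate = &g };

	printf("p allowed in sd:   cpu %d\n", idlest_cpu(&die, true, 1));  /* 3 */
	printf("p allowed nowhere: cpu %d\n", idlest_cpu(&die, false, 1)); /* 1 == prev_cpu */
	return 0;
}

Built with gcc, the second call returns prev_cpu immediately instead of
pointlessly walking the child domain, which is the behaviour the patch adds.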

