Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu"), find_idlest_group_cpu() (formerly
find_idlest_cpu()) can no longer return -1, so drop the now-dead check
for that value in find_idlest_cpu().
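
For reference, here is a condensed sketch (not the verbatim kernel
source; the C-state and idle-timestamp tie-breaking is omitted) of
find_idlest_group_cpu() as it looks after 83a0a96a5f26. The fallback
value least_loaded_cpu starts out as the caller-provided this_cpu, so
the function always returns a valid CPU id and never -1:

  static int
  find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
  {
          unsigned long load, min_load = ULONG_MAX;
          int least_loaded_cpu = this_cpu;        /* valid CPU id, never -1 */
          int shallowest_idle_cpu = -1;
          int i;

          /* Traverse only the CPUs the task is allowed to run on */
          for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
                  if (idle_cpu(i)) {
                          /* Idle CPU found (shallowest C-state preferred) */
                          shallowest_idle_cpu = i;
                  } else if (shallowest_idle_cpu == -1) {
                          load = weighted_cpuload(cpu_rq(i));
                          if (load < min_load) {
                                  min_load = load;
                                  least_loaded_cpu = i;
                          }
                  }
          }

          /* Either an idle CPU or the least loaded (defaulting to this_cpu) */
          return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
  }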

Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Reviewed-by: Josef Bacik <jba...@fb.com>
Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7df72795b20b..8b12c76a8b62 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5971,7 +5971,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
                }
 
                new_cpu = find_idlest_group_cpu(group, p, cpu);
-               if (new_cpu == -1 || new_cpu == cpu) {
+               if (new_cpu == cpu) {
                        /* Now try balancing at a lower domain level of cpu */
                        sd = sd->child;
                        continue;
-- 
2.14.1
