On Sat, Oct 28 2017 at 09:59, Joel Fernandes wrote:
> find_idlest_group_cpu goes through CPUs of a group previously selected by
> find_idlest_group. find_idlest_group returns NULL if the local group is the
> selected one, so find_idlest_group_cpu is not executed if the group to which
> 'cpu' belongs is chosen. We are therefore always guaranteed to call
> find_idlest_group_cpu with a group to which cpu is non-local. This makes one
> of the conditions in find_idlest_group_cpu an impossible one, which we can
> get rid of.
>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Brendan Jackman <[email protected]>
> Cc: Dietmar <[email protected]>
> Signed-off-by: Joel Fernandes <[email protected]>
FWIW: Reviewed-by: Brendan Jackman <[email protected]>

> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 5c49fdb4c508..740602ce799f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5922,7 +5922,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
>  			}
>  		} else if (shallowest_idle_cpu == -1) {
>  			load = weighted_cpuload(cpu_rq(i));
> -			if (load < min_load || (load == min_load && i == this_cpu)) {
> +			if (load < min_load) {
>  				min_load = load;
>  				least_loaded_cpu = i;
>  			}
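
For anyone reading along, the guarantee the changelog relies on comes from the
caller rather than from find_idlest_group_cpu itself. Roughly sketched (from
memory, not a verbatim copy of the surrounding code in kernel/sched/fair.c, so
treat the loop context as an approximation):

	group = find_idlest_group(sd, p, cpu, sd_flag);
	if (!group) {
		/*
		 * The local group (the one containing 'cpu') was the best
		 * choice: descend into the child domain instead, so
		 * find_idlest_group_cpu() is never reached for it.
		 */
		sd = sd->child;
		continue;
	}

	/*
	 * Only reached with a group that does not contain this_cpu, so
	 * 'i == this_cpu' can never hold inside find_idlest_group_cpu().
	 */
	new_cpu = find_idlest_group_cpu(group, p, cpu);

Given that, the (load == min_load && i == this_cpu) arm being removed really is
dead code.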

