The following commit has been merged into the sched/core branch of tip:

Commit-ID:     2ab4092fc82d6001fdd9d51dbba27d04dec967e0
Gitweb:        https://git.kernel.org/tip/2ab4092fc82d6001fdd9d51dbba27d04dec967e0
Author:        Vincent Guittot <vincent.guit...@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:34 +02:00
Committer:     Ingo Molnar <mi...@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00
sched/fair: Spread out tasks evenly when not overloaded

When there is only one CPU per group, using the idle CPUs to evenly spread
tasks doesn't make sense and nr_running is a better metric.

Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Ben Segall <bseg...@google.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Juri Lelli <juri.le...@redhat.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Mike Galbraith <efa...@gmx.de>
Cc: morten.rasmus...@arm.com
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: hdan...@sina.com
Cc: pa...@linux.ibm.com
Cc: pa...@redhat.com
Cc: quentin.per...@arm.com
Cc: r...@surriel.com
Cc: sri...@linux.vnet.ibm.com
Cc: valentin.schnei...@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-8-git-send-email-vincent.guit...@linaro.org
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6a3db0..f489f60 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8591,18 +8591,34 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
-	if (busiest->group_type != group_overloaded &&
-	    (env->idle == CPU_NOT_IDLE ||
-	     local->idle_cpus <= (busiest->idle_cpus + 1)))
-		/*
-		 * If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt. idle CPUs, it is balanced. The imbalance
-		 * becomes significant if the diff is greater than 1 otherwise
-		 * we might end up just moving the imbalance to another
-		 * group.
-		 */
-		goto out_balanced;
+	if (busiest->group_type != group_overloaded) {
+		if (env->idle == CPU_NOT_IDLE)
+			/*
+			 * If the busiest group is not overloaded (and as a
+			 * result the local one too) but this CPU is already
+			 * busy, let another idle CPU try to pull task.
+			 */
+			goto out_balanced;
+
+		if (busiest->group_weight > 1 &&
+		    local->idle_cpus <= (busiest->idle_cpus + 1))
+			/*
+			 * If the busiest group is not overloaded
+			 * and there is no imbalance between this and busiest
+			 * group wrt idle CPUs, it is balanced. The imbalance
+			 * becomes significant if the diff is greater than 1
+			 * otherwise we might end up to just move the imbalance
+			 * on another group. Of course this applies only if
+			 * there is more than 1 CPU per group.
+			 */
+			goto out_balanced;
+
+		if (busiest->sum_h_nr_running == 1)
+			/*
+			 * busiest doesn't have any tasks waiting to run
+			 */
+			goto out_balanced;
+	}
 
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
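
To illustrate how the restructured check behaves, here is a minimal user-space
sketch of the new out_balanced decision. The struct sg_stats and struct
env_stats types and the out_balanced() helper are simplified stand-ins invented
for this example; only the field names and the three conditions are taken from
the patch above.

	/*
	 * Standalone sketch (not kernel code). Field names mirror the
	 * scheduler statistics used in find_busiest_group(); everything
	 * else is hypothetical and exists only for this illustration.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE };
	enum group_type { group_has_spare, group_overloaded };

	struct sg_stats {
		enum group_type group_type;
		unsigned int group_weight;	/* nr of CPUs in the group */
		unsigned int idle_cpus;
		unsigned int sum_h_nr_running;
	};

	struct env_stats {
		enum cpu_idle_type idle;
	};

	/* Returns true when the groups are considered balanced (out_balanced). */
	static bool out_balanced(const struct env_stats *env,
				 const struct sg_stats *local,
				 const struct sg_stats *busiest)
	{
		if (busiest->group_type == group_overloaded)
			return false;		/* always try to balance */

		/* This CPU is busy: leave the pull to an idle CPU. */
		if (env->idle == CPU_NOT_IDLE)
			return true;

		/*
		 * Comparing idle CPUs only makes sense when a group has more
		 * than one CPU; a diff of at most 1 idle CPU is not significant.
		 */
		if (busiest->group_weight > 1 &&
		    local->idle_cpus <= busiest->idle_cpus + 1)
			return true;

		/* Busiest group has no task waiting to run. */
		if (busiest->sum_h_nr_running == 1)
			return true;

		return false;
	}

	int main(void)
	{
		/*
		 * One CPU per group: the idle_cpus shortcut is skipped, so a
		 * group with 2 runnable tasks is no longer declared balanced
		 * and nr_running drives the decision instead.
		 */
		struct env_stats env = { .idle = CPU_IDLE };
		struct sg_stats local = { group_has_spare, 1, 0, 0 };
		struct sg_stats busiest = { group_has_spare, 1, 0, 2 };

		printf("balanced: %s\n",
		       out_balanced(&env, &local, &busiest) ? "yes" : "no");
		return 0;
	}

In the example, the pre-patch combined condition would have declared the groups
balanced (local->idle_cpus <= busiest->idle_cpus + 1 holds trivially with one
CPU per group); with the group_weight > 1 guard the sketch prints "balanced: no",
matching the changelog's point that nr_running is the better metric there.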