Commit-ID:  88a2f7ffd19245a75143a8180f75c0972cff0350
Gitweb:     https://git.kernel.org/tip/88a2f7ffd19245a75143a8180f75c0972cff0350
Author:     Valentin Schneider <valentin.schnei...@arm.com>
AuthorDate: Mon, 11 Feb 2019 17:59:45 +0000
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Sat, 9 Mar 2019 14:03:53 +0100

sched/fair: Tune down misfit NOHZ kicks

In this commit:

  3b1baa6496e6 ("sched/fair: Add 'group_misfit_task' load-balance type")

we set rq->misfit_task_load whenever the currently running task has a
utilization greater than 80% of rq->cpu_capacity. A non-zero value in
this field enables misfit load balancing.
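
For reference, a simplified sketch (not verbatim) of the fitness test
added by 3b1baa6496e6, assuming that era's task_util_est(), capacity_of()
and task_h_load() helpers; capacity_margin = 1280 encodes the 80%
threshold, since a task fits iff util * 1280 < capacity * 1024:

	static unsigned int capacity_margin = 1280; /* ~20% headroom */

	static inline int task_fits_capacity(struct task_struct *p, long capacity)
	{
		return capacity * 1024 > task_util_est(p) * capacity_margin;
	}

	static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
	{
		/* Clear the flag when there is no task or the task fits its CPU */
		if (!p || task_fits_capacity(p, capacity_of(cpu_of(rq)))) {
			rq->misfit_task_load = 0;
			return;
		}

		rq->misfit_task_load = task_h_load(p);
	}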

However, if the task being looked at is already running on a CPU of the
highest capacity, there is nothing more we can do for it. We already
spot this in update_sd_pick_busiest(), which prevents us from selecting
a sched_group of group_type == group_misfit_task as the busiest group,
but we don't do any of that in nohz_balancer_kick().
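
For comparison, a simplified sketch (not verbatim) of that existing
guard in update_sd_pick_busiest(), assuming the
group_smaller_max_cpu_capacity() helper of that era:

	/* Don't try to pull misfit tasks we can't help. */
	if (sgs->group_type == group_misfit_task &&
	    !group_smaller_max_cpu_capacity(sg, sds->local))
		return false;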

This means we could repeatedly kick NOHZ CPUs when there is no
load-balance improvement to be had.

Introduce a check_misfit_status() helper that returns true iff there is
a CPU in the system that could give more CPU capacity to an rq's misfit
task - IOW, there exists a CPU of higher capacity_orig, or the rq's CPU
is heavily pressured by rt/IRQ activity.
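
As a rough worked example of the rt/IRQ leg (the check_cpu_capacity()
reuse visible in the diff below), assuming sd->imbalance_pct == 117
(the MC-level default of the time) and rq->cpu_capacity_orig == 1024:
the rq counts as pressured once rq->cpu_capacity * 117 < 1024 * 100,
i.e. once rt/IRQ activity has shaved rq->cpu_capacity below ~875, about
15% of the CPU's original capacity.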

Signed-off-by: Valentin Schneider <valentin.schnei...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Dave Hansen <dave.han...@linux.intel.com>
Cc: dietmar.eggem...@arm.com
Cc: H. Peter Anvin <h...@zytor.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rik van Riel <r...@surriel.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: morten.rasmus...@arm.com
Cc: vincent.guit...@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-3-valentin.schnei...@arm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/fair.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6f7d39d4d45..f0d2f8a352bf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8058,6 +8058,18 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd)
                                (rq->cpu_capacity_orig * 100));
 }
 
+/*
+ * Check whether a rq has a misfit task and if it looks like we can actually
+ * help that task: we can migrate the task to a CPU of higher capacity, or
+ * the task's current CPU is heavily pressured.
+ */
+static inline int check_misfit_status(struct rq *rq, struct sched_domain *sd)
+{
+       return rq->misfit_task_load &&
+               (rq->cpu_capacity_orig < rq->rd->max_cpu_capacity ||
+                check_cpu_capacity(rq, sd));
+}
+
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
  * groups is inadequate due to ->cpus_allowed constraints.
@@ -9585,7 +9597,7 @@ static void nohz_balancer_kick(struct rq *rq)
        if (time_before(now, nohz.next_balance))
                goto out;
 
-       if (rq->nr_running >= 2 || rq->misfit_task_load) {
+       if (rq->nr_running >= 2) {
                flags = NOHZ_KICK_MASK;
                goto out;
        }
@@ -9623,6 +9635,18 @@ static void nohz_balancer_kick(struct rq *rq)
                }
        }
 
+       sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu));
+       if (sd) {
+               /*
+                * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
+                * to run the misfit task on.
+                */
+               if (check_misfit_status(rq, sd)) {
+                       flags = NOHZ_KICK_MASK;
+                       goto unlock;
+               }
+       }
+
        sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
        if (sd) {
                /*
