Date: Tue, 9 Apr 2019 20:23:16 +1000
Subject: [PATCH] kernel/sched: run nohz idle load balancer on HK_FLAG_MISC
 CPUs

The nohz idle load balancer runs on the lowest-numbered idle CPU,
which may well be an isolated one. That can interfere with isolated
workloads, so confine the balancer to HK_FLAG_MISC housekeeping CPUs.
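
For context, housekeeping_cpumask() (kernel/sched/isolation.c) returns
the mask of CPUs that retain the given housekeeping duty, falling back
to all possible CPUs when no isolation is configured. Roughly, as a
sketch (the real function also involves a static key and its details
vary across kernel versions):

    const struct cpumask *housekeeping_cpumask(enum hk_flags flags)
    {
            /* Isolation configured: only housekeeping CPUs qualify. */
            if (housekeeping_flags & flags)
                    return housekeeping_mask;
            /* No isolation: every possible CPU is housekeeping. */
            return cpu_possible_mask;
    }

Iterating nohz.idle_cpus_mask AND this mask therefore skips isolated
CPUs whenever isolation is in effect.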

HK_FLAG_SCHED is not used for this because it is not set anywhere
at the moment. This change could be folded into HK_FLAG_SCHED once
that option is fixed.

The problem was observed as increased jitter in an application
running on CPU0, caused by nohz idle load balancing being run on
CPU1 (CPU0's SMT sibling).

Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
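(Note, not intended for the changelog: the jitter should be
reproducible with an isolation setup along these lines; the CPU list
here is illustrative, not taken from the original report.

    nohz_full=0-3

Booting with nohz_full=0-3 removes CPUs 0-3 from the HK_FLAG_MISC
housekeeping mask, yet before this patch find_new_ilb() could still
select one of them, since it simply took the first CPU in
nohz.idle_cpus_mask.)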
 kernel/sched/fair.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fdab7eb6f351..d29ca323214d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9522,22 +9522,26 @@ static inline int on_null_domain(struct rq *rq)
  * - When one of the busy CPUs notice that there may be an idle rebalancing
  *   needed, they will kick the idle load balancer, which then does idle
  *   load balancing for all the idle CPUs.
+ * - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED is not
+ *   set anywhere yet.
  */
 
 static inline int find_new_ilb(void)
 {
-       int ilb = cpumask_first(nohz.idle_cpus_mask);
+       int ilb;
 
-       if (ilb < nr_cpu_ids && idle_cpu(ilb))
-               return ilb;
+       for_each_cpu_and(ilb, nohz.idle_cpus_mask,
+                             housekeeping_cpumask(HK_FLAG_MISC)) {
+               if (idle_cpu(ilb))
+                       return ilb;
+       }
 
        return nr_cpu_ids;
 }
 
 /*
- * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
- * nohz_load_balancer CPU (if there is one) otherwise fallback to any idle
- * CPU (if there is one).
+ * Kick a CPU to do the nohz balancing, if it is time for it. We pick any
+ * idle CPU in the HK_FLAG_MISC housekeeping set (if there is one).
  */
 static void kick_ilb(unsigned int flags)
 {
-- 
2.20.1
