Commit-ID:  edd8e41d2e3cbd6ebe13ead30eb1adc6f48cbb33
Gitweb:     http://git.kernel.org/tip/edd8e41d2e3cbd6ebe13ead30eb1adc6f48cbb33
Author:     Peter Zijlstra <pet...@infradead.org>
AuthorDate: Thu, 7 Sep 2017 17:03:51 +0200
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 12 Sep 2017 17:41:04 +0200

sched/fair: Plug hole between hotplug and active_load_balance()

The load balancer applies cpu_active_mask to whatever sched_domains it
finds; however, in the case of active_balance there is a hole between
setting rq->{active_balance,push_cpu} and running the stop-work that
does the actual migration.

If @push_cpu goes offline in this window, we can end up moving a task
onto a dead CPU, which is a fairly bad thing.

Double-check the active mask before the stop-work does the migration.
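
For context, a condensed sketch of the queueing side in load_balance()
(paraphrased from kernel/sched/fair.c of this era; error handling and
the surrounding logic are elided where marked):

  /* With busiest's rq lock held: mark an active balance pending. */
  if (!busiest->active_balance) {
          busiest->active_balance = 1;
          busiest->push_cpu = this_cpu;
          active_balance = 1;
  }
  raw_spin_unlock_irqrestore(&busiest->lock, flags);

  if (active_balance) {
          /*
           * Queue the stop-work; it runs later on busiest's CPU
           * stopper thread. Nothing in this window prevents
           * @push_cpu from going inactive before
           * active_load_balance_cpu_stop() executes.
           */
          stop_one_cpu_nowait(cpu_of(busiest),
                              active_load_balance_cpu_stop, busiest,
                              &busiest->active_balance_work);
  }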

  CPU0                                  CPU1

  <SoftIRQ>
                                        stop_machine(takedown_cpu)
    load_balance()                      cpu_stopper_thread()
      ...                                 work = multi_cpu_stop
      stop_one_cpu_nowait(                  /* wait for CPU0 */
        .func = active_load_balance_cpu_stop
      );
  </SoftIRQ>

  cpu_stopper_thread()
    work = multi_cpu_stop
      /* sync with CPU1 */
                                            take_cpu_down()
                                        <idle>
                                          play_dead();

    work = active_load_balance_cpu_stop
      set_task_cpu(p, CPU1); /* oops!! */

Reported-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Link: http://lkml.kernel.org/r/20170907150614.044460...@infradead.org
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/fair.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3bcea40..efeebed 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8560,6 +8560,13 @@ static int active_load_balance_cpu_stop(void *data)
        struct rq_flags rf;
 
        rq_lock_irq(busiest_rq, &rf);
+       /*
+        * Between queueing the stop-work and running it is a hole in which
+        * CPUs can become inactive. We should not move tasks from or to
+        * inactive CPUs.
+        */
+       if (!cpu_active(busiest_cpu) || !cpu_active(target_cpu))
+               goto out_unlock;
 
        /* make sure the requested cpu hasn't gone down in the meantime */
        if (unlikely(busiest_cpu != smp_processor_id() ||
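
For reference, where the added check lands; a condensed sketch of the
resulting shape of active_load_balance_cpu_stop() (paraphrased, most of
the body elided):

  static int active_load_balance_cpu_stop(void *data)
  {
          struct rq *busiest_rq = data;
          int busiest_cpu = cpu_of(busiest_rq);
          int target_cpu = busiest_rq->push_cpu;
          struct rq_flags rf;

          rq_lock_irq(busiest_rq, &rf);

          /* New bail-out: both ends of the migration must be active. */
          if (!cpu_active(busiest_cpu) || !cpu_active(target_cpu))
                  goto out_unlock;

          /* ... existing sanity checks and the actual migration ... */

  out_unlock:
          /* Clear the flag set by load_balance(), then unlock. */
          busiest_rq->active_balance = 0;
          rq_unlock(busiest_rq, &rf);
          /* ... attach any detached task, re-enable interrupts ... */
          return 0;
  }

Bailing out via out_unlock matters: it clears busiest_rq->active_balance,
so a later load_balance() pass is free to queue new stop-work once the
CPUs are active again.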
