4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Paul E. McKenney <paul...@linux.vnet.ibm.com>

commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.

The current implementation of synchronize_sched_expedited() incorrectly
assumes that resched_cpu() is unconditional, which it is not: resched_cpu()
acquires the target CPU's runqueue lock with a trylock and simply returns
if that trylock fails, without setting need_resched.  This means that
synchronize_sched_expedited() can hang when resched_cpu()'s trylock fails,
as follows (analysis by Neeraj Upadhyay):

o       CPU1 is waiting for expedited wait to complete:

        sync_rcu_exp_select_cpus
             rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
             IPI sent to CPU5

        synchronize_sched_expedited_wait
                 ret = swait_event_timeout(rsp->expedited_wq,
                                           sync_rcu_preempt_exp_done(rnp_root),
                                           jiffies_stall);

        expmask = 0x20, CPU5 in idle path (in cpuidle_enter())

o       CPU5 handles IPI and fails to acquire rq lock.

        Handles IPI
             sync_sched_exp_handler
                 resched_cpu
                     returns early because raw_spin_trylock_irqsave()
                     fails to acquire rq->lock
                 need_resched is not set

o       CPU5 calls rcu_idle_enter() and, because need_resched is not set,
        goes idle (schedule() is not called).

o       CPU1 reports an RCU stall.
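
To make the lost-kick pattern concrete, here is a minimal userspace C
sketch (an analogy, not kernel code: the lock, flag, and function names
are invented for illustration) showing how a trylock-based kick is
silently dropped when the lock is contended, while an unconditional lock
cannot lose it:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool need_resched;	/* stand-in for the per-CPU need_resched flag */

/* Pre-fix pattern: give up silently if the lock is contended. */
static void kick_trylock(void)
{
	if (pthread_mutex_trylock(&lock) != 0)
		return;		/* lock contended: the kick is silently lost */
	need_resched = true;
	pthread_mutex_unlock(&lock);
}

/* Post-fix pattern: wait for the lock, so the kick cannot be lost. */
static void kick_unconditional(void)
{
	pthread_mutex_lock(&lock);
	need_resched = true;
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	/* trylock never blocks; it returns EBUSY when the mutex is
	 * already locked, even by the calling thread, which lets a
	 * single thread simulate contention on rq->lock here. */
	pthread_mutex_lock(&lock);
	kick_trylock();			/* trylock fails, kick is dropped */
	pthread_mutex_unlock(&lock);
	printf("after contended trylock kick: need_resched=%d\n",
	       need_resched);		/* prints 0: CPU would stay idle */

	kick_unconditional();		/* always sets the flag */
	printf("after unconditional kick:    need_resched=%d\n",
	       need_resched);		/* prints 1: CPU would reschedule */
	return 0;
}

In the kernel, the waiter is synchronize_sched_expedited() polling
sync_rcu_preempt_exp_done(), and the dropped flag means CPU5 re-enters
idle instead of passing through schedule(), so the expedited grace
period never completes.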

Given that resched_cpu() is now used only by RCU, this commit fixes the
assumption by making resched_cpu() unconditional.
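
For reference, resched_cpu() after this change reads as follows
(reconstructed purely from the hunk below, including its unchanged
context lines):

void resched_cpu(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long flags;

	raw_spin_lock_irqsave(&rq->lock, flags);
	resched_curr(rq);
	raw_spin_unlock_irqrestore(&rq->lock, flags);
}

Because resched_cpu() is now called only from RCU's low-frequency paths,
taking rq->lock unconditionally here should not add meaningful contention.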

Reported-by: Neeraj Upadhyay <neer...@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rost...@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 kernel/sched/core.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
        struct rq *rq = cpu_rq(cpu);
        unsigned long flags;
 
-       if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-               return;
+       raw_spin_lock_irqsave(&rq->lock, flags);
        resched_curr(rq);
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }

