On Tue, Sep 19, 2017 at 11:58:59AM -0400, Steven Rostedt wrote:
> On Tue, 19 Sep 2017 08:31:26 -0700
> "Paul E. McKenney" <paul...@linux.vnet.ibm.com> wrote:
> 
> > commit bc43e2e7e08134e6f403ac845edcf4f85668d803
> > Author: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> > Date:   Mon Sep 18 08:54:40 2017 -0700
> > 
> >     sched: Make resched_cpu() unconditional
> >     
> >     The current implementation of synchronize_sched_expedited() incorrectly
> >     assumes that resched_cpu() is unconditional, which it is not.  This means
> >     that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
> >     fails as follows (analysis by Neeraj Upadhyay):
> >     
> >     o    CPU1 is waiting for expedited wait to complete:
> >          sync_rcu_exp_select_cpus
> >              rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
> >              IPI sent to CPU5
> >     
> >          synchronize_sched_expedited_wait
> >              ret = swait_event_timeout(rsp->expedited_wq,
> >                                        sync_rcu_preempt_exp_done(rnp_root),
> >                                        jiffies_stall);
> >     
> >          expmask = 0x20, and CPU 5 is in idle path (in cpuidle_enter())
> >     
> >     o    CPU5 handles IPI and fails to acquire rq lock.
> >     
> >     Handles IPI
> >          sync_sched_exp_handler
> >              resched_cpu
> >                  returns after failing to acquire rq->lock via trylock
> >          need_resched is not set
> >     
> >     o    CPU5 calls rcu_idle_enter() and, as need_resched is not set, goes to
> >          idle (schedule() is not called).
> >     
> >     o    CPU 1 reports RCU stall.
> >     
> >     Given that resched_cpu() is used only by RCU, this commit fixes the
> 
> "is now only used by RCU", as it was created for another purpose.

Good catch, fixed.

> >     assumption by making resched_cpu() unconditional.
> >     
> >     Reported-by: Neeraj Upadhyay <neer...@codeaurora.org>
> >     Suggested-by: Neeraj Upadhyay <neer...@codeaurora.org>
> >     Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> >     Cc: Peter Zijlstra <pet...@infradead.org>
> >     Cc: Steven Rostedt <rost...@goodmis.org>
> 
> Acked-by: Steven Rostedt (VMware) <rost...@goodmis.org>

Applied, thank you!

                                                        Thanx, Paul

> -- Steve
> 
> > 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index cab8c5ec128e..b2281971894c 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -505,8 +505,7 @@ void resched_cpu(int cpu)
> >     struct rq *rq = cpu_rq(cpu);
> >     unsigned long flags;
> >  
> > -   if (!raw_spin_trylock_irqsave(&rq->lock, flags))
> > -           return;
> > +   raw_spin_lock_irqsave(&rq->lock, flags);
> >     resched_curr(rq);
> >     raw_spin_unlock_irqrestore(&rq->lock, flags);
> >  }
> 
