On Mon, Sep 18, 2017 at 09:24:12AM -0700, Paul E. McKenney wrote:
> On Mon, Sep 18, 2017 at 12:12:13PM -0400, Steven Rostedt wrote:
> > On Mon, 18 Sep 2017 09:01:25 -0700
> > "Paul E. McKenney" <paul...@linux.vnet.ibm.com> wrote:
> > 
> > 
> > >     sched: Make resched_cpu() unconditional
> > >     
> > >     The current implementation of synchronize_sched_expedited()
> > >     incorrectly assumes that resched_cpu() is unconditional, which
> > >     it is not.  This means that synchronize_sched_expedited() can
> > >     hang when resched_cpu()'s trylock fails as follows (analysis by
> > >     Neeraj Upadhyay):
> > >     
> > >     o    CPU1 is waiting for the expedited wait to complete:
> > >          sync_rcu_exp_select_cpus
> > >               rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
> > >               IPI sent to CPU5
> > >
> > >          synchronize_sched_expedited_wait
> > >               ret = swait_event_timeout(
> > >                              rsp->expedited_wq,
> > >                              sync_rcu_preempt_exp_done(rnp_root),
> > >                              jiffies_stall);
> > >
> > >          expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())
> > >     
> > >     o    CPU5 handles the IPI but fails to acquire rq->lock.
> > >
> > >          Handles IPI
> > >               sync_sched_exp_handler
> > >                    resched_cpu
> > >                         returns after failing to acquire rq->lock via trylock
> > >               need_resched is not set
> > >     
> > >     o    CPU5 calls rcu_idle_enter() and, as need_resched is not set,
> > >          goes idle (schedule() is not called).
> > >     
> > >     o    CPU1 reports an RCU stall.
> > >     
> > >     Given that resched_cpu() is used only by RCU, this commit fixes the
> > >     assumption by making resched_cpu() unconditional.
> > 
> > Probably want to run this with several workloads with lockdep enabled
> > first.
> 
> As soon as I work through the backlog of lockdep complaints that
> appeared in the last merge window...  :-(

And this patch survived all rcutorture scenarios, including those with
lockdep enabled.  There were failures, but these are pre-existing issues
I am chasing: lost timeouts on TREE01, and rt_mutex trying to awaken an
offline CPU in TREE03.

So I have this one queued.  Objections?

                                                        Thanx, Paul

------------------------------------------------------------------------

commit bc43e2e7e08134e6f403ac845edcf4f85668d803
Author: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Date:   Mon Sep 18 08:54:40 2017 -0700

    sched: Make resched_cpu() unconditional
    
    The current implementation of synchronize_sched_expedited() incorrectly
    assumes that resched_cpu() is unconditional, which it is not.  This means
    that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
    fails as follows (analysis by Neeraj Upadhyay):
    
    o    CPU1 is waiting for the expedited wait to complete:
         sync_rcu_exp_select_cpus
              rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
              IPI sent to CPU5

         synchronize_sched_expedited_wait
              ret = swait_event_timeout(
                             rsp->expedited_wq,
                             sync_rcu_preempt_exp_done(rnp_root),
                             jiffies_stall);

         expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())
    
    o    CPU5 handles the IPI but fails to acquire rq->lock.

         Handles IPI
              sync_sched_exp_handler
                   resched_cpu
                        returns after failing to acquire rq->lock via trylock
              need_resched is not set
    
    o    CPU5 calls rcu_idle_enter() and, as need_resched is not set, goes
         idle (schedule() is not called).
    
    o    CPU1 reports an RCU stall.
    
    Given that resched_cpu() is used only by RCU, this commit fixes the
    assumption by making resched_cpu() unconditional.
    
    Reported-by: Neeraj Upadhyay <neer...@codeaurora.org>
    Suggested-by: Neeraj Upadhyay <neer...@codeaurora.org>
    Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <pet...@infradead.org>
    Cc: Steven Rostedt <rost...@goodmis.org>
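
For illustration, here is a minimal user-space sketch of the idle-path
gating described above.  Everything in it (need_resched, do_idle(), the
kick in main()) is an illustrative stand-in rather than a kernel API:
the point is only that the loop leaves idle after it observes
need_resched, so a kick that never sets the flag parks the CPU
indefinitely.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_bool need_resched;

    /* Stand-in for the idle loop: leaves idle only once the flag is
     * observed. */
    static void do_idle(void)
    {
            while (!atomic_load(&need_resched))
                    usleep(1000);   /* stand-in for cpuidle_enter() */
            /* schedule() would run here, letting the expedited grace
             * period complete */
    }

    int main(void)
    {
            /* The kick that must not be lost: if the flag is never
             * set, do_idle() spins forever. */
            atomic_store(&need_resched, true);
            do_idle();
            puts("left idle");
            return 0;
    }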

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cab8c5ec128e..b2281971894c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
        struct rq *rq = cpu_rq(cpu);
        unsigned long flags;
 
-       if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-               return;
+       raw_spin_lock_irqsave(&rq->lock, flags);
        resched_curr(rq);
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
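
For completeness, here is a user-space sketch of the before/after
semantics of the patch, with a pthread mutex standing in for rq->lock.
Again, this is only an illustration; kick_trylock() and
kick_unconditional() are made-up names, not kernel functions.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool need_resched;       /* protected by rq_lock */

    /* Old behavior: if the lock is contended, the kick is silently
     * lost and the caller is never told to retry. */
    static bool kick_trylock(void)
    {
            if (pthread_mutex_trylock(&rq_lock) != 0)
                    return false;
            need_resched = true;
            pthread_mutex_unlock(&rq_lock);
            return true;
    }

    /* New behavior: may wait briefly for the lock, but the kick
     * always lands. */
    static void kick_unconditional(void)
    {
            pthread_mutex_lock(&rq_lock);
            need_resched = true;
            pthread_mutex_unlock(&rq_lock);
    }

    int main(void)
    {
            (void)kick_trylock();   /* can silently do nothing under contention */
            kick_unconditional();   /* always sets the flag */
            return need_resched ? 0 : 1;
    }

The trade-off is that resched_cpu() may now spin waiting for a
contended rq->lock, which the commit log justifies by noting that RCU
is its only user.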
