On Wed, Sep 30, 2020 at 03:10:26PM -0400, Steven Rostedt wrote:
> On Wed, 30 Sep 2020 20:13:23 +0200
> Peter Zijlstra <[email protected]> wrote:
> 
> > > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > > index 6a584b3e5c74..3e5bc1dd71c6 100644
> > > --- a/include/linux/lockdep.h
> > > +++ b/include/linux/lockdep.h
> > > @@ -550,7 +550,8 @@ do {						\
> > >  
> > >  #define lockdep_assert_irqs_disabled()			\
> > >  do {							\
> > > -	WARN_ON_ONCE(debug_locks && raw_cpu_read(hardirqs_enabled));	\
> > > +	WARN_ON_ONCE(debug_locks && raw_cpu_read(hardirqs_enabled) &&	\
> > > +		     likely(!(current->lockdep_recursion & LOCKDEP_RECURSION_MASK)));\
> > >  } while (0)
> > 
> > Blergh, IIRC there's header hell that way. The sane fix is killing off
> > that trace_*_rcuidle() disease.
> 
> Really?
> 
> I could run this through all my other tests to see if that is the case.
> That is, to see if it stumbles across header hell.

I went through a lot of pain to make that per-cpu precisely to avoid
using current. But that might've been driven by
lockdep_assert_preemption_disabled(), which is used in seqlock.h, which
in turn is included all over the place.
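
(For reference, the per-cpu flavour looks roughly like the below; this
is a from-memory sketch of include/linux/lockdep.h, not a verbatim
copy. The point is that raw_cpu_read() never dereferences current, so
seqlock.h can use the assertion without dragging in sched.h:

	/* Sketch, from memory; not verbatim kernel source. */
	#define lockdep_assert_preemption_disabled()			\
	do {								\
		WARN_ON_ONCE(debug_locks		&&		\
			     (preempt_count() == 0	&&		\
			      raw_cpu_read(hardirqs_enabled)));		\
	} while (0)
)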

That said, there are at least two things we can do:

 - make lockdep_recursion per-cpu too; IIRC we only ever set it while
   we have IRQs disabled anyway (see the sketch after this list).

OR

 - inspired by the above, we can save/clear and then restore
   hardirqs_enabled when we frob lockdep_recursion.

Admittedly, the second is somewhat gross :-)
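
For concreteness, the first option could look something like this
(hypothetical, untested sketch; it leans on the assumption above that
lockdep_recursion is only ever non-zero with IRQs disabled, so a
per-cpu counter cannot be observed from the wrong CPU):

	/* Hypothetical sketch of option 1: make lockdep_recursion per-cpu. */
	DECLARE_PER_CPU(unsigned int, lockdep_recursion);

	#define lockdep_assert_irqs_disabled()				\
	do {								\
		WARN_ON_ONCE(debug_locks			&&	\
			     raw_cpu_read(hardirqs_enabled)	&&	\
			     !raw_cpu_read(lockdep_recursion));		\
	} while (0)

The second option would instead have the lockdep_recursion set/clear
paths save the per-cpu hardirqs_enabled value, zero it, and restore it
on the way out; same net effect, but it mutates the IRQ-tracing state
from the recursion path, hence the grossness.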
