On Wed, 30 Sep 2020 20:13:23 +0200
Peter Zijlstra <pet...@infradead.org> wrote:

> > diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> > index 6a584b3e5c74..3e5bc1dd71c6 100644
> > --- a/include/linux/lockdep.h
> > +++ b/include/linux/lockdep.h
> > @@ -550,7 +550,8 @@ do {                                                       \
> >
> >  #define lockdep_assert_irqs_disabled()                                 \
> >  do {                                                                   \
> > -       WARN_ON_ONCE(debug_locks && raw_cpu_read(hardirqs_enabled));    \
> > +       WARN_ON_ONCE(debug_locks && raw_cpu_read(hardirqs_enabled) &&   \
> > +               likely(!(current->lockdep_recursion & LOCKDEP_RECURSION_MASK))); \
> >  } while (0)
> 
> Blergh, IIRC there's header hell that way. The sane fix is killing off
> that trace_*_rcuidle() disease.

Really?

I could run this through all my other tests and see whether it actually
stumbles into header hell.
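
(For anyone following along: the idea of that hunk is to skip the assertion
while lockdep/tracing internals are already running, keyed off the per-task
recursion counter. Not kernel code, but here is a minimal userspace sketch of
that guard pattern, with made-up names:)

#include <stdio.h>

static __thread unsigned int in_instrumentation;   /* ~ lockdep_recursion */

#define assert_irqs_disabled(irqs_enabled)                              \
do {                                                                    \
        /* Only warn when we are NOT inside the tracer/lockdep core */  \
        if (!in_instrumentation && (irqs_enabled))                      \
                fprintf(stderr, "WARN: irqs enabled at %s:%d\n",        \
                        __FILE__, __LINE__);                            \
} while (0)

static void instrumentation_hook(int irqs_enabled)
{
        in_instrumentation++;                   /* enter guarded region */
        assert_irqs_disabled(irqs_enabled);     /* suppressed, no false warning */
        in_instrumentation--;                   /* leave guarded region */
}

int main(void)
{
        instrumentation_hook(1);        /* no warning: guard is set */
        assert_irqs_disabled(1);        /* warns: guard is clear */
        return 0;
}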

> 
> But I think this will also cure it.
> 
> ---
> diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
> index 6a339ce328e0..4f90293d170b 100644
> --- a/arch/x86/kernel/unwind_orc.c
> +++ b/arch/x86/kernel/unwind_orc.c
> @@ -432,7 +432,7 @@ bool unwind_next_frame(struct unwind_state *state)
>               return false;
>  
>       /* Don't let modules unload while we're reading their ORC data. */
> -     preempt_disable();
> +     preempt_disable_notrace();
>  
>       /* End-of-stack check for user tasks: */
>       if (state->regs && user_mode(state->regs))
> @@ -612,14 +612,14 @@ bool unwind_next_frame(struct unwind_state *state)
>               goto err;
>       }
>  
> -     preempt_enable();
> +     preempt_enable_notrace();
>       return true;
>  
>  err:
>       state->error = true;
>  
>  the_end:
> -     preempt_enable();
> +     preempt_enable_notrace();
>       state->stack_info.type = STACK_TYPE_UNKNOWN;
>       return false;
>  }

I think you are going to play whack-a-mole with this approach. This will
happen any time a traced preempt_disable() is hit from within lockdep
internal code.
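
For anyone not seeing the recursion: the plain preempt_disable() fires a
tracepoint, and the tracer's own machinery (stack unwinding, the rcuidle
entry, ...) can land back in that same traced primitive. Roughly, as a
userspace sketch with invented names (not the kernel implementation):

#include <stdio.h>

static int preempt_count;

static void preempt_disable_notrace(void);

static void unwind_one_frame(void)
{
        /*
         * The unwinder runs from inside the tracepoint handler.  With the
         * plain traced preempt_disable() this would re-enter the handler
         * and recurse; the notrace flavor breaks the loop.
         */
        preempt_disable_notrace();
        printf("unwinding, preempt_count=%d\n", preempt_count);
        preempt_count--;                /* "preempt_enable_notrace()" */
}

static void trace_preempt_hook(void)
{
        unwind_one_frame();             /* e.g. the stack tracer */
}

static void preempt_disable(void)
{
        preempt_count++;
        trace_preempt_hook();           /* traced variant calls back into tracing */
}

static void preempt_disable_notrace(void)
{
        preempt_count++;                /* no tracepoint: safe inside the tracer */
}

int main(void)
{
        preempt_disable();
        return 0;
}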

I just hit this:

register_lock_class
  assign_lock_key
    __is_module_percpu_address
      preempt_disable
         trace_preempt_disable
            rcu_irq_enter_irqson
              [..]


Same thing, different path.
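
Which is the point of the lockdep_recursion check in the first hunk: one
guard at the heart of the instrumentation covers every path, whereas the
_notrace conversion has to be repeated for each new caller. Another small
sketch, again with hypothetical names:

#include <stdio.h>

static __thread int lockdep_active;     /* set while lockdep internals run */

static void traced_preempt_disable(void)
{
        /* One central guard here covers every caller at once ... */
        if (lockdep_active) {
                puts("tracepoint suppressed (inside lockdep)");
                return;
        }
        puts("tracepoint fires");
}

/* ... whereas per-site fixes need one _notrace conversion per path: */
static void unwind_next_frame(void)            { traced_preempt_disable(); }
static void is_module_percpu_address(void)     { traced_preempt_disable(); }

int main(void)
{
        lockdep_active = 1;
        unwind_next_frame();            /* the path Peter's patch covers */
        is_module_percpu_address();     /* the new path hit above */
        lockdep_active = 0;
        return 0;
}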

-- Steve
