On Wed, Oct 21, 2020 at 11:27:57AM -0400, Steven Rostedt wrote:
> On Wed, 21 Oct 2020 17:12:37 +0200
> Peter Zijlstra <pet...@infradead.org> wrote:
>
> > > > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > > > index 3e99dfef8408..9f818145ef7d 100644
> > > > --- a/kernel/locking/lockdep.c
> > > > +++ b/kernel/locking/lockdep.c
> > > > @@ -4057,9 +4057,6 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
> > > >  	if (unlikely(in_nmi()))
> > > >  		return;
> > > >
> > > > -	if (unlikely(__this_cpu_read(lockdep_recursion)))
> > > > -		return;
> > > > -
> > > >  	if (unlikely(lockdep_hardirqs_enabled())) {
> > >
> > > Hmm, would moving the recursion check below the check of the
> > > lockdep_hardirqs_enable() cause a large skew in the spurious enable stats?
> > > May not be an issue, but something we should check to make sure that
> > > there's not a path that constantly hits this.
> >
> > Anything that sets recursion will have interrupts disabled.
>
> It may have interrupts disabled, but does it have the hardirqs_enabled
> per_cpu variable set? The above check only looks at that, and doesn't check
> if interrupts are actually enabled.
>
> For example, if lockdep is processing a mutex, it would set the recursion
> variable, but does it ever set the hardirqs_enabled variable to off?
Bah, I can't read. I was looking at:

	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))

but that wasn't what I actually moved around. *sigh*..

Ah well, I'll just remove the __ here. It's not like we super care about
performance here.

Something like so then..

---
Subject: lockdep: Fix preemption WARN for spurious IRQ-enable
From: Peter Zijlstra <pet...@infradead.org>
Date: Thu Oct 22 12:23:02 CEST 2020

It is valid (albeit uncommon) to call local_irq_enable() without first
having called local_irq_disable(). In this case we enter
lockdep_hardirqs_on*() with IRQs enabled and trip a preemption warning
for using __this_cpu_read().

Use this_cpu_read() instead to avoid the warning.

Fixes: 4d004099a6 ("lockdep: Fix lockdep recursion")
Reported-by: syzbot+53f8ce8bbc07924b6...@syzkaller.appspotmail.com
Reported-by: kernel test robot <l...@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
---
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4057,7 +4057,7 @@ void lockdep_hardirqs_on_prepare(unsigne
 	if (unlikely(in_nmi()))
 		return;
 
-	if (unlikely(__this_cpu_read(lockdep_recursion)))
+	if (unlikely(this_cpu_read(lockdep_recursion)))
 		return;
 
 	if (unlikely(lockdep_hardirqs_enabled())) {
@@ -4126,7 +4126,7 @@ void noinstr lockdep_hardirqs_on(unsigne
 		goto skip_checks;
 	}
 
-	if (unlikely(__this_cpu_read(lockdep_recursion)))
+	if (unlikely(this_cpu_read(lockdep_recursion)))
 		return;
 
 	if (lockdep_hardirqs_enabled()) {