I'm able to reproduce a lockdep splat when CONFIG_PROVE_LOCKING=y and
CONFIG_PREEMPTIRQ_EVENTS=y are set, by enabling the preempt_enable tracepoint:

$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable
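
For reference, with this change applied __local_bh_enable() reads roughly as
in the sketch below. The code mirrors the hunk further down; the comments are
editorial and describe my understanding of the intent: emit the preempt_enable
tracepoint by hand when the count is about to drop back to the enabled state,
and use the untraced __preempt_count_sub() so the decrement itself does not
recurse into the preempt tracepoints while interrupts are disabled here.

    /* Sketch of kernel/softirq.c:__local_bh_enable() after this patch. */
    static void __local_bh_enable(unsigned int cnt)
    {
            /* Callers are expected to have interrupts off here. */
            lockdep_assert_irqs_disabled();

            /*
             * If subtracting 'cnt' brings the preempt count to zero,
             * preemption is effectively being re-enabled, so record the
             * preempt_on event explicitly before dropping the count.
             */
            if (preempt_count() == cnt)
                    trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());

            /* Softirqs become re-enabled once the SOFTIRQ bits hit zero. */
            if (softirq_count() == (cnt & SOFTIRQ_MASK))
                    trace_softirqs_on(_RET_IP_);

            /*
             * Raw, untraced decrement instead of preempt_count_sub(),
             * presumably to keep this path from re-entering the preempt
             * tracer with interrupts disabled.
             */
            __preempt_count_sub(cnt);
    }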

Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Cc: Tom Zanussi <tom.zanu...@linux.intel.com>
Cc: Namhyung Kim <namhy...@kernel.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Boqun Feng <boqun.f...@gmail.com>
Cc: Paul McKenney <paul...@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweis...@gmail.com>
Cc: Randy Dunlap <rdun...@infradead.org>
Cc: Masami Hiramatsu <mhira...@kernel.org>
Cc: Fengguang Wu <fengguang...@intel.com>
Cc: Baohong Liu <baohong....@intel.com>
Cc: Vedang Patel <vedang.pa...@intel.com>
Cc: kernel-t...@android.com
Signed-off-by: Joel Fernandes <joe...@google.com>
---
 kernel/softirq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 24d243ef8e71..47e2f61938c0 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
        lockdep_assert_irqs_disabled();
 
+       if (preempt_count() == cnt)
+               trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
        if (softirq_count() == (cnt & SOFTIRQ_MASK))
                trace_softirqs_on(_RET_IP_);
-       preempt_count_sub(cnt);
+
+       __preempt_count_sub(cnt);
 }
 
 /*
-- 
2.17.0.441.gb46fe60e1d-goog
