Now that account_hardirq_enter() is called after HARDIRQ_OFFSET has
been incremented, there is nothing left that prevents us from also
moving tick_irq_enter() after HARDIRQ_OFFSET is incremented.

The desired outcome is to remove the nasty local_bh_disable()/
_local_bh_enable() hack around tick_irq_enter(), whose only purpose was
to prevent raise_softirq() from needlessly waking ksoftirqd while the
softirq will be serviced on return from the interrupt anyway. Since
HARDIRQ_OFFSET is now already set when tick_irq_enter() runs,
in_interrupt() is true and raise_softirq() takes the hardirq bottom
half path by itself. Also tick_irq_enter() then becomes appropriately
covered by lockdep.
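
For reference, a minimal user-space sketch (not part of the patch; the
mask values are assumptions copied from include/linux/preempt.h) of why
the new check is equivalent to the old one: irq_count() ==
HARDIRQ_OFFSET evaluated after __irq_enter_raw() has added
HARDIRQ_OFFSET matches !in_interrupt() evaluated before the increment.

    #include <assert.h>
    #include <stdio.h>

    /* Assumed preempt_count layout, per include/linux/preempt.h. */
    #define SOFTIRQ_OFFSET  (1UL << 8)
    #define HARDIRQ_OFFSET  (1UL << 16)
    #define SOFTIRQ_MASK    (0xffUL << 8)
    #define HARDIRQ_MASK    (0xfUL << 16)
    #define NMI_MASK        (0xfUL << 20)

    static unsigned long preempt_count;

    /* irq_count(): hardirq, softirq and NMI bits of preempt_count. */
    static unsigned long irq_count(void)
    {
            return preempt_count & (HARDIRQ_MASK | SOFTIRQ_MASK | NMI_MASK);
    }

    int main(void)
    {
            /* Interrupt fires outside any irq/softirq/NMI context. */
            preempt_count = 0;
            int old_cond = (irq_count() == 0);       /* !in_interrupt() */
            preempt_count += HARDIRQ_OFFSET;         /* __irq_enter_raw() */
            int new_cond = (irq_count() == HARDIRQ_OFFSET);
            assert(old_cond && new_cond);

            /* Interrupt fires while a softirq is being serviced:
             * both the old and the new check must reject this case. */
            preempt_count = SOFTIRQ_OFFSET;
            old_cond = (irq_count() == 0);
            preempt_count += HARDIRQ_OFFSET;
            new_cond = (irq_count() == HARDIRQ_OFFSET);
            assert(!old_cond && !new_cond);

            printf("old and new conditions agree\n");
            return 0;
    }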

Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Tony Luck <tony.l...@intel.com>
Cc: Fenghua Yu <fenghua...@intel.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Heiko Carstens <h...@linux.ibm.com>
Cc: Vasily Gorbik <g...@linux.ibm.com>
Cc: Christian Borntraeger <borntrae...@de.ibm.com>
---
 kernel/softirq.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index b8f42b3ba8ca..d5bfd5e661fc 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -377,16 +377,12 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
  */
 void irq_enter_rcu(void)
 {
-       if (is_idle_task(current) && !in_interrupt()) {
-               /*
-                * Prevent raise_softirq from needlessly waking up ksoftirqd
-                * here, as softirq will be serviced on return from interrupt.
-                */
-               local_bh_disable();
+       __irq_enter_raw();
+
+       if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))
                tick_irq_enter();
-               _local_bh_enable();
-       }
-       __irq_enter();
+
+       account_hardirq_enter(current);
 }
 
 /**
-- 
2.25.1
