In order to consolidate and optimize generic softirq mask accesses, we first need to convert architectures to use per-cpu operations when possible.
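For reference, a sketch of why the two forms are equivalent, assuming
sparc64's cpudata definitions (where local_cpu_data() is itself built
on this_cpu_ptr()); this is illustrative context, not part of the patch:

	/* Assumed definitions, per arch/sparc/include/asm/cpudata_64.h: */
	DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
	#define local_cpu_data()	(*this_cpu_ptr(&__cpu_data))

	/* Old access: field load through the dereferenced per-cpu struct */
	local_cpu_data().__softirq_pending

	/* New access: same address, but the per-cpu operation is explicit,
	 * which lets generic code later switch the softirq mask over to
	 * this_cpu_read()/this_cpu_or() style accessors.
	 */
	(*this_cpu_ptr(&__cpu_data.__softirq_pending))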
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
---
 arch/sparc/include/asm/hardirq_64.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/sparc/include/asm/hardirq_64.h b/arch/sparc/include/asm/hardirq_64.h
index f565402..6aba904 100644
--- a/arch/sparc/include/asm/hardirq_64.h
+++ b/arch/sparc/include/asm/hardirq_64.h
@@ -11,7 +11,7 @@
 #define __ARCH_IRQ_STAT
 #define local_softirq_pending() \
-	(local_cpu_data().__softirq_pending)
+	(*this_cpu_ptr(&__cpu_data.__softirq_pending))
 
 void ack_bad_irq(unsigned int irq);
 
 #define HARDIRQ_BITS	8
-- 
2.7.4

