Commit 51d7d5205d338 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
added an smp_mb() to arch_spin_is_locked(), in order to ensure that

        Thread 0                        Thread 1

        spin_lock(A);                   spin_lock(B);
        r0 = spin_is_locked(B)          r1 = spin_is_locked(A);

never ends up with r0 = r1 = 0, and reported one example (in ipc/sem.c)
relying on such a guarantee.

It is, however, understood (though undocumented) that spin_is_locked()
is not required to provide such an ordering guarantee, a guarantee that
is currently _not_ provided by all implementations/archs; callers that
rely on such ordering should instead place suitable memory barriers
before acting on the result of spin_is_locked().
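
As a purely illustrative sketch (the function name and lock parameters
below are hypothetical, not taken from any in-tree caller), such a
caller would provide the full barrier itself, e.g.:

        static bool other_lock_held(spinlock_t *my_lock, spinlock_t *other_lock)
        {
                spin_lock(my_lock);
                /*
                 * Full barrier: order the store acquiring my_lock before
                 * the load of other_lock's state below;
                 * smp_mb__after_spinlock() would also be suitable here.
                 */
                smp_mb();
                return spin_is_locked(other_lock);
        }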

Following a recent audit[1] of the callers of {,raw_}spin_is_locked(),
which showed that none of them rely on this guarantee anymore, this
commit removes the leading smp_mb() from the primitive, thus effectively
reverting commit 51d7d5205d338.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2

Signed-off-by: Andrea Parri <andrea.pa...@amarulasolutions.com>
Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
---
 arch/powerpc/include/asm/spinlock.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index b9ebc3085fb79..ecc141e3f1a73 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -67,7 +67,6 @@ static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-       smp_mb();
        return !arch_spin_value_unlocked(*lock);
 }
 
-- 
2.7.4
