3.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Michael Ellerman <m...@ellerman.id.au>

commit 78e05b1421fa41ae8457701140933baa5e7d9479 upstream.

As with the previous commit, which described why we need to add a
barrier to arch_spin_is_locked(), we have the same problem with
spin_unlock_wait().

We need a barrier on entry to ensure any spinlock we have previously
taken is visibly locked prior to the load of lock->slock.
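
For illustration, here is a hypothetical interleaving showing the
problem (the locks A and B and the two-CPU split are invented for this
sketch, in the style of Documentation/memory-barriers.txt):

	CPU 0				CPU 1
	=====				=====
	spin_lock(&A);			spin_lock(&B);
	spin_unlock_wait(&B);		spin_unlock_wait(&A);

Without a full barrier on entry, the store that took A (or B) is not
ordered before the load of the other lock's ->slock, so both CPUs can
observe the other lock as unlocked and proceed concurrently.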

It's also not clear whether spin_unlock_wait() is intended to have
ACQUIRE semantics. For now, be conservative and add a barrier on exit
to give it ACQUIRE semantics.
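
As a sketch of why a caller might rely on ACQUIRE semantics (the
variable shared_data and this caller are hypothetical, not from the
upstream commit):

	spin_unlock_wait(&lock);
	/*
	 * Without the barrier on exit, this load could be reordered
	 * before the final load of lock->slock and return a value
	 * from before the lock holder's critical section completed.
	 */
	r1 = shared_data;

The smp_mb() on exit orders the load of shared_data after the load
that observed the lock free.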

Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 arch/powerpc/lib/locks.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -70,12 +70,16 @@ void __rw_yield(arch_rwlock_t *rw)
 
 void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
+       smp_mb();
+
        while (lock->slock) {
                HMT_low();
                if (SHARED_PROCESSOR)
                        __spin_yield(lock);
        }
        HMT_medium();
+
+       smp_mb();
 }
 
 EXPORT_SYMBOL(arch_spin_unlock_wait);

