For x86 qspinlocks, no additional memory barrier is required in
smp_mb__after_spin_lock():

Theoretically, for qspinlock we could define two barriers:

- smp_mb__after_spin_lock():
  Free for x86, not free for powerpc.

- smp_mb__between_spin_lock_and_spin_unlock_wait():
  Free for all archs, see queued_spin_unlock_wait() for details.

As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used in
any hot paths, the patch does not create that define yet.

Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
 arch/x86/include/asm/qspinlock.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index eaba080..04d26ed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -61,6 +61,17 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
 }
 #endif /* CONFIG_PARAVIRT */
 
+#ifndef smp_mb__after_spin_lock
+/**
+ * smp_mb__after_spin_lock() - Provide smp_mb() after spin_lock
+ *
+ * queued_spin_lock() provides full memory barrier semantics,
+ * thus no further memory barrier is required. See
+ * queued_spin_unlock_wait() for further details.
+ */
+#define smp_mb__after_spin_lock() do { } while (0)
+#endif
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_X86_QSPINLOCK_H */
-- 
2.5.5
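
For reference, a minimal caller-side sketch (not part of this patch):
it assumes the generic fallback for smp_mb__after_spin_lock() expands
to smp_mb() on architectures where spin_lock() is not a full barrier,
and the struct foo / check_flag() names are invented for illustration.
The define lets a caller upgrade spin_lock()'s acquire ordering to a
full barrier only on architectures that actually need it:

	/* Assumed generic fallback (not part of this patch): */
	#ifndef smp_mb__after_spin_lock
	#define smp_mb__after_spin_lock()	smp_mb()
	#endif

	struct foo {
		spinlock_t	lock;
		int		flag;		/* written by other CPUs */
	};

	static bool check_flag(struct foo *f)
	{
		bool ret;

		spin_lock(&f->lock);
		/*
		 * Order the store that acquired f->lock before the
		 * load of f->flag, so this pairs with a
		 * spin_unlock_wait()-style check on another CPU.
		 * Free with x86 qspinlocks, smp_mb() elsewhere.
		 */
		smp_mb__after_spin_lock();
		ret = READ_ONCE(f->flag);
		spin_unlock(&f->lock);
		return ret;
	}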