On Thu, 21 Jul 2016 19:54:55 +0200 Manfred Spraul <[email protected]> wrote:
> Next update:
> - switch to smp_store_mb() instead of WRITE_ONCE();smp_mb();
> - introduce SEM_GLOBAL_LOCK instead of magic -1.
> - do not use READ_ONCE() for the unlocked&unordered test:
>   READ_ONCE doesn't make sense for unlocked&unordered code.
> - document why smp_mb() is required after spin_lock().

I assume "ipc/sem.c: remove duplicated memory barriers" is still relevant?


From: Manfred Spraul <[email protected]>
Subject: ipc/sem.c: remove duplicated memory barriers

With 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more"),
memory barriers were added into spin_unlock_wait().  Thus another barrier
is not required.

And as explained in 055ce0fd1b8 ("locking/qspinlock: Add comments"),
spin_lock() provides a barrier so that reads within the critical section
cannot happen before the write for the lock is visible; i.e. spin_lock()
provides an acquire barrier after the write of the lock variable, and this
barrier pairs with the smp_mb() in complexmode_enter().

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

 ipc/sem.c |   16 ----------------
 1 file changed, 16 deletions(-)

diff -puN ipc/sem.c~ipc-semc-remove-duplicated-memory-barriers ipc/sem.c
--- a/ipc/sem.c~ipc-semc-remove-duplicated-memory-barriers
+++ a/ipc/sem.c
@@ -290,14 +290,6 @@ static void complexmode_enter(struct sem
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
-	/*
-	 * spin_unlock_wait() is not a memory barriers, it is only a
-	 * control barrier. The code must pair with spin_unlock(&sem->lock),
-	 * thus just the control barrier is insufficient.
-	 *
-	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
-	 */
-	smp_rmb();
 }
 
 /*
@@ -363,14 +355,6 @@ static inline int sem_lock(struct sem_ar
 		 */
 		spin_lock(&sem->lock);
 
-		/*
-		 * See 51d7d5205d33
-		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
-		 * A full barrier is required: the write of sem->lock
-		 * must be visible before the read is executed
-		 */
-		smp_mb();
-
 		if (!smp_load_acquire(&sma->complex_mode)) {
 			/* fast path successful! */
 			return sops->sem_num;
_
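
As an aside, for anyone following the barrier-pairing argument in the changelog, below is a minimal userspace sketch (not kernel code, and not part of the patch) of the pattern being relied on: the lock acquisition itself has acquire semantics, so nothing extra is needed between taking the per-semaphore lock and the acquire load of complex_mode.  C11 atomics and a pthread mutex stand in for the kernel primitives here, and all of the *_analogue names are made up purely for illustration.

/* barrier_pairing_demo.c -- build with: gcc -std=gnu11 -pthread barrier_pairing_demo.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the kernel objects: one per-semaphore lock and the
 * array-wide complex_mode flag. */
static pthread_mutex_t sem_lock_analogue = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool complex_mode_analogue;

/* Analogue of complexmode_enter(): publish complex mode with full
 * ordering (the kernel side uses smp_store_mb()); the real code then
 * waits for every per-semaphore lock to be released. */
static void complexmode_enter_analogue(void)
{
	atomic_store_explicit(&complex_mode_analogue, true,
			      memory_order_seq_cst);
	/* ... spin_unlock_wait() on each per-semaphore lock ... */
}

/* Analogue of the sem_lock() fast path: pthread_mutex_lock() already
 * has acquire semantics, so no explicit barrier sits between it and
 * the acquire load of the flag -- the same reasoning that lets the
 * patch drop the smp_mb(). */
static bool sem_lock_fastpath_analogue(void)
{
	pthread_mutex_lock(&sem_lock_analogue);		/* acquire */
	if (!atomic_load_explicit(&complex_mode_analogue,
				  memory_order_acquire)) {
		return true;	/* fast path: fine-grained lock suffices */
	}
	pthread_mutex_unlock(&sem_lock_analogue);	/* must fall back */
	return false;
}

int main(void)
{
	atomic_init(&complex_mode_analogue, false);

	if (sem_lock_fastpath_analogue()) {
		printf("fast path taken\n");
		pthread_mutex_unlock(&sem_lock_analogue);
	}

	complexmode_enter_analogue();
	if (!sem_lock_fastpath_analogue())
		printf("complex mode set, fast path refused\n");

	return 0;
}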

