The comment for the barrier that guarantees the waiter increment is
always visible before taking the hb spinlock (barrier (A)) is attached
to the wrong operation: the full barrier is implied by hb_waiters_inc(),
not by spin_lock(). Move the comment to hb_waiters_inc() accordingly.

Reported-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
---
 kernel/futex.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index fdd312da0992..5ec2473a3497 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2221,11 +2221,11 @@ static inline struct futex_hash_bucket *queue_lock(struct futex_q *q)
         * decrement the counter at queue_unlock() when some error has
         * occurred and we don't end up adding the task to the list.
         */
-       hb_waiters_inc(hb);
+       hb_waiters_inc(hb); /* implies smp_mb(); (A) */
 
        q->lock_ptr = &hb->lock;
 
-       spin_lock(&hb->lock); /* implies smp_mb(); (A) */
+       spin_lock(&hb->lock);
        return hb;
 }
 
-- 
2.16.4
