4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

commit 53bf57fab7321fb42b703056a4c80fc9d986d170 upstream.

Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL)
such that we lose the indent. This also results in a more natural code
flow IMO.
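
For illustration only, here is a minimal userspace sketch of the branch flip
(not the kernel code; the helper names, the LOCKED/PENDING masks and the
simplified 'val' handling below are made up for this example):

	/* build with: cc -std=c11 -Wall flip.c */
	#include <stdbool.h>
	#include <stdio.h>

	#define LOCKED_MASK	0x1
	#define PENDING_MASK	0x2

	/* Before: the common (uncontended) path sits inside the conditional. */
	static bool pending_path_before(unsigned int val)
	{
		if (!(val & ~LOCKED_MASK)) {
			/* ... wait for the owner, then take the lock ... */
			return true;
		}
		/* undo pending and queue */
		return false;
	}

	/* After: flip the condition and bail out early on contention, so the
	 * common path loses one level of indentation. */
	static bool pending_path_after(unsigned int val)
	{
		if (val & ~LOCKED_MASK) {
			/* undo pending and queue */
			return false;
		}

		/* ... wait for the owner, then take the lock ... */
		return true;
	}

	int main(void)
	{
		printf("uncontended: %d %d\n",
		       pending_path_before(LOCKED_MASK), pending_path_after(LOCKED_MASK));
		printf("contended:   %d %d\n",
		       pending_path_before(PENDING_MASK), pending_path_after(PENDING_MASK));
		return 0;
	}

Both variants return the same result for every input; only the control-flow
shape changes, which is the point of the patch below.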

Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Will Deacon <will.dea...@arm.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: andrea.pa...@amarulasolutions.com
Cc: long...@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322...@infradead.org
Signed-off-by: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/locking/qspinlock.c | 54 ++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ba5dc86a4d83..f493a4fce624 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -440,38 +440,36 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
         * 0,0,1 -> 0,1,1 ; pending
         */
        val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
-       if (!(val & ~_Q_LOCKED_MASK)) {
-               /*
-                * We're pending, wait for the owner to go away.
-                *
-                * *,1,1 -> *,1,0
-                *
-                * this wait loop must be a load-acquire such that we match the
-                * store-release that clears the locked bit and create lock
-                * sequentiality; this is because not all
-                * clear_pending_set_locked() implementations imply full
-                * barriers.
-                */
-               if (val & _Q_LOCKED_MASK) {
-                       smp_cond_load_acquire(&lock->val.counter,
-                                             !(VAL & _Q_LOCKED_MASK));
-               }
-
-               /*
-                * take ownership and clear the pending bit.
-                *
-                * *,1,0 -> *,0,1
-                */
-               clear_pending_set_locked(lock);
-               return;
+       /*
+        * If we observe any contention; undo and queue.
+        */
+       if (unlikely(val & ~_Q_LOCKED_MASK)) {
+               if (!(val & _Q_PENDING_MASK))
+                       clear_pending(lock);
+               goto queue;
        }
 
        /*
-        * If pending was clear but there are waiters in the queue, then
-        * we need to undo our setting of pending before we queue ourselves.
+        * We're pending, wait for the owner to go away.
+        *
+        * 0,1,1 -> 0,1,0
+        *
+        * this wait loop must be a load-acquire such that we match the
+        * store-release that clears the locked bit and create lock
+        * sequentiality; this is because not all
+        * clear_pending_set_locked() implementations imply full
+        * barriers.
+        */
+       if (val & _Q_LOCKED_MASK)
+               smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
+
+       /*
+        * take ownership and clear the pending bit.
+        *
+        * 0,1,0 -> 0,0,1
         */
-       if (!(val & _Q_PENDING_MASK))
-               clear_pending(lock);
+       clear_pending_set_locked(lock);
+       return;
 
        /*
         * End of pending bit optimistic spinning and beginning of MCS
-- 
2.19.1


