[tip:locking/core] locking/qspinlock: Use smp_cond_acquire() in pending code

2016-02-29 Thread tip-bot for Waiman Long
Commit-ID:  cb037fdad6772df2d49fe61c97d7c0d8265bc918
Gitweb: http://git.kernel.org/tip/cb037fdad6772df2d49fe61c97d7c0d8265bc918
Author: Waiman Long 
AuthorDate: Thu, 10 Dec 2015 15:17:44 -0500
Committer:  Ingo Molnar 
CommitDate: Mon, 29 Feb 2016 10:02:42 +0100

locking/qspinlock: Use smp_cond_acquire() in pending code

The newly introduced smp_cond_acquire() was used to replace the
slowpath lock acquisition loop. Similarly, the new function can also
be applied to the pending bit locking loop. This patch uses the new
function in that loop.
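
For reference, smp_cond_acquire() was introduced shortly before this patch
(in include/linux/compiler.h) as, roughly, a relaxed polling loop followed by
a read barrier; the control dependency plus smp_rmb() together supply the
acquire semantics. A sketch of that definition (the exact form in the tree
may differ slightly):

	#define smp_cond_acquire(cond)	do {		\
		while (!(cond))				\
			cpu_relax();			\
		smp_rmb(); /* ctrl + rmb := acquire */	\
	} while (0)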

Signed-off-by: Waiman Long 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Andrew Morton 
Cc: Douglas Hatch 
Cc: Linus Torvalds 
Cc: Paul E. McKenney 
Cc: Peter Zijlstra 
Cc: Scott J Norton 
Cc: Thomas Gleixner 
Link: http://lkml.kernel.org/r/1449778666-13593-1-git-send-email-waiman.l...@hpe.com
Signed-off-by: Ingo Molnar 
---
 kernel/locking/qspinlock.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 393d187..ce2f75e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -358,8 +358,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 * sequentiality; this is because not all clear_pending_set_locked()
 * implementations imply full barriers.
 */
-	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
-		cpu_relax();
+	smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_MASK));
 
/*
 * take ownership and clear the pending bit.
@@ -435,7 +434,7 @@ queue:
 *
 * The PV pv_wait_head_or_lock function, if active, will acquire
 * the lock and return a non-zero value. So we have to skip the
-* smp_load_acquire() call. As the next PV queue head hasn't been
+* smp_cond_acquire() call. As the next PV queue head hasn't been
 * designated yet, there is no way for the locked value to become
 * _Q_SLOW_VAL. So both the set_locked() and the
 * atomic_cmpxchg_relaxed() calls will be safe.
@@ -466,7 +465,7 @@ locked:
break;
}
/*
-* The smp_load_acquire() call above has provided the necessary
+* The smp_cond_acquire() call above has provided the necessary
 * acquire semantics required for locking. At most two
 * iterations of this loop may be ran.
 */
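
As a minimal, self-contained sketch of the transformation (plain C11 atomics,
not kernel code; the _Q_LOCKED_MASK value and function names here are for
illustration only), the change amounts to replacing a spin on a load-acquire
with relaxed polling followed by a single acquire fence once the condition
holds:

	#include <stdatomic.h>

	#define _Q_LOCKED_MASK	0xff	/* assumed value, illustration only */

	/* Old pattern: every poll is a load-acquire. */
	static void wait_for_unlock_old(atomic_uint *val)
	{
		while (atomic_load_explicit(val, memory_order_acquire) &
		       _Q_LOCKED_MASK)
			;	/* cpu_relax() omitted in this sketch */
	}

	/* New pattern: relaxed polling, one acquire fence on exit. */
	static void wait_for_unlock_new(atomic_uint *val)
	{
		while (atomic_load_explicit(val, memory_order_relaxed) &
		       _Q_LOCKED_MASK)
			;
		atomic_thread_fence(memory_order_acquire);
	}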

