On Wed, Jun 15, 2016 at 09:56:59AM -0700, Davidlohr Bueso wrote:
> On Tue, 14 Jun 2016, Waiman Long wrote:

> >+++ b/kernel/locking/osq_lock.c
> >@@ -115,7 +115,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> >      * cmpxchg in an attempt to undo our queueing.
> >      */
> >
> >-    while (!READ_ONCE(node->locked)) {
> >+    while (!smp_load_acquire(&node->locked)) {
> 
> Hmm, this being a polling path, that barrier can get pretty expensive,
> and last I checked it was unnecessary.

I think he'll come to rely on it later on.

In any case, it's fairly simple to cure: just add
smp_acquire__after_ctrl_dep() at the end of the loop. If we bail because
of need_resched() we don't need the acquire, I think.
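For illustration, a minimal sketch of that against the spin loop quoted
above (hypothetical, assuming the rest of osq_lock() is unchanged; this
is not a tested patch):

	while (!READ_ONCE(node->locked)) {
		/*
		 * If we need to reschedule, bail and go unqueue;
		 * no ACQUIRE is needed on this path.
		 */
		if (need_resched())
			goto unqueue;

		cpu_relax_lowlatency();
	}

	/*
	 * Upgrade the control dependency from the READ_ONCE() loop
	 * exit to ACQUIRE ordering, on the success path only.
	 */
	smp_acquire__after_ctrl_dep();
	return true;

That keeps the cheap plain load in the polling loop and pays for the
ordering once, after we have actually observed node->locked set.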
