On Mon, Oct 01, 2018 at 06:17:08PM +0100, Will Deacon wrote:
> On Wed, Sep 26, 2018 at 01:01:19PM +0200, Peter Zijlstra wrote:
> > +
> >     /*
> > -    * If we observe any contention; undo and queue.
> > +    * If we observe contention, there was a concurrent lock.
> 
> Nit: I think "concurrent lock" is confusing here, because that implies to
> me that the lock was actually taken behind our back, which isn't necessarily
> the case. How about "there is a concurrent locker"?

Yes, that's better.

> > +    *
> > +    * Undo and queue; our setting of PENDING might have made the
> > +    * n,0,0 -> 0,0,0 transition fail and it will now be waiting
> > +    * on @next to become !NULL.
> >      */
> 
> Hmm, but it could also fail another concurrent set of PENDING (and the lock
> could just be held the entire time).

Right. What I wanted to convey was that if we observe _any_ contention,
we must abort and queue, because the transition above can fail and the
concurrent locker will then be waiting on @next.

The other cases weren't as critical, but that one really does require us
to queue in order to make forward progress.
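
Roughly, the head-of-queue side looks like this (paraphrasing
kernel/locking/qspinlock.c, with the PV bits and node handling elided):

	/*
	 * We're at the head of the waitqueue; wait for owner and
	 * pending to go away:  *,x,y -> *,0,0
	 */
	val = atomic_cond_read_acquire(&lock->val,
				       !(VAL & _Q_LOCKED_PENDING_MASK));

	/* n,0,0 -> 0,0,1; fails if someone set PENDING in the meantime */
	if ((val & _Q_TAIL_MASK) == tail) {
		if (atomic_try_cmpxchg_relaxed(&lock->val, &val,
					       _Q_LOCKED_VAL))
			goto release;	/* no contention */
	}

	/*
	 * Take the lock with a byte store and hand off the MCS lock;
	 * if whoever made that cmpxchg fail never queues, @next stays
	 * NULL and we wait here forever.
	 */
	set_locked(lock);

	if (!next)
		next = smp_cond_load_relaxed(&node->next, (VAL));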

Or did I misunderstand your concern?

> >     if (unlikely(val & ~_Q_LOCKED_MASK)) {
> > +
> > +           /* Undo PENDING if we set it. */
> >             if (!(val & _Q_PENDING_MASK))
> >                     clear_pending(lock);
> > +
> >             goto queue;
> >     }
> >  
> > @@ -466,7 +473,7 @@ void queued_spin_lock_slowpath(struct qs
> >      * claim the lock:
> >      *
> >      * n,0,0 -> 0,0,1 : lock, uncontended
> > -    * *,*,0 -> *,*,1 : lock, contended
> > +    * *,0,0 -> *,0,1 : lock, contended
> 
> Pending can be set behind our back in the contended case, in which case
> we take the lock with a single byte store and don't clear pending. You
> mention this in the updated comment below, but I think we should leave this
> comment alone.

Ah, so the reason I wrote it like this is that when we get here,
val.locked_pending == 0, per the atomic_cond_read_acquire() condition.

