On Mon, Oct 01, 2018 at 10:00:28PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 01, 2018 at 06:17:00PM +0100, Will Deacon wrote:
> > Thanks for chewing up my afternoon ;)
>
> I'll get you a beer in EDI ;-)

Just one?!

> > But actually, consider this scenario with your patch:
> >
> > 1. CPU0 sees a locked val, and is about to do your xchg_relaxed() to set
> >    pending.
> >
> > 2. CPU1 comes in and sets pending, spins on locked
> >
> > 3. CPU2 sees a pending and locked val, and is about to enter the head of
> >    the waitqueue (i.e. it's right before xchg_tail()).
> >
> > 4. The locked holder unlock()s, CPU1 takes the lock() and then unlock()s
> >    it, so pending and locked are now 0.
> >
> > 5. CPU0 sets pending and reads back zeroes for the other fields
> >
> > 6. CPU0 clears pending and sets locked -- it now has the lock
> >
> > 7. CPU2 updates tail, sees it's at the head of the waitqueue and spins
> >    for locked and pending to go clear. However, it reads a stale value
> >    from step (4) and attempts the atomic_try_cmpxchg() to take the lock.
> >
> > 8. CPU2 will fail the cmpxchg(), but then go ahead and set locked. At this
> >    point we're hosed, because both CPU2 and CPU0 have the lock.
>
> Let me draw a picture of that..
>
>    CPU0              CPU1              CPU2              CPU3
>
> 0)                                                      lock
>                                                           trylock -> (0,0,1)
> 1) lock
>      trylock /* fail */
>
> 2)                 lock
>                      trylock /* fail */
>                      tas-pending -> (0,1,1)
>                      wait-locked
>
> 3)                                   lock
>                                        trylock /* fail */
>                                        tas-pending /* fail */
>
> 4)                                                      unlock -> (0,1,0)
>                      clr_pnd_set_lck -> (0,0,1)
>                      unlock -> (0,0,0)
>
> 5)   tas-pending -> (0,1,0)
>      read-val -> (0,1,0)
> 6)   clr_pnd_set_lck -> (0,0,1)
> 7)                                     xchg_tail -> (n,0,1)
>                                        load_acquire <- (n,0,0) (from-4)
> 8)                                     cmpxchg /* fail */
>                                        set_locked()
>
> > Is there something I'm missing that means this can't happen? I suppose
> > cacheline granularity ends up giving serialisation between (4) and (7),
> > but I'd *much* prefer not to rely on that because it feels horribly
> > fragile.
>
> Well, on x86 atomics are fully ordered, so the xchg_tail() does in
> fact have smp_mb() in and that should order it sufficiently for that
> not to happen, I think.

Hmm, does that actually help, though? I still think you're relying on the
cache-coherence protocol to serialise the xchg() on pending before the
xchg_tail(), which I think is fragile because they don't actually overlap.
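To make the non-overlap concrete, CPU0's path with the patch is roughly
the sketch below (pending_then_reload() is a name I've invented for
illustration, not the actual patch):

	/*
	 * Sketch of CPU0's split pending-set, assuming the usual
	 * qspinlock helpers; lock word = (tail, pending, locked).
	 */
	static void pending_then_reload(struct qspinlock *lock)
	{
		u32 val;

		/*
		 * Step (5), first half: byte-sized RmW on the pending
		 * byte only; it cannot return tail and locked ...
		 */
		xchg_relaxed(&lock->pending, 1);

		/*
		 * ... so a second, non-overlapping access picks up the
		 * rest of the word. CPU2's xchg_tail() can hit the lock
		 * word anywhere around here, and only cache coherence
		 * relates these two RmWs on different CPUs.
		 */
		val = atomic_read(&lock->val);

		/* Step (6): tail and locked read back zero; grab the lock. */
		if (!(val & ~_Q_PENDING_MASK))
			clear_pending_set_locked(lock);
	}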
> But in general, yes ick. Alternatively, making xchg_tail an ACQUIRE
> doesn't seem too far out..
>
> > Another idea I was playing with was adding test_and_set_bit_acquire()
> > for this, because x86 has an instruction for that, right?
>
> LOCK BTS, yes. So it can do a full 32bit RmW, but it cannot return the
> old value of the word, just the old bit (in CF).
>
> I suppose you get rid of the whole mixed size thing, but you still have
> the whole two instruction thing.

I really think we need that set of pending to operate on the whole lock
word.
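For the record, the sort of whole-word pending-set I mean could be built
on LOCK BTS by synthesising the old value: the old pending bit comes back
in CF, and a subsequent load supplies tail/locked, with x86's
fully-ordered RmW stopping that load from being satisfied from before the
bit was set. Completely untested sketch, invented name:

	static __always_inline u32 fetch_set_pending_acquire(struct qspinlock *lock)
	{
		bool old;
		u32 val;

		/* LOCK BTS on the whole 32-bit word; old pending bit lands in CF. */
		asm volatile(LOCK_PREFIX "btsl %2, %0"
			     : "+m" (lock->val.counter), "=@ccc" (old)
			     : "Ir" (_Q_PENDING_OFFSET)
			     : "memory");

		val = old ? _Q_PENDING_VAL : 0;

		/*
		 * The LOCK'd RmW above is fully ordered on x86, so this
		 * load cannot read from before the pending bit was set.
		 */
		val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

		return val;
	}

It's still two instructions, but both operate on (and are ordered
against) the full lock word.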
> > > +	/*
> > > +	 * Ensures the tail load happens after the xchg().
> > > +	 *
> > > +	 *         lock  unlock    (a)
> > > +	 *   xchg ---------------.
> > > +	 *    (b)  lock  unlock  +----- fetch_or
> > > +	 *   load ---------------'
> > > +	 *         lock  unlock    (c)
> > > +	 *
> >
> > I failed miserably at parsing this comment :(
> >
> > I think the main issue is that I don't understand how to read the little
> > diagram you've got.
>
> Where fetch_or() is indivisible and has happens-before (a) and
> happens-after (c), the new thing is in fact divisible and has
> happens-in-between (b).
>
> Of the happens-in-between (b), we can either get a new concurrent
> locker, or make progress of an extant concurrent locker because an
> unlock happened.
>
> But the rest of the text might indeed be very confused. I think I wrote
> the bulk of that when I was in fact doing a xchg16 on locked_pending,
> but that's fundamentally broken. I did edit it afterwards, but that
> might have just made it worse.

Ok, maybe just remove it :)

Will