On Sat, 2014-01-11 at 01:52 -0800, Paul E. McKenney wrote:
[...]
> On Sat, Jan 11, 2014 at 01:49:12AM -0800, Paul E. McKenney wrote:
> > On Thu, Jan 02, 2014 at 07:05:20AM -0800, Davidlohr Bueso wrote:
> > > -	spin_lock(&hb->lock);
> > > +	spin_lock(&hb->lock); /* implies MB (A) */
> >
> > You need smp_mb__before_spinlock() before the spin_lock() to get a
> > full memory barrier.
Hmmm, the thing we need to guarantee here is that the ticket increment
is visible (which is the same as the smp_mb__after_atomic_inc() we used
to have in the original atomic-counter approach), so adding a barrier
before the spin_lock() call wouldn't serve that purpose. I previously
discussed this with Linus, and we can rely on the fact that spin_lock()
already updates the ticket head counter, so spinners are visible even
if the lock hasn't been acquired yet.

> Actually, even that only gets you smp_mb().

I guess you mean smp_wmb() here.

> Unless you are ordering a prior write against a later write here, you
> will need an smp_mb().

Yep.

Thanks for looking into this,
Davidlohr
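For reference, a rough sketch of the barrier pairing under discussion (illustrative pseudocode only, not the actual patch; the helper names and `new_val` are placeholders, and the comments reflect the ticket-spinlock behavior described above):

```c
/* CPU 0 -- waiter (futex_wait path) */
spin_lock(&hb->lock);     /* MB (A): on a ticket lock, the lock op
                             atomically bumps the ticket head, so this
                             waiter is visible to wakers even before
                             the lock is actually acquired */
uval = *uaddr;            /* re-read the futex word under the lock */

/* CPU 1 -- waker (futex_wake path) */
*uaddr = new_val;         /* userspace store releasing the futex */
smp_mb();                 /* MB (B): order the store above against the
                             waiter-visibility check below */
if (!spin_is_locked(&hb->lock))
	return;           /* no waiters queued -- skip the wakeup */
```

The point of the thread is that MB (A) comes for free from the ticket increment inside spin_lock() itself, so no extra barrier before the spin_lock() call is needed on the waiter side.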