On Mon, Dec 09, 2013 at 05:28:01PM -0800, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <[email protected]>
> 
> Historically, an UNLOCK+LOCK pair executed by one CPU, by one task,
> or on a given lock variable has implied a full memory barrier.  In a
> recent LKML thread, the wisdom of this historical approach was called
> into question: http://www.spinics.net/lists/linux-mm/msg65653.html,
> in part due to the memory-order complexities of low-handoff-overhead
> queued locks on x86 systems.
> 
> This patch therefore removes this guarantee from the documentation, and
> further documents how to restore it via a new smp_mb__after_unlock_lock()
> primitive.
> 
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Oleg Nesterov <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Tim Chen <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Waiman Long <[email protected]>
> Cc: Andrea Arcangeli <[email protected]>
> Cc: Andi Kleen <[email protected]>
> Cc: Michel Lespinasse <[email protected]>
> Cc: Davidlohr Bueso <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Peter Hurley <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: Arnd Bergmann <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> ---
>  Documentation/memory-barriers.txt | 51 +++++++++++++++++++++++++++++++++------
>  1 file changed, 44 insertions(+), 7 deletions(-)
> 
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index a0763db314ff..efb791d33e5a 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1626,7 +1626,10 @@ for each construct.  These operations all imply certain barriers:
>       operation has completed.
> 
>       Memory operations issued before the LOCK may be completed after the LOCK
> -     operation has completed.
> +     operation has completed.  An smp_mb__before_spinlock(), combined
> +     with a following LOCK, acts as an smp_wmb().  Note the "w",
> +     this is smp_wmb(), not smp_mb().  The smp_mb__before_spinlock()
> +     primitive is free on many architectures.
Gah.  That seems highly error-prone; why isn't that
"smp_wmb__before_spinlock()"?

- Josh Triplett

