On Mon, Dec 09, 2013 at 05:28:01PM -0800, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
> 
> Historically, an UNLOCK+LOCK pair executed by one CPU, by one task,
> or on a given lock variable has implied a full memory barrier.  In a
> recent LKML thread, the wisdom of this historical approach was called
> into question: http://www.spinics.net/lists/linux-mm/msg65653.html,
> in part due to the memory-order complexities of low-handoff-overhead
> queued locks on x86 systems.
> 
> This patch therefore removes this guarantee from the documentation, and
> further documents how to restore it via a new smp_mb__after_unlock_lock()
> primitive.
> 
> Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Oleg Nesterov <o...@redhat.com>
> Cc: Linus Torvalds <torva...@linux-foundation.org>
> Cc: Will Deacon <will.dea...@arm.com>
> Cc: Tim Chen <tim.c.c...@linux.intel.com>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: Waiman Long <waiman.l...@hp.com>
> Cc: Andrea Arcangeli <aarca...@redhat.com>
> Cc: Andi Kleen <a...@firstfloor.org>
> Cc: Michel Lespinasse <wal...@google.com>
> Cc: Davidlohr Bueso <davidlohr.bu...@hp.com>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: Peter Hurley <pe...@hurleysoftware.com>
> Cc: "H. Peter Anvin" <h...@zytor.com>
> Cc: Arnd Bergmann <a...@arndb.de>
> Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> ---
>  Documentation/memory-barriers.txt | 51 +++++++++++++++++++++++++++++++++------
>  1 file changed, 44 insertions(+), 7 deletions(-)
> 
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index a0763db314ff..efb791d33e5a 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1626,7 +1626,10 @@ for each construct.  These operations all imply certain barriers:
>       operation has completed.
>  
>       Memory operations issued before the LOCK may be completed after the LOCK
> -     operation has completed.
> +     operation has completed.  An smp_mb__before_spinlock(), combined
> +     with a following LOCK, acts as an smp_wmb().  Note the "w",
> +     this is smp_wmb(), not smp_mb().  The smp_mb__before_spinlock()
> +     primitive is free on many architectures.
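
As a rough sketch of the usage the patch describes (not part of the
patch itself; the locks and variables below are made up, only
smp_mb__after_unlock_lock() comes from the patch), the new primitive
is placed immediately after the LOCK to make the UNLOCK+LOCK pair act
as a full memory barrier again:

	/*
	 * Sketch only: lck1, lck2, x, and y are illustrative names.
	 * Without the barrier, the UNLOCK of lck1 followed by the LOCK
	 * of lck2 is no longer guaranteed to order the store to x
	 * against the store to y the way a full barrier would.
	 */
	static DEFINE_SPINLOCK(lck1);
	static DEFINE_SPINLOCK(lck2);
	static int x, y;

	void unlock_lock_example(void)
	{
		spin_lock(&lck1);
		x = 1;
		spin_unlock(&lck1);		/* UNLOCK */

		spin_lock(&lck2);		/* LOCK (different lock variable) */
		smp_mb__after_unlock_lock();	/* restores full-barrier semantics */
		y = 1;
		spin_unlock(&lck2);
	}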
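
The hunk quoted just above can be illustrated along the same lines
(again a sketch only; data, flag, and lck are made-up names): the
smp_mb__before_spinlock()+LOCK combination orders the store to data
before the store to flag, but being only smp_wmb(), it says nothing
about loads:

	static DEFINE_SPINLOCK(lck);
	static int data, flag;

	void store_then_lock(void)
	{
		data = 42;			/* store issued before the LOCK */
		smp_mb__before_spinlock();	/* with the LOCK, acts as smp_wmb() */
		spin_lock(&lck);
		flag = 1;			/* store issued after the LOCK */
		spin_unlock(&lck);
	}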

Gah.  That seems highly error-prone; why isn't that
"smp_wmb__before_spinlock()"?

- Josh Triplett