On Tue, 20 Feb 2018, Andrea Parri wrote:

> > This leaves us with a question: Do we want to change the kernel by
> > adding memory barriers after unsuccessful RMW operations on Alpha, or
> > do we want to change the model by excluding such operations from
> > address dependencies?
> 
> I'd like to continue to treat R[once] and R*[once] equally if possible.
> Given the (unconditional) smp_read_barrier_depends in READ_ONCE and in
> atomics, it seems reasonable to have it unconditionally in cmpxchg.
> 
> As with the following patch?

Yes, this seems reasonable to me.  If Will gives it his "Acked-by"  
to go with Peter's, you should submit it to Ingo Molnar.

And once this change goes in, there shouldn't be any trouble with the proposed 
patch for the memory model.
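
For reference, the kind of pattern at stake is something like the
following (just a sketch with made-up variables, not a real litmus
test):

	int x;
	int dummy;
	int *p = &dummy;			/* shared pointer */

	/* CPU 0 */
	WRITE_ONCE(x, 1);
	smp_store_release(&p, &x);		/* publish &x */

	/* CPU 1 */
	int *r0 = cmpxchg(&p, &dummy, &dummy);	/* can fail and return &x */
	int r1 = READ_ONCE(*r0);		/* address dependency on r0 */

If the cmpxchg fails because CPU 1 already sees p == &x, the old code
skips the mb on the failure path, so Alpha is allowed to return
r1 == 0 even though r0 == &x.  With the barrier moved after the "2:"
label, the dependency is honored on both the success and the failure
path, matching what READ_ONCE already guarantees.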

Alan

>   Andrea
> 
> ---
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index 68dfb3cb71454..e2660866ce972 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
>   * store NEW in MEM.  Return the initial value in MEM.  Success is
>   * indicated by comparing RETURN with OLD.
>   *
> - * The memory barrier should be placed in SMP only when we actually
> - * make the change. If we don't change anything (so if the returned
> - * prev is equal to old) then we aren't acquiring anything new and
> - * we don't need any memory barrier as far I can tell.
> + * The memory barrier is placed in SMP unconditionally, in order to
> + * guarantee that dependency ordering is preserved when a dependency
> + * is headed by an unsuccessful operation.
>   */
>  
>  static inline unsigned long
> @@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
>       "       or      %1,%2,%2\n"
>       "       stq_c   %2,0(%4)\n"
>       "       beq     %2,3f\n"
> -             __ASM__MB
>       "2:\n"
> +             __ASM__MB
>       ".subsection 2\n"
>       "3:     br      1b\n"
>       ".previous"
> @@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
>       "       or      %1,%2,%2\n"
>       "       stq_c   %2,0(%4)\n"
>       "       beq     %2,3f\n"
> -             __ASM__MB
>       "2:\n"
> +             __ASM__MB
>       ".subsection 2\n"
>       "3:     br      1b\n"
>       ".previous"
> @@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
>       "       mov %4,%1\n"
>       "       stl_c %1,%2\n"
>       "       beq %1,3f\n"
> -             __ASM__MB
>       "2:\n"
> +             __ASM__MB
>       ".subsection 2\n"
>       "3:     br 1b\n"
>       ".previous"
> @@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
>       "       mov %4,%1\n"
>       "       stq_c %1,%2\n"
>       "       beq %1,3f\n"
> -             __ASM__MB
>       "2:\n"
> +             __ASM__MB
>       ".subsection 2\n"
>       "3:     br 1b\n"
>       ".previous"
> 
> 
> > 
> > Note that operations like atomic_add_unless() already include memory 
> > barriers.
> > 
> > Alan
