On Tue, Jun 09, 2015 at 05:18:18PM +0530, Vineet Gupta wrote:

Please try and provide at least _some_ Changelog body.

<snip all atomic ops that return values>

> diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
> index b6a8c2dfbe6e..8af8eaad4999 100644
> --- a/arch/arc/include/asm/spinlock.h
> +++ b/arch/arc/include/asm/spinlock.h
> @@ -22,24 +22,32 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
>  {
>       unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
>  
> +     smp_mb();
> +
>       __asm__ __volatile__(
>       "1:     ex  %0, [%1]            \n"
>       "       breq  %0, %2, 1b        \n"
>       : "+&r" (tmp)
>       : "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
>       : "memory");
> +
> +     smp_mb();
>  }
>  
>  static inline int arch_spin_trylock(arch_spinlock_t *lock)
>  {
>       unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
>  
> +     smp_mb();
> +
>       __asm__ __volatile__(
>       "1:     ex  %0, [%1]            \n"
>       : "+r" (tmp)
>       : "r"(&(lock->slock))
>       : "memory");
>  
> +     smp_mb();
> +
>       return (tmp == __ARCH_SPIN_LOCK_UNLOCKED__);
>  }
>  

Both of these are only required to provide an ACQUIRE barrier; if all you
have is smp_mb(), the second one (after the exchange) is sufficient.

Also note that a failed trylock is not required to provide _any_ barrier
at all.
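
To make that concrete, here is a rough sketch (not the patch itself,
just the quoted code with the barriers placed as suggested, assuming
only smp_mb() is available):

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;

	__asm__ __volatile__(
	"1:	ex  %0, [%1]		\n"
	"	breq  %0, %2, 1b	\n"
	: "+&r" (tmp)
	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
	: "memory");

	smp_mb();	/* ACQUIRE: later accesses must not be hoisted above the lock */
}

static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;

	__asm__ __volatile__(
	"1:	ex  %0, [%1]		\n"
	: "+r" (tmp)
	: "r"(&(lock->slock))
	: "memory");

	if (tmp != __ARCH_SPIN_LOCK_UNLOCKED__)
		return 0;	/* failed trylock: no barrier required */

	smp_mb();	/* ACQUIRE on success only */
	return 1;
}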

> @@ -47,6 +55,8 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
>       unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
>  
> +     smp_mb();
> +
>       __asm__ __volatile__(
>       "       ex  %0, [%1]            \n"
>       : "+r" (tmp)

This one requires a RELEASE barrier; again, if all you have is smp_mb(),
this placement (before the store) is indeed correct.
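
That is, the RELEASE side only needs the barrier before the store that
drops the lock, and nothing after it -- sketched under the same
assumption that smp_mb() is the only barrier available:

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;

	smp_mb();	/* RELEASE: earlier accesses must not sink below the unlock */

	__asm__ __volatile__(
	"	ex  %0, [%1]		\n"
	: "+r" (tmp)
	: "r"(&(lock->slock))
	: "memory");
}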

Describing some of this would make for a fine Changelog body :-)