Re: [PATCH] ARCv2: spinlock: remove the extra smp_mb before lock, after unlock

2019-03-08 Thread Vineet Gupta
On 3/8/19 12:37 AM, Peter Zijlstra wrote:
> I'm thinking those assignments should be WRITE_ONCE() at the very least.

Done!

Re: [PATCH] ARCv2: spinlock: remove the extra smp_mb before lock, after unlock

2019-03-08 Thread Peter Zijlstra
On Thu, Mar 07, 2019 at 05:35:46PM -0800, Vineet Gupta wrote:
> - ARCv2 LLSC based spinlocks smp_mb() both before and after the LLSC
>   instructions, which is not required per lkmm ACQ/REL semantics.
>   smp_mb() is only needed _after_ lock and _before_ unlock.
>   So remove the extra barriers

Re: [PATCH] ARCv2: spinlock: remove the extra smp_mb before lock, after unlock

2019-03-08 Thread Peter Zijlstra
On Thu, Mar 07, 2019 at 05:35:46PM -0800, Vineet Gupta wrote:
> @@ -68,8 +72,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  	smp_mb();
>
>  	lock->slock = __ARCH_SPIN_LOCK_UNLOCKED__;
> -
> -	smp_mb();
>  }
>
>  /*
> @@ -226,8 +218,6 @@ static inline void arch_w

Re: [PATCH] ARCv2: spinlock: remove the extra smp_mb before lock, after unlock

2019-03-08 Thread Peter Zijlstra
On Thu, Mar 07, 2019 at 05:35:46PM -0800, Vineet Gupta wrote:
> - ARCv2 LLSC based spinlocks smp_mb() both before and after the LLSC
>   instructions, which is not required per lkmm ACQ/REL semantics.
>   smp_mb() is only needed _after_ lock and _before_ unlock.
>   So remove the extra barrier

[PATCH] ARCv2: spinlock: remove the extra smp_mb before lock, after unlock

2019-03-07 Thread Vineet Gupta
- ARCv2 LLSC based spinlocks use smp_mb() both before and after the LLSC
  instructions, which is not required per lkmm ACQ/REL semantics.
  smp_mb() is only needed _after_ lock and _before_ unlock.
  So remove the extra barriers.

The reason they were there was mainly historical. At the time of