On 3/8/19 12:37 AM, Peter Zijlstra wrote:
> I'm thinking those assignments should be WRITE_ONCE() at the very least.
Done!
On Thu, Mar 07, 2019 at 05:35:46PM -0800, Vineet Gupta wrote:
> - ARCv2 LLSC based spinlocks use smp_mb() both before and after the LLSC
> instructions, which is not required per LKMM ACQ/REL semantics.
> smp_mb() is only needed _after_ lock and _before_ unlock.
> So remove the extra barriers.
On Thu, Mar 07, 2019 at 05:35:46PM -0800, Vineet Gupta wrote:
> @@ -68,8 +72,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
> smp_mb();
>
> lock->slock = __ARCH_SPIN_LOCK_UNLOCKED__;
> -
> - smp_mb();
> }
>
> /*
> @@ -226,8 +218,6 @@ static inline void arch_w
- ARCv2 LLSC based spinlocks use smp_mb() both before and after the LLSC
instructions, which is not required per LKMM ACQ/REL semantics.
smp_mb() is only needed _after_ lock and _before_ unlock.
So remove the extra barriers.
The reason they were there was mainly historical. At the time of