On Thu, Nov 19, 2015 at 06:01:52PM +0000, Will Deacon wrote:
> For completeness, here's what I've currently got. I've failed to measure
> any performance impact on my 8-core systems, but that's not surprising.

> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> +	unsigned int tmp;
> +	arch_spinlock_t lockval;
> +
> +	asm volatile(
> +"	sevl\n"
> +"1:	wfe\n"

Using WFE here would lower the cacheline bouncing pressure a bit, I
imagine. Sure, we still pull it over into S(hared) after every store,
but we don't keep banging on it, making the initial e(X)clusive grab
hard.
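To make that concrete, here's a minimal standalone sketch (hypothetical
code, not anything from the kernel tree) contrasting the two waiting
styles on an arbitrary 32-bit location:

	/*
	 * Naive wait: every iteration re-reads the location, so each
	 * waiter keeps requesting the cacheline and the eventual store
	 * has to win it back in e(X)clusive state first.
	 */
	static inline unsigned int naive_wait(volatile unsigned int *addr,
					      unsigned int old)
	{
		unsigned int val;

		do {
			val = *addr;
		} while (val == old);

		return val;
	}

	/*
	 * WFE wait: LDAXR sets the exclusive monitor on the line, a
	 * store from another CPU clears the monitor, and clearing the
	 * monitor generates the event that wakes WFE. Between stores
	 * the waiter sits quietly with the line in S(hared).
	 */
	static inline unsigned int wfe_wait(unsigned int *addr,
					    unsigned int old)
	{
		unsigned int val;

		asm volatile(
	"	sevl\n"			/* arm the event register so the    */
	"1:	wfe\n"			/* first wfe falls straight through */
	"	ldaxr	%w0, %1\n"	/* load-acquire + set the monitor   */
	"	cmp	%w0, %w2\n"
	"	b.eq	1b\n"		/* value unchanged: back to sleep   */
		: "=&r" (val)
		: "Q" (*addr), "r" (old)
		: "memory", "cc");

		return val;
	}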
> +"2:	ldaxr	%w0, %2\n"
> +"	eor	%w1, %w0, %w0, ror #16\n"
> +"	cbnz	%w1, 1b\n"
> +	ARM64_LSE_ATOMIC_INSN(
> +	/* LL/SC */
> +"	stxr	%w1, %w0, %2\n"
> +	/* Serialise against any concurrent lockers */
> +"	cbnz	%w1, 2b\n",
> +	/* LSE atomics */
> +"	nop\n"
> +"	nop\n")

I find these ARM64_LSE macro thingies aren't always easy to read; it's
fairly easy to overlook the ',' separating the v8 and v8.1 parts, esp.
if you have further interleaved comments like in the above.

> +	: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
> +	:
> +	: "memory");
> +}
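On that readability point: from memory, asm/lse.h boils the macro down
to something like the below when CONFIG_ARM64_LSE_ATOMICS is enabled
(with a plain llsc fallback otherwise), so both sequences are emitted
and the alternatives framework patches the LSE one in at boot. The two
explicit nops are there because the framework wants both halves the
same length:

	#define ARM64_LSE_ATOMIC_INSN(llsc, lse)			\
		ALTERNATIVE(llsc, lse, ARM64_HAS_LSE_ATOMICS)

One (entirely hypothetical) way to make the ',' harder to miss at a
call site like the above is to name each half before use; same codegen,
just no comma buried between interleaved comments:

	/* LL/SC: serialise against any concurrent lockers */
	#define UNLOCK_WAIT_LLSC					\
	"	stxr	%w1, %w0, %2\n"					\
	"	cbnz	%w1, 2b\n"

	/* LSE atomics: nothing to do, just pad to the same length */
	#define UNLOCK_WAIT_LSE						\
	"	nop\n"							\
	"	nop\n"

		...
		ARM64_LSE_ATOMIC_INSN(UNLOCK_WAIT_LLSC, UNLOCK_WAIT_LSE)
		...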