On Sun, 2015-08-02 at 00:29 +0200, Peter Zijlstra wrote:
> That's just gibberish, even in the same cacheline stuff can get
> reordered.

true dat

> > So either we insert
> > + * memory barrier here and in the corresponding pv_wait_head()
> > + * function or we do an unconditional kick which is what is done here.
>
> why, why why ? You've added words, but you've not actually described
> what the problem is you're trying to fix.
>
> AFAICT the only thing we really care about here is that the load in
> question happens _after_ we observe SLOW, and that is still true.
>
> The order against the unlock is irrelevant.
>
> So we set ->state before we hash and before we set SLOW. Given that
> we've seen SLOW, we must therefore also see ->state.
>
> If ->state == halted, this means the CPU in question is blocked and the
> pv_node will not get re-used -- if it does get re-used, it wasn't
> blocked and we don't care either.

Right, if it does get re-used, we were burning SPIN_THRESHOLD and racing
only wastes a few spins, afaict. In fact this is explicitly stated:

	/*
	 * The unlocker should have freed the lock before kicking the
	 * CPU. So if the lock is still not free, it is a spurious
	 * wakeup and so the vCPU should wait again after spinning for
	 * a while.
	 */

The thing I like about this patch is that it simplifies the
pv_kick/pv_wait flow, not having to depend on minutia like ->state
checking. But the condition about spurious wakeups is already there, so
really nothing changes.

Thanks,
Davidlohr
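
[Editor's note: for readers outside the original thread, below is a rough
standalone C11/pthreads sketch of the ordering being argued about. It is
hypothetical and deliberately simplified: it is NOT the kernel's
kernel/locking/qspinlock_paravirt.h code, pv_wait()/pv_kick() are no-op
stubs, the lock-hashing step is elided, and every *_sketch/_stub name,
SLOW_VAL, SPIN_THRESHOLD value and the toy main() are invented here, with
C11 seq_cst atomics standing in for the kernel's barriers. The point it
illustrates is the one made above: the waiter publishes ->state before
publishing SLOW, so an unlocker that observes SLOW is guaranteed to also
observe ->state, which makes the ->state check optional and an
unconditional kick at worst a wasted hypercall, while a spurious kick only
costs the waiter another round of spinning.]

/*
 * Hypothetical userspace sketch of the waiter/unlocker interplay
 * described above -- NOT the real kernel/locking/qspinlock_paravirt.h
 * code.  pv_wait()/pv_kick() are stubbed, the hash step is elided,
 * and every *_sketch/_stub name is invented; C11 seq_cst atomics
 * stand in for the kernel's barriers.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define LOCKED         1
#define SLOW_VAL       3		/* stand-in for _Q_SLOW_VAL */
#define SPIN_THRESHOLD (1 << 15)

enum vcpu_state { vcpu_running, vcpu_halted };

struct pv_node_sketch {
	_Atomic int state;
};

static _Atomic int lock_word = LOCKED;

static void pv_wait_stub(void) { /* would halt this vCPU */ }
static void pv_kick_stub(void) { /* would wake the waiter's vCPU */ }

/* Waiter: roughly the pv_wait_head() flow discussed in the mail. */
static void *waiter(void *arg)
{
	struct pv_node_sketch *node = arg;
	bool slow_set = false;

	for (;;) {
		int loop;

		/* Spin for a while before deciding to block. */
		for (loop = SPIN_THRESHOLD; loop; loop--)
			if (atomic_load(&lock_word) == 0)
				return NULL;	/* lock is free, take it */

		if (!slow_set) {
			/* Publish ->state = halted *before* publishing SLOW. */
			atomic_store(&node->state, vcpu_halted);
			/* ...the real code also hashes the lock here... */

			int expect = LOCKED;
			if (!atomic_compare_exchange_strong(&lock_word,
							    &expect, SLOW_VAL)) {
				/* Lock got freed under us: no need to block. */
				atomic_store(&node->state, vcpu_running);
				return NULL;
			}
			slow_set = true;
		}

		/*
		 * The unlocker frees the lock before kicking, so if the
		 * lock is still not free this is a spurious wakeup and
		 * we simply spin again; a race only wastes a few spins.
		 */
		pv_wait_stub();
		atomic_store(&node->state, vcpu_running);
	}
}

/* Unlocker: free the lock first, then kick unconditionally if it saw SLOW. */
static void unlocker(void)
{
	int old = atomic_exchange(&lock_word, 0);

	if (old == SLOW_VAL) {
		/*
		 * Seeing SLOW means the waiter's earlier ->state store is
		 * visible as well (it was published first), so inspecting
		 * it would only be an optimisation; an unconditional kick
		 * is always correct, at worst a wasted hypercall.
		 */
		pv_kick_stub();
	}
}

int main(void)
{
	struct pv_node_sketch node = { .state = vcpu_running };
	pthread_t t;

	pthread_create(&t, NULL, waiter, &node);
	usleep(1000);		/* let the waiter spin and possibly set SLOW */
	unlocker();
	pthread_join(t, NULL);
	puts("waiter saw the lock become free");
	return 0;
}

[Editor's note: because the unlocker releases the lock with the exchange
before kicking, the worst a race can do in this sketch is make the waiter
burn another SPIN_THRESHOLD worth of spins, mirroring the "racing only
wastes a few spins" observation above.]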
true dat > > So either we insert > > + * memory barrier here and in the corresponding pv_wait_head() > > + * function or we do an unconditional kick which is what is done here. > > why, why why ? You've added words, but you've not actually described > what the problem is you're trying to fix. > > AFAICT the only thing we really care about here is that the load in > question happens _after_ we observe SLOW, and that is still true. > > The order against the unlock is irrelevant. > > So we set ->state before we hash and before we set SLOW. Given that > we've seen SLOW, we must therefore also see ->state. > > If ->state == halted, this means the CPU in question is blocked and the > pv_node will not get re-used -- if it does get re-used, it wasn't > blocked and we don't care either. Right, if it does get re-used, we were burning SPIN_THRESHOLD and racing only wastes a few spins, afaict. In fact this is explicitly stated: /* * The unlocker should have freed the lock before kicking the * CPU. So if the lock is still not free, it is a spurious * wakeup and so the vCPU should wait again after spinning for * a while. */ The thing I like about this patch is that it simplifies the pv_kick/pv_wait flow, not having to depend on minutia like ->state checking. But the condition about spurious wakeups is already there, so really nothing changes. Thanks, Davidlohr -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/