This fixes some bugs reported against 128-bit atomic operations. Just a note that the ppc insns that use this, LQ and STQ, do not require atomic operations if the address is unaligned, or if the address does not resolve to ram. So in some cases we are working harder than required.
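To spell out that condition, a minimal sketch (the helper name and the ram flag are mine for illustration, not existing cputlb interfaces):

    #include <stdbool.h>
    #include <stdint.h>

    /* LQ/STQ only need the atomic path when the access is naturally
     * aligned and backed by ram; otherwise a piecewise, non-atomic
     * path is architecturally sufficient. */
    static bool lq_stq_requires_atomicity(uint64_t addr, bool is_ram)
    {
        bool aligned = (addr & 15) == 0;   /* 16-byte natural alignment */
        return aligned && is_ram;
    }

Everything else could fall through to a non-atomic path without the extra work.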
I've also had a good read of Power's atomicity requirements, for all instructions. It requires that the least significant set bit of the address control the minimum atomicity: e.g. for (addr % size) == 2, each 2-byte component must be atomic. Which is certainly not what we're doing at the bottom of our memory model at present.

I've also been reading up on Arm's FEAT_LSE2, which is mandatory for v8.4. This vastly strengthens the single-copy atomicity requirements for the whole system. Strikingly, any access that does not cross a 16-byte boundary -- aligned or unaligned -- is now single-copy atomic.

In both cases, I would imagine that we should only allow the softmmu fast path for aligned accesses; those are single-copy atomic on all hosts. But then we need different handling for each platform at the bottom of cputlb... Suggestions on ways to approach this that aren't overwhelmingly ugly? (A sketch of both rules follows the diffstat below.)

r~

Richard Henderson (1):
  accel/tcg: Probe the proper permissions for atomic ops

 accel/tcg/atomic_template.h | 24 +++++-----
 accel/tcg/cputlb.c          | 95 ++++++++++++++++++++++++++-----------
 accel/tcg/user-exec.c       |  8 ++--
 3 files changed, 83 insertions(+), 44 deletions(-)

-- 
2.25.1
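P.S. For concreteness, here is my reading of the two architectural rules above as code; both helpers are illustrative names, not proposed interfaces:

    #include <stdbool.h>
    #include <stdint.h>

    /* Power: the least significant set bit of the misaligned offset
     * bounds the minimum single-copy atomic chunk size; an aligned
     * access must be atomic as a whole.  'size' is a power of two.
     * E.g. (addr % size) == 2 => each 2-byte component is atomic. */
    static unsigned ppc_min_atomicity(uint64_t addr, unsigned size)
    {
        unsigned mis = addr & (size - 1);
        return mis ? (mis & -mis) : size;  /* lowest set bit of offset */
    }

    /* Arm FEAT_LSE2: any access that does not cross a 16-byte
     * boundary, aligned or unaligned, is single-copy atomic. */
    static bool lse2_single_copy_atomic(uint64_t addr, unsigned size)
    {
        return (addr & 15) + size <= 16;
    }

Whatever sits at the bottom of cputlb would need to satisfy the stronger of these per target, while the host only guarantees single-copy atomicity for aligned accesses.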