On 14 January 2016 at 18:00, alvise rigo <a.r...@virtualopensystems.com> wrote:
> Forcing an unaligned LDREX access in aarch32, QEMU fails the following assert:
> target-arm/helper.c:5921:regime_el: code should not be reached
>
> Running this snippet both baremetal and on top of Linux will trigger
> the problem:
>
> static inline int cmpxchg(volatile void *ptr, unsigned int old,
>                           unsigned int new)
> {
>     unsigned int oldval, res;
>
>     do {
>         asm volatile("@ __cmpxchg4\n"
>                      "    ldrex   %1, [%2]\n"
>                      "    mov     %0, #0\n"
>                      "    teq     %1, %3\n"
>                      "    strexeq %0, %4, [%2]\n"
>                      : "=&r" (res), "=&r" (oldval)
>                      : "r" (ptr), "Ir" (old), "r" (new)
>                      : "memory", "cc");
>     } while (res);
>
>     return oldval;
> }
>
> void main(void)
> {
>     int arr[2] = {0, 0};
>     int *ptr = (int *)(((void *)&arr) + 1);
>
>     cmpxchg(ptr, 0, 0xbeef);
> }
>
> The following code seems to solve the problem, but I'm not really sure
> this is the proper way to fix it.
> -    if (arm_regime_using_lpae_format(env, cpu_mmu_index(env, false))) {
> +    int mmu_idx = cpu_mmu_index(env, false);
> +    if (!arm_feature(env, ARM_FEATURE_EL2)) {
> +        mmu_idx += ARMMMUIdx_S1NSE0;
> +    }
> +    if (arm_regime_using_lpae_format(env, mmu_idx)) {

This won't work for (a) CPUs with EL2, or (b) translation regimes other
than the basic EL0/EL1 one. I think you want:

    int mmu_idx = cpu_mmu_index(env, false);
    if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
        mmu_idx += ARMMMUIdx_S1NSE0;
    }
    if (arm_regime_using_lpae_format(env, mmu_idx)) {

Perhaps we should also rename the function to
arm_s1_regime_using_lpae_format(), to make clear that it is
specifically checking the stage 1 translation regime.

thanks
-- PMM