On 08/15/2017 01:32 PM, Richard Henderson wrote:
On 08/15/2017 08:56 AM, Philippe Mathieu-Daudé wrote:
@@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
               tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
           }
       } else {
-        memop |= size;
+        memop |= size | MO_ALIGN;

MO_ALIGN_8 here too?

           tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
           tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);

Peter already pointed out that MO_ALIGN_N should be reserved for those cases
where an explicit size is really needed.  Note that using MO_ALIGN_8 would be
actively wrong here -- it would incorrectly require 8-byte alignment for the
single-byte access of LDXRB.
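As a rough sketch (illustrative only, the helper name is made up and this is
not QEMU's actual code): MO_ALIGN derives the required alignment from the
access size itself, so the byte load of LDXRB needs no alignment, whereas
MO_ALIGN_8 would pin the requirement to 8 bytes no matter how small the
access is.

  #include <stdbool.h>

  /* Hypothetical helper, for illustration only.
   * With MO_ALIGN, an access of (1 << size_bits) bytes must be aligned to
   * its own size, so size_bits == 0 (LDXRB) imposes no alignment at all.
   * MO_ALIGN_8 would instead demand 8-byte alignment regardless of the
   * access size, which is why it is the wrong flag here. */
  static unsigned required_alignment_bytes(unsigned size_bits, bool force_align_8)
  {
      if (force_align_8) {
          return 8;               /* fixed 8-byte requirement, as MO_ALIGN_8 would impose */
      }
      return 1u << size_bits;     /* scales with the access size, as MO_ALIGN does */
  }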

Indeed I didn't think of that, thanks!



r~

