On 1 January 2011 18:25, Aurelien Jarno <aurel...@aurel32.net> wrote:
> SMMLA and SMMLS are broken in both normal and thumb mode, that is,
> both (different) implementations are wrong. They try to avoid a 64-bit
> add for the rounding, which is not trivial if you want to support both
> SMMLA and SMMLS with the same code.
>
> The code below uses the same implementation for both modes, using the
> code from the ARM manual. It also fixes the thumb decoding, which was a
> mix between normal and thumb mode.
>
> This fixes the issues reported in
> https://bugs.launchpad.net/qemu/+bug/629298
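For readers following along, the behaviour the patch implements (the ARM manual semantics Aurelien refers to) can be sketched in plain C like this -- an illustrative reference model only, with made-up function names, not the QEMU code itself:

```c
#include <stdint.h>

/* Reference model of SMMLA/SMMLS (illustrative; names are invented).
 * The accumulator sits in the top 32 bits of a 64-bit value; the
 * optional rounding is a full 64-bit add of 0x80000000, so its carry
 * can propagate into the most significant word. */
static int32_t smmla_ref(int32_t rn, int32_t rm, int32_t ra, int round)
{
    int64_t acc = ((int64_t)ra << 32) + (int64_t)rn * rm;
    if (round)
        acc += 0x80000000LL;
    /* Take the most significant word. */
    return (int32_t)(uint32_t)((uint64_t)acc >> 32);
}

static int32_t smmls_ref(int32_t rn, int32_t rm, int32_t ra, int round)
{
    int64_t acc = ((int64_t)ra << 32) - (int64_t)rn * rm;
    if (round)
        acc += 0x80000000LL;
    return (int32_t)(uint32_t)((uint64_t)acc >> 32);
}
```

The rounding carry is exactly what makes a 32-bit shortcut hard to get right for both the add (SMMLA) and subtract (SMMLS) paths at once, which is why the patch simply does the 64-bit add.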
I've tested this patch with my random-sequence-generator for
SMMLA/SMMLS/SMMUL for ARM and Thumb, and it does fix the bug. I have a
few minor nitpicks about some comments, though.

> -/* Round the top 32 bits of a 64-bit value. */
> -static void gen_roundqd(TCGv a, TCGv b)
> +/* Add a to the msw of b. Mark inputs as dead */
> +static TCGv_i64 gen_addq_msw(TCGv_i64 a, TCGv b)
> {
> -    tcg_gen_shri_i32(a, a, 31);
> -    tcg_gen_add_i32(a, a, b);
> +    TCGv_i64 tmp64 = tcg_temp_new_i64();
> +
> +    tcg_gen_extu_i32_i64(tmp64, b);
> +    dead_tmp(b);
> +    tcg_gen_shli_i64(tmp64, tmp64, 32);
> +    tcg_gen_add_i64(a, tmp64, a);
> +
> +    tcg_temp_free_i64(tmp64);
> +    return a;
> +}

Isn't this adding b to the msw of a, rather than the other way round
as the comment claims?

> +/* Subtract a from the msw of b. Mark inputs as dead. */

Ditto.

> @@ -6953,23 +6958,25 @@ static void disas_arm_insn(CPUState * env, DisasContext *s)
>              tmp = load_reg(s, rm);
>              tmp2 = load_reg(s, rs);
>              if (insn & (1 << 20)) {
> -                /* Signed multiply most significant [accumulate]. */
> +                /* Signed multiply most significant [accumulate].
> +                   (SMMUL, SMLA, SMMLS) */

SMMLA, not SMLA.

-- PMM
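As a postscript for anyone puzzling over the comment nitpick: the integer operation performed by the quoted gen_addq_msw TCG sequence can be sketched in plain C as follows (illustrative only; the helper name is invented):

```c
#include <stdint.h>

/* What the quoted gen_addq_msw TCG sequence computes, as plain C:
 * b is zero-extended, shifted into the high word, and added to the
 * 64-bit a. In other words, b is added to the msw of a. */
static int64_t addq_msw_ref(int64_t a, uint32_t b)
{
    /* Do the arithmetic in unsigned to keep overflow well-defined. */
    return (int64_t)((uint64_t)a + ((uint64_t)b << 32));
}
```

which is why the comment should say "add b to the msw of a", not the other way round.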