On Wed, Oct 31, 2012 at 09:29:30PM +0800, Jia Liu wrote:
> Hi Richard, Peter, Jovanovic and Aurelien,
>
> On Wed, Oct 31, 2012 at 1:26 PM, Richard Henderson <r...@twiddle.net> wrote:
> > On 2012-10-31 01:44, Peter Maydell wrote:
> >> On 30 October 2012 15:34, Jia Liu <pro...@gmail.com> wrote:
> >>> On Mon, Oct 29, 2012 at 9:40 PM, Jovanovic, Petar <pet...@mips.com> wrote:
> >>>>> imm = (int16_t)(imm << 6) >> 6;
> >>>>
> >>>> The result of a bitwise shift of a signed type and a negative value is
> >>>> implementation-defined, so you cannot rely on that.
> >>>>
> >>>
> >>> I think it takes a 10-bit signed value and sign-extends it into a 16-bit
> >>> signed value, and I've tested it with negative values; it works well.
> >>
> >> You cannot rely on the behaviour of a specific compiler implementation
> >> as evidence that a piece of code is correct. C has a standard which
> >> defines what is and is not valid.
> >
> > Indeed.  The only portable way is
> >
> >   val = ((val & (sign | (sign - 1))) ^ sign) - sign
> >
> > with all unsigned types, and "sign" set to the sign bit.
> >
>
> Thank you very much for the code.
>
> Is this version OK?
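
For reference, Richard's expression specialised to the 10-bit REPL immediate
works out to something like the sketch below. This is only my own illustration,
the helper name is invented and it is not part of the patch:

    #include <stdint.h>

    /* Portable sign extension of a 10-bit field: mask the field, flip the
     * sign bit, then subtract it back.  Unsigned arithmetic throughout, so
     * no implementation-defined shift of a negative value is involved.
     */
    static inline uint32_t sign_extend_10(uint32_t val)
    {
        const uint32_t sign = 0x200;     /* bit 9, the sign bit of the field */

        val &= sign | (sign - 1);        /* keep the low 10 bits */
        return (val ^ sign) - sign;      /* e.g. 0x3FF -> 0xFFFFFFFF (-1) */
    }

It masks the field, flips the sign bit and subtracts it back, so for instance
0x1FF stays 0x1FF while 0x3FF becomes 0xFFFFFFFF, i.e. -1.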
As said by Peter, I don't think the code should be modified, the previous
code was fine.

> static void gen_mipsdsp_bitinsn(CPUMIPSState *env, DisasContext *ctx,
>                                 uint32_t op1, uint32_t op2,
>                                 int ret, int val)
> {
>     const char *opn = "mipsdsp Bit/ Manipulation";
>     TCGv t0;
>     TCGv val_t;
>
>     if (ret == 0) {
>         /* Treat as NOP. */
>         MIPS_DEBUG("NOP");
>         return;
>     }
>
>     t0 = tcg_temp_new();
>     val_t = tcg_temp_new();
>     gen_load_gpr(val_t, val);
>
> #define SIGN_EXTEND10_16(val, sign) \
>     val = (((val & (sign | (sign - 1))) ^ sign) - sign)
>
>     switch (op1) {
>     case OPC_ABSQ_S_PH_DSP:
>         switch (op2) {
>         case OPC_BITREV:
>             check_dsp(ctx);
>             gen_helper_bitrev(cpu_gpr[ret], val_t);
>             break;
>         case OPC_REPL_QB:
>             check_dsp(ctx);
>             {
>                 uint32_t imm;
>                 target_long result;
>
>                 imm = (ctx->opcode >> 16) & 0xFF;
>                 result = imm << 24 | imm << 16 | imm << 8 | imm;
>                 result = (int32_t)result;
>                 tcg_gen_movi_tl(cpu_gpr[ret], result);
>             }
>             break;
>         case OPC_REPLV_QB:
>             check_dsp(ctx);
>             tcg_gen_ext8u_tl(cpu_gpr[ret], val_t);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 8);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 16);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_ext32s_tl(cpu_gpr[ret], cpu_gpr[ret]);
>             break;
>         case OPC_REPL_PH:
>             check_dsp(ctx);
>             {
>                 uint16_t imm;
>                 imm = (ctx->opcode >> 16) & 0x03FF;
>                 SIGN_EXTEND10_16(imm, 0x200);
>                 tcg_gen_movi_tl(cpu_gpr[ret],
>                                 (target_long)(int32_t)
>                                 ((uint32_t)imm << 16 | imm));
>             }
>             break;
>         case OPC_REPLV_PH:
>             check_dsp(ctx);
>             tcg_gen_ext16u_tl(cpu_gpr[ret], val_t);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 16);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_ext32s_tl(cpu_gpr[ret], cpu_gpr[ret]);
>             break;
>         }
>         break;
> #ifdef TARGET_MIPS64
>     case OPC_ABSQ_S_QH_DSP:
>         switch (op2) {
>         case OPC_REPL_OB:
>             check_dsp(ctx);
>             {
>                 target_ulong imm;
>
>                 imm = (ctx->opcode >> 16) & 0xFF;
>                 imm = (imm << 8) | imm;
>                 imm = (imm << 16) | imm;
>                 imm = (imm << 32) | imm;
>                 tcg_gen_movi_tl(cpu_gpr[ret], imm);
>                 break;
>             }
>         case OPC_REPL_PW:
>             check_dsp(ctx);
>             {
>                 target_long imm;
>
>                 imm = (ctx->opcode >> 16) & 0x03FF;
>                 SIGN_EXTEND10_16(imm, 0x200);
>                 tcg_gen_movi_tl(cpu_gpr[ret],
>                                 (imm << 32) | (imm & 0xFFFFFFFF));
>                 break;
>             }
>         case OPC_REPL_QH:
>             check_dsp(ctx);
>             {
>                 target_ulong imm;
>
>                 imm = (ctx->opcode >> 16) & 0x03FF;
>                 SIGN_EXTEND10_16(imm, 0x200);
>
>                 imm = imm & 0xFFFF;
>                 imm = (imm << 48) | (imm << 32) | (imm << 16) | imm;
>                 tcg_gen_movi_tl(cpu_gpr[ret], imm);
>                 break;
>             }
>         case OPC_REPLV_OB:
>             check_dsp(ctx);
>             tcg_gen_ext8u_tl(cpu_gpr[ret], val_t);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 8);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 16);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 32);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             break;
>         case OPC_REPLV_PW:
>             check_dsp(ctx);
>             tcg_gen_ext32u_i64(cpu_gpr[ret], val_t);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 32);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             break;
>         case OPC_REPLV_QH:
>             check_dsp(ctx);
>             tcg_gen_ext16u_tl(cpu_gpr[ret], val_t);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 16);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             tcg_gen_shli_tl(t0, cpu_gpr[ret], 32);
>             tcg_gen_or_tl(cpu_gpr[ret], cpu_gpr[ret], t0);
>             break;
>         }
>         break;
> #endif
>     }
>
> #undef SIGN_EXTEND10_16
>     tcg_temp_free(t0);
>     tcg_temp_free(val_t);
>
>     (void)opn; /* avoid a compiler warning */
>     MIPS_DEBUG("%s", opn);
> }
>
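
For anyone following along, the REPL cases above only replicate a small
immediate into every lane of the destination register; the lower 32 bits of
what they compute look roughly like the stand-alone sketch below (mine, not
part of the patch, and the function names are invented):

    #include <stdint.h>

    /* REPL.QB: replicate an 8-bit immediate into all four byte lanes. */
    static uint32_t repl_qb(uint32_t imm8)
    {
        imm8 &= 0xFF;
        return imm8 << 24 | imm8 << 16 | imm8 << 8 | imm8;  /* 0xAB -> 0xABABABAB */
    }

    /* REPL.PH: sign-extend the 10-bit immediate and replicate it into both
     * 16-bit halfword lanes, as the SIGN_EXTEND10_16 path above does.
     */
    static uint32_t repl_ph(uint32_t imm10)
    {
        uint32_t se = ((imm10 & 0x3FF) ^ 0x200) - 0x200;    /* sign extend */

        se &= 0xFFFF;                                       /* keep 16 bits */
        return se << 16 | se;                               /* 0x3FF -> 0xFFFFFFFF */
    }

The REPLV variants emit tcg_gen_shli/tcg_gen_or sequences to build the same
patterns from a register value at run time instead of from a constant.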
> >> Having said that, right shift of negative signed integers is one of
> >> those bits of implementation defined behaviour which we allow ourselves
> >> to rely on in QEMU because all the platforms we care about behave
> >> that way. (That integers are 2s complement representation is another.)
> >
> > Also very true.  I don't like seeing the code in question though.
> > We've several implementations of sign-extend-to-N-bits functions
> > throughout qemu; we ought to unify them.
> >
> >
> > r~
>
> Regards,
> Jia.
>

-- 
Aurelien Jarno                          GPG: 1024D/F1BCDB73
aurel...@aurel32.net                 http://www.aurel32.net