Richard Henderson <r...@twiddle.net> writes:

> On 10/07/2016 01:57 PM, Nikunj A Dadhania wrote:
>> +#define VSX_SCALAR_CMP_DP(op, cmp, exp, svxvc)                        \
>> +void helper_##op(CPUPPCState *env, uint32_t opcode)                   \
>> +{                                                                     \
>> +    ppc_vsr_t xt, xa, xb;                                             \
>> +                                                                      \
>> +    getVSR(xA(opcode), &xa, env);                                     \
>> +    getVSR(xB(opcode), &xb, env);                                     \
>> +    getVSR(xT(opcode), &xt, env);                                     \
>> +                                                                      \
>> +    if (unlikely(float64_is_any_nan(xa.VsrD(0)) ||                    \
>> +                 float64_is_any_nan(xb.VsrD(0)))) {                   \
>> +        if (float64_is_signaling_nan(xa.VsrD(0), &env->fp_status) ||  \
>> +            float64_is_signaling_nan(xb.VsrD(0), &env->fp_status)) {  \
>> +            float_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0);    \
>> +        }                                                             \
>> +        if (svxvc) {                                                  \
>> +            float_invalid_op_excp(env, POWERPC_EXCP_FP_VXVC, 0);      \
>> +        }                                                             \
>> +    } else {                                                          \
>> +        if (float64_##cmp(xb.VsrD(0), xa.VsrD(0), &env->fp_status) == exp) { \
>> +            if (msr_le) {                                             \
>> +                xt.VsrD(0) = 0;                                       \
>> +                xt.VsrD(1) = -1;                                      \
>> +            } else {                                                  \
>> +                xt.VsrD(0) = -1;                                      \
>> +                xt.VsrD(1) = 0;                                       \
>> +            }                                                         \
>> +        } else {                                                      \
>> +            xt.VsrD(0) = 0;                                           \
>> +            xt.VsrD(1) = 0;                                           \
>> +        }                                                             \
>> +    }                                                                 \
>> +                                                                      \
>> +    putVSR(xT(opcode), &xt, env);                                     \
>> +    helper_float_check_status(env);                                   \
>> +}
>
> I think you should be checking for NaN after the compare, and seeing that
> env->fp_status.float_exception_flags is non-zero.  C.f. FPU_FCTI.  Or in
> general, the coding structure used by target-tricore:
>
>     result = float*op(args...)
>     flags = get_float_exception_flags(&env->fp_status);
>     if (unlikely(flags)) {
>         set_float_exception_flags(&env->fp_status, 0);
>         // special cases for nans, sometimes modifying result
>         float_check_status(env, flags, GETPC())
>     }
>     return result  // or putVSR as appropriate
>
> Of course, the same can be said for other places in fpu_helper.c, where this
> detail has been previously missed.
Yes, I had noticed that, but didn't want to change the behaviour as I am not
an expert here. I will update and send a new revision.

> And, unrelated, but a reminder for future cleanup: the fmadd, fmsub, fnmadd,
> and fnmsub helpers should be rewritten to use float64_muladd.  To be fair, I
> think these were written before we had proper fused multiply-add support
> within softfloat.

Sure. Will add that to my todo list.

Regards
Nikunj