On 1/8/24 03:45, Richard Biener wrote:
On Tue, Jan 2, 2024 at 2:37 PM <pan2...@intel.com> wrote:

From: Pan Li <pan2...@intel.com>

According to the semantics of the no-signed-zeros option, a backend
such as RISC-V should treat the minus zero -0.0f as the plus zero 0.0f.

Consider below example with option -fno-signed-zeros.

void
test (float *a)
{
   *a = -0.0;
}

We will generate code as below, which doesn't treat the minus zero
as plus zero.

test:
   lui  a5,%hi(.LC0)
   flw  fa5,%lo(.LC0)(a5)
   fsw  fa5,0(a0)
   ret

.LC0:
   .word -2147483648 // aka -0.0 (0x80000000 in hex)

This patch fixes the bug and treats the minus zero -0.0 as plus
zero, aka +0.0. Thus after this patch we will have the asm code
below for the above sample code.

test:
   sw zero,0(a0)
   ret

This patch also fixes the run failure of the test case pr30957-1.c.
The tests below pass with this patch.

We don't really expect targets to do this.  The small testcase above
is somewhat ill-formed with -fno-signed-zeros.  Note there's no
-0.0 in pr30957-1.c so why does that one fail for you?  Does
the -fvariable-expansion-in-unroller code maybe not trigger for
riscv?
Loop unrolling (and thus variable expansion) doesn't trigger on the VLA-style architectures. aarch64 passes because its backend knows it can translate -0.0 into 0.0.

While we don't require that from ports, I'd just as soon do the optimization similar to aarch64 rather than xfail or skip the test on RISC-V. We can load 0.0 more efficiently than -0.0.
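For reference, a minimal sketch of the idea (not the committed patch): when the mode does not honor signed zeros, a CONST_DOUBLE holding -0.0 can be rewritten as CONST0_RTX so the store can come straight from the zero register. The helper name below is hypothetical, and the real change would live in the RISC-V move expansion path, assuming the usual GCC backend includes (rtl.h, real.h) are in scope.

   /* Illustrative only; not the actual patch.  */
   static rtx
   maybe_fold_negative_zero (machine_mode mode, rtx src)
   {
     if (CONST_DOUBLE_P (src)
         && !HONOR_SIGNED_ZEROS (mode)
         && real_isnegzero (CONST_DOUBLE_REAL_VALUE (src)))
       /* +0.0 in this mode, storable directly from the zero register.  */
       return CONST0_RTX (mode);
     return src;
   }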



I think we should go to PR30957 and see what that was originally
filed for; the testcase doesn't make much sense to me.
It's got more history than I'd like :(


jeff
