https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79341

--- Comment #41 from Dominik Vogt <vogt at linux dot vnet.ibm.com> ---
> The first loop loops until add is -1.000000E+12, at which point for the
> first time tem is -9.223373E+18 and thus different from -9.223372E+18, and
> -9.223373E+18 should not be representable in signed long.
> Do you perhaps use HW dfp rather than software emulation?

Well, just what the test driver used:

 ... -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects 
-fsanitize=float-cast-overflow -fsanitize-recover=float-cast-overflow
-DUSE_INT128 -DUSE_DFP -DBROKEN_DECIMAL_INT128  -lm   -m64 ...

When the comparison is done in main, the values "min" and "tem" have 64-bit
precision.  The actual comparison is

  if (tem.0_1 != -9223372036854775808)

This is true because that value doesn't fit exactly in a _Decimal32.  The if
body is executed, and "tem" is converted to 32-bit format and stored in %f0.
GDB says that the converted value is exactly the same as the value of "min",
and that seems to be the cause of the test failure.
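
For reference, here is a minimal sketch of the precision facts this relies on
(not the testcase itself; it just needs a GCC with DFP support, e.g. on
s390x): with a 64-bit long, LONG_MIN has 19 significant digits, so it is
rounded when stored in a _Decimal32 (7 digits) or a _Decimal64 (16 digits),
while widening a _Decimal32 to _Decimal64 is exact.

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    _Decimal32 tem = (_Decimal32) LONG_MIN;   /* rounded to 7 digits  */
    _Decimal64 min = (_Decimal64) LONG_MIN;   /* rounded to 16 digits */

    /* Print through double, just to show the leading digits.  */
    printf ("tem: %.7g\n", (double) tem);
    printf ("min: %.16g\n", (double) min);
    printf ("widened tem == min: %d\n", (_Decimal64) tem == min);
    return 0;
  }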

In assembly:
        ste     %f2,160(%r15) <---- store "tem" on stack
        le      %f2,160(%r15) <---- load "tem" from stack
        ldetr   %f2,%f2,0     <---- convert "short" DFP value to "long"
        cdtr    %f2,%f4       <---- compare with "min"
        je      .L33
        le      %f0,160(%r15) <---- reload "tem"
        brasl   %r14,cvt_sl_d32
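
Read back into C, that sequence corresponds roughly to the following (the
names "min"/"tem" and the helper cvt_sl_d32 are taken from the dump; the
wrapper name and the helper's signature are my guess, so this is just my
reading of the assembly, not the original source):

  extern signed long cvt_sl_d32 (_Decimal32);  /* conversion helper        */

  signed long
  check (_Decimal32 tem, _Decimal64 min)
  {
    if ((_Decimal64) tem != min)   /* ldetr widens "tem", cdtr compares    */
      return cvt_sl_d32 (tem);     /* reload into %f0, brasl cvt_sl_d32    */
    return 0;                      /* je .L33: the conversion is skipped   */
  }

So if the widened "tem" compares equal to "min", the je is taken and
cvt_sl_d32 is never called.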

This must look different for you.  Now, why does the test fail for me but not
for you?
