https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109008

--- Comment #43 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Created attachment 54622
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=54622&action=edit
gcc13-pr109008-2.patch

The above-mentioned incremental patch.  It actually does two things.  One is not
widening to -inf or +inf, but to the nextafter value in a hypothetical wider
floating point type with equal mantissa precision but a wider exponent range.
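For reference, here is a plain-double sketch (not the patch's implementation) of
why the widening has to be phrased in terms of a hypothetical wider type: away
from the exponent limits the widened bound is just the ordinary nextafter value,
but at __DBL_MAX__ ordinary nextafter already yields +inf, while the value
described above, one ulp past __DBL_MAX__ (i.e. 0x1p+1024), does not fit in
double at all:

/* Sketch, not GCC code: ordinary double nextafter vs. the intended widened
   bound.  At DBL_MAX, nextafter overflows to +inf; the widened bound,
   DBL_MAX plus one ulp (0x1p+1024), only exists in a type with the same
   53-bit precision but a wider exponent range.  */
#include <float.h>
#include <math.h>
#include <stdio.h>

int
main (void)
{
  printf ("%a\n", nextafter (1.0, INFINITY));     /* 0x1.0000000000001p+0 */
  printf ("%a\n", nextafter (DBL_MAX, INFINITY)); /* inf */
  return 0;
}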
The other is, as the new test in that patch shows, that regardless of whether
we do the above optimization or not, with -ffinite-math-only it would still
be miscompiled, because we limit the range to the maximum representable value.
With vanilla trunk the resulting range in this case is
[frange] double
[-1.9958403095347198116563727130368385660674512604354575415e+292 (-0x0.8p+972),
0.0 (0x0.0p+0)]
That is incorrect: __DBL_MAX__ + 0x0.fffffffffffff8p+970 when rounding to
nearest is still finite (__DBL_MAX__), and so valid for -ffinite-math-only.
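That claim can be checked with a small standalone test on an IEEE double target
(just a sketch, separate from the testcase in the patch; compile without
-ffast-math/-ffinite-math-only so the addition happens as written):

/* 0x0.fffffffffffff8p+970 is just below half an ulp of __DBL_MAX__ (the ulp
   of __DBL_MAX__ is 0x1p+971), so under round-to-nearest the sum rounds back
   down to __DBL_MAX__ and stays finite.  */
#include <float.h>
#include <math.h>
#include <stdio.h>

int
main (void)
{
  volatile double x = DBL_MAX;
  volatile double y = 0x0.fffffffffffff8p+970;
  volatile double sum = x + y;

  printf ("finite: %d  equals DBL_MAX: %d\n", isfinite (sum), sum == DBL_MAX);
  /* Expected: finite: 1  equals DBL_MAX: 1 */
  return 0;
}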

There is another issue.  For !MODE_HAS_INFINITIES (TYPE_MODE (type)) types
I'm afraid this is still broken, and significantly more so.  Because the minimum
or maximum representable values in those cases (probably -- I haven't played with
such machines) act as saturation boundaries (the way infinities normally do),
WHATEVER_MAX + anything_positive is still WHATEVER_MAX, etc.
So I wonder whether we shouldn't e.g. just punt in float_binary_op_range_finish
for such modes if the lhs range has the maximum representable value as at least
one of its boundaries.  Though maybe it isn't limited to the reverse ops, who
knows...
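To make the concern concrete, here is a toy model (a hypothetical sat_add
standing in for addition in a !MODE_HAS_INFINITIES mode, emulated on top of
IEEE double; not how any actual target behaves): because the maximum acts as a
saturation boundary, different op1 values give the same lhs, so reversing the
addition as op1 = lhs - op2 can wrongly narrow op1.

/* Toy model of saturating (no-infinities) float addition; sat_add is purely
   hypothetical and only illustrates the reasoning above.  */
#include <float.h>
#include <stdio.h>

static double
sat_add (double x, double y)
{
  double r = x + y;
  /* A mode without infinities clamps to the maximum representable value
     instead of producing an infinity.  */
  if (r > DBL_MAX)
    r = DBL_MAX;
  else if (r < -DBL_MAX)
    r = -DBL_MAX;
  return r;
}

int
main (void)
{
  double op2 = 0x1.8p+1023;

  /* Two different op1 values, both saturating to DBL_MAX: */
  printf ("%d\n", sat_add (0x1p+1023, op2) == DBL_MAX);   /* 1 */
  printf ("%d\n", sat_add (0x1.4p+1023, op2) == DBL_MAX); /* 1 */

  /* So when the lhs boundary is DBL_MAX, computing op1 = lhs - op2 in the
     reverse op would wrongly exclude op1 values such as 0x1p+1023: */
  printf ("%a\n", DBL_MAX - op2); /* about 0x1p+1022, well below 0x1p+1023 */
  return 0;
}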
