https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85957

--- Comment #22 from Alexander Cherepanov <ch3root at openwall dot com> ---
(In reply to jos...@codesourcery.com from comment #11)
> Yes, I agree that any particular conversion to integer executed in the 
> abstract machine must produce some definite integer value for each 
> execution.

The idea that floating-point variables could be unstable while integer
variables have to be stable seems like an arbitrary boundary. But I guess this
is deeply ingrained in gcc: the optimizer just assumes that integers are
stable (e.g., it optimizes `if (x != y && y == z) use(x == z);` for integers
to `if (x != y && y == z) use(0);`) but it is prepared for instability of FPs
(e.g., it doesn't do the same optimization for FPs).
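
For concreteness, a self-contained version of that snippet (the wrapper
functions and the use() declaration are mine), meant for inspecting the
generated code:

  /* Given x != y and y == z, transitivity gives x != z, so
     use(x == z) can become use(0).  GCC applies this reasoning to
     integers but not to floating-point operands. */
  void use(int);

  void f_int(int x, int y, int z) {
      if (x != y && y == z)
          use(x == z);   /* folded to use(0) */
  }

  void f_fp(double x, double y, double z) {
      if (x != y && y == z)
          use(x == z);   /* comparison left in place */
  }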

When the stability of integers is violated, everything blows up. This bug
report shows that the instability of floating-point values extends to integers
via casts. Another way is via comparisons -- I've just filed bug 93681 for
that. There is also a testcase there that shows how such instability can taint
surrounding code.
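
A rough sketch of both paths, assuming i386/x87-style excess precision (the
functions and values are mine, not the testcases from the reports):

  /* Cast path: the same FP expression can be converted to int once
     from an extended-precision register and once after being spilled
     and rounded to 64-bit double, so i and j may disagree at run
     time, while the optimizer, treating integers as stable, may fold
     i == j to 1. */
  int via_cast(double x, double y) {
      int i = (int)(x * y);
      int j = (int)(x * y);
      return i == j;
  }

  /* Comparison path (the subject of bug 93681): the result of an FP
     comparison is an int, so if the compared value is unstable, that
     int -- and everything computed from it -- inherits the
     instability. */
  int via_comparison(double x) {
      int a = (x == 1.0);
      int b = (x == 1.0);
      return a == b;
  }

Whether these actually misbehave depends on the target, flags and register
pressure; they only show the shape of the problem.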

So, yeah, it seems integers have to be stable. OTOH, now that there is SSE and
there is -fexcess-precision=standard, floating-point values are mostly stable
too. Perhaps various optimizations done for integers could be enabled for FPs
too? Or is the situation more complicated?
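
For reference, a sketch of what "mostly stable" means here (my example; the
flag behavior is as documented in the GCC manual):

  /* With SSE math (-mfpmath=sse), or on x87 with
     -fexcess-precision=standard (implied by -std=c99 and later), the
     assignment below rounds to double, so d denotes one fixed double
     value and every later use of d sees that same value. */
  int classify(double x, double y) {
      double d = x * y;      /* one well-defined double value */
      if (d == 1.0)
          return (int)d;     /* can only return 1 from here */
      return 0;
  }

Under -fexcess-precision=fast on x87, the comparison and the cast may see
differently rounded evaluations of d, so the branch can be taken and yet
return 0.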
