https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90248

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |glisse at gcc dot gnu.org

--- Comment #11 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Andrew Pinski from comment #10)
> So I am trying to understand the semantics here.
> HONOR_SIGNED_ZEROS says -0.0 won't exist, or that the sign of -0.0 and 0.0
> doesn't matter? And what are the semantics if -0.0 shows up?

The sign of -0.0 and 0.0 doesn't matter, IMHO; we certainly can't guarantee
that -0.0 won't show up, since that is what the hw computes in various cases.
My understanding is that -fno-signed-zeros is the user saying that if the
result is +/-0, they are not going to use e.g. copysign or signbit on it in a
way that would turn that insignificant sign difference into something that
changes the behavior of the program.
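
As a concrete illustration of that last point, here is a minimal sketch (my
example, not part of the report) of how the "insignificant" sign of a zero
becomes observable: the two zeros compare equal under IEEE 754, but signbit
and copysign distinguish them.  Compile with plain gcc (link with -lm if
needed), no -ffast-math:

#include <math.h>
#include <stdio.h>

int
main (void)
{
  double pz = 0.0;
  double nz = -0.0;

  printf ("pz == nz: %d\n", pz == nz);                      /* 1: equal */
  printf ("signbit (pz): %d\n", signbit (pz) != 0);         /* 0 */
  printf ("signbit (nz): %d\n", signbit (nz) != 0);         /* 1 */
  printf ("copysign (1.0, nz): %f\n", copysign (1.0, nz));  /* -1.000000 */
  return 0;
}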
The docs say:
     Allow optimizations for floating-point arithmetic that ignore the
     signedness of zero.  IEEE arithmetic specifies the behavior of
     distinct +0.0 and -0.0 values, which then prohibits simplification
     of expressions such as x+0.0 or 0.0*x (even with
     '-ffinite-math-only').  This option implies that the sign of a zero
     result isn't significant.
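
To make the docs' example concrete, here is a minimal sketch of why x+0.0
cannot be folded to just x under plain IEEE semantics (the helper add_zero is
hypothetical, not from the report): with round-to-nearest, (-0.0) + (+0.0)
yields +0.0, so the fold would flip the sign of a zero result; 0.0*x is
analogous, since the sign of its zero result follows the sign of x.

#include <math.h>
#include <stdio.h>

double
add_zero (double x)
{
  return x + 0.0;  /* foldable to just x only with -fno-signed-zeros */
}

int
main (void)
{
  double nz = -0.0;
  printf ("signbit (nz): %d\n", signbit (nz) != 0);  /* 1 */
  /* Without the fold this prints 0 (the sum is +0.0); if
     -fno-signed-zeros lets GCC fold add_zero to the identity, it may
     print 1 instead, which is exactly the sign the user promised not
     to observe.  */
  printf ("signbit (add_zero (nz)): %d\n", signbit (add_zero (nz)) != 0);
  return 0;
}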
