https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82349

            Bug ID: 82349
           Summary: float INFINITY issue with division by zero in
                    regression with compiler option '-ffast-math'
           Product: gcc
           Version: unknown
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: marc.pres at gmx dot net
  Target Milestone: ---

Given this code snippet:

#include <stdio.h>
#include <math.h>

int main(void) {

    int denom = 0;

    /* Cast to int: a run-time division by zero, a constant division
       by zero, and the INFINITY macro. */
    printf("%d %d %d\n", (int)(1.0 / denom), (int)(1.0 / 0), (int)INFINITY);

    return 0;
}

bash> gcc infinity.c
bash> ./a.out
-2147483648 -2147483648 2147483647

bash> gcc infinity.c -ffast-math
bash> ./a.out
-2147483648 2147483647 2147483647

What is unclear to me, with respect to the IEEE 754 standard and the
documentation of the option "-ffast-math", is why the first and second terms
differ in sign, and why the two outputs differ at all.

In my opinion, all of the terms above should be positive, i.e. 2147483647.
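
For reference, a minimal sketch (the helper name saturate_to_int is made up for
this example) of a conversion that clamps out-of-range values before the cast,
on the assumption that casting an out-of-range double to int is itself
undefined (C11 6.3.1.4) and that '-ffast-math' additionally lets the compiler
assume no infinities occur, so the checks below are only illustrative:

#include <limits.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical helper: clamp NaN and out-of-range doubles before the
   cast so the conversion itself stays within defined behaviour. */
static int saturate_to_int(double x)
{
    if (isnan(x))
        return 0;                    /* arbitrary choice for NaN */
    if (x >= (double)INT_MAX)
        return INT_MAX;              /* +inf and large values clamp up */
    if (x <= (double)INT_MIN)
        return INT_MIN;              /* -inf and small values clamp down */
    return (int)x;                   /* now in range, conversion is defined */
}

int main(void)
{
    volatile int denom = 0;          /* volatile keeps the division at run time */
    printf("%d %d\n", saturate_to_int(1.0 / denom), saturate_to_int(INFINITY));
    return 0;
}

With the clamp in place the printed values no longer depend on how the compiler
folds or converts the out-of-range result, as long as the division still yields
an infinity at run time.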


Dr. Marcello Presulli
