https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113679

--- Comment #5 from Дилян Палаузов <dilyan.palauzov at aegee dot org> ---
gcc -m64 -fexcess-precision=fast -o diff diff.c && ./diff
0.000000
gcc -m32 -fexcess-precision=fast -o diff diff.c && ./diff
-2.000000
clang -m32 -fexcess-precision=fast -o diff diff.c && ./diff
0.000000
clang -m64 -fexcess-precision=fast -o diff diff.c && ./diff
0.000000
gcc -m64 -fexcess-precision=standard -o diff diff.c && ./diff
0.000000
gcc -m32 -fexcess-precision=standard -o diff diff.c && ./diff
0.000000
clang -m32 -fexcess-precision=standard -o diff diff.c && ./diff
0.000000
clang -m64 -fexcess-precision=standard -o diff diff.c && ./diff
0.000000
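
diff.c itself is not quoted here; a minimal sketch in the same spirit
(hypothetical values, assuming -m32 doubles go through the x87 unit)
that shows the same fast-mode split:

  #include <stdio.h>

  int main(void)
  {
      double a = 0x1p63;   /* 2^63, exact as a double */
      double b = 2.0;

      /* a - b = 2^63 - 2 is exact in the 64-bit significand of an x87
         register but rounds back up to 2^63 as a 53-bit double.  With
         -m32 -fexcess-precision=fast the intermediate can stay in the
         register, so this prints -2.000000; with SSE (-m64) or with
         -fexcess-precision=standard it prints 0.000000. */
      printf("%f\n", a - b - a);
      return 0;
  }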

If this excess precision is justified, why do the results differ between
32-bit and 64-bit code?  With

  printf("%f\n", (double)l - d);
  printf("%f\n", (double)(l - d));

there is indeed a difference:
$ gcc -m32 -fexcess-precision=standard -o diff diff.c && ./diff
0.000000
-2.000000
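
For reference, a self-contained program (a hypothetical reconstruction,
since diff.c is not quoted) that produces exactly these two lines:

  #include <stdio.h>

  int main(void)
  {
      /* Hypothetical values: 2^63 - 2 fits in the 64-bit significand of
         an x87 long double but rounds up to 2^63 as a 53-bit double. */
      long double l = 0x1p63L - 2.0L;
      double d = 0x1p63;

      printf("%f\n", (double)l - d);   /* l rounded to double first: 0.000000 */
      printf("%f\n", (double)(l - d)); /* subtracted in long double: -2.000000 */
      return 0;
  }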
