https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106165
--- Comment #5 from xeioex <xeioexception at gmail dot com> ---
My question is more practical: `-fexcess-precision=standard` fixes the problem in GCC, but I am left with the same problem with other compilers. At a minimum, I am looking for a way to detect excess precision as early as possible (at configure time). I tried to use FLT_EVAL_METHOD:

t.c
```
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <fenv.h>

int main() {
    printf("%d\n", (int) FLT_EVAL_METHOD);
}
```

1) gcc -o t t.c && ./t
   2
2) gcc -fexcess-precision=standard -o t t.c && ./t
   2

How am I expected to use FLT_EVAL_METHOD correctly?