https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94111
Bug ID: 94111
Summary: Wrong optimization: decimal floating-point infinity casted to double -> zero
Product: gcc
Version: 10.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: ch3root at openwall dot com
Target Milestone: ---

Cast to double of a decimal floating-point infinity gives zero:

----------------------------------------------------------------------
#include <math.h>
#include <string.h>
#include <stdio.h>

int main()
{
    _Decimal32 d = (_Decimal32)INFINITY;

    unsigned u;
    memcpy(&u, &d, sizeof u);
    printf("repr: %08x\n", u);

    printf("cast: %g\n", (double)d);
}
----------------------------------------------------------------------
$ gcc -std=c2x -pedantic -Wall -Wextra test.c && ./a.out
repr: 78000000
cast: inf

$ gcc -std=c2x -pedantic -Wall -Wextra -O3 test.c && ./a.out
repr: 78000000
cast: 0
----------------------------------------------------------------------
gcc x86-64 version: gcc (GCC) 10.0.1 20200305 (experimental)
----------------------------------------------------------------------

The representation is right for infinity in _Decimal32.