http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52428
Janne Blomqvist <jb at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2012-04-24
                 CC|                            |jb at gcc dot gnu.org
     Ever confirmed|0                           |1

--- Comment #2 from Janne Blomqvist <jb at gcc dot gnu.org> 2012-04-24 06:37:16 UTC ---
Confirmed.

I agree that at runtime we should allow full use of the range the hardware provides, the Fortran numerical model be damned.

Also, since the overhead of range checking in string->integer conversion is AFAICS insignificant, I see no reason why it couldn't be enabled all the time. Thus it's IMHO unnecessary to burden users with having to remember yet another option.

In order to implement this, we'd need to consider the MIN values of signed types separately from the MAX values. Perhaps mk-kinds-h.sh should also generate GFC_INTEGER_X_{MAX,MIN} macros. Trying to figure out whether such macros are always available on non-C99 targets as well, I got bogged down in a maze of GCC_HEADER_STDINT, glimits.h and whatnot, so I'm not sure, actually. OTOH, we currently typedef the GFC_{U}INTEGER_X types to the C99 {u}intNN_t types, which are required to have a two's complement representation, so we could just hardcode the values (except for __int128_t, where the compiler apparently doesn't support large enough literals?).
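To illustrate the "treat MIN separately from MAX" point, here is a self-contained sketch in plain C (not the actual libgfortran read routine) of a string->integer conversion that range-checks while still accepting the most negative value of the hardware type. The function name and layout are invented for the example; only the accumulate-in-unsigned-and-adjust-the-limit idea is what's being proposed.

#include <stdint.h>
#include <stdbool.h>

/* Sketch only: parse a decimal string into int64_t, accepting the full
   hardware range, i.e. -9223372036854775808 is valid even though its
   magnitude exceeds HUGE(0_8).  The magnitude is accumulated in an
   unsigned variable and compared against MAX, or MAX + 1 when the
   value is negative.  */
static bool
parse_int64 (const char *s, int64_t *result)
{
  bool negative = false;
  uint64_t v = 0;
  uint64_t limit = (uint64_t) INT64_MAX;     /* 2**63 - 1 */

  if (*s == '-' || *s == '+')
    {
      negative = (*s == '-');
      s++;
    }
  if (negative)
    limit++;                                 /* 2**63 == |INT64_MIN| */

  if (*s == '\0')
    return false;

  for (; *s; s++)
    {
      if (*s < '0' || *s > '9')
        return false;
      uint64_t digit = (uint64_t) (*s - '0');
      /* Reject before accumulating if v * 10 + digit would exceed limit.  */
      if (v > (limit - digit) / 10)
        return false;
      v = v * 10 + digit;
    }

  if (negative)
    /* Avoid overflowing the signed type even when v == 2**63.  */
    *result = (v == 0) ? 0 : -(int64_t) (v - 1) - 1;
  else
    *result = (int64_t) v;
  return true;
}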
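And a minimal sketch of what the hardcoded macros could look like, assuming two's complement as guaranteed for the C99 intNN_t typedefs that GFC_INTEGER_X maps to. GFC_INTEGER_X_{MAX,MIN} are the names proposed above; the HAVE_GFC_INTEGER_16 guard and GFC_UINTEGER_16 are assumptions about how kinds.h spells the kind-16 support test, and the exact spelling would come out of mk-kinds-h.sh.

#define GFC_INTEGER_1_MAX  0x7f
#define GFC_INTEGER_1_MIN  (-GFC_INTEGER_1_MAX - 1)
#define GFC_INTEGER_2_MAX  0x7fff
#define GFC_INTEGER_2_MIN  (-GFC_INTEGER_2_MAX - 1)
#define GFC_INTEGER_4_MAX  0x7fffffff
#define GFC_INTEGER_4_MIN  (-GFC_INTEGER_4_MAX - 1)
#define GFC_INTEGER_8_MAX  0x7fffffffffffffffLL
#define GFC_INTEGER_8_MIN  (-GFC_INTEGER_8_MAX - 1)

#ifdef HAVE_GFC_INTEGER_16
/* No 128-bit integer literals, so build 2**127 - 1 from a shift
   instead, then derive MIN as the usual -MAX - 1.  */
#define GFC_INTEGER_16_MAX \
  ((GFC_INTEGER_16) (((GFC_UINTEGER_16) 1 << 127) - 1))
#define GFC_INTEGER_16_MIN (-GFC_INTEGER_16_MAX - 1)
#endif

That would sidestep both the non-C99-target question and the __int128_t literal limitation.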