https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111741
Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED |RESOLVED
         Resolution|---         |INVALID

--- Comment #2 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
80 bits is the full width of the format, and those 80 bits include 1 sign bit, 64 bits for the mantissa, and 15 bits for the exponent. So anything requiring more than 64 bits of mantissa will start to lose precision in the last digits.