http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48488

--- Comment #1 from Dominique d'Humieres <dominiq at lps dot ens.fr> 2011-04-07 
09:08:01 UTC ---
> In write.c the intended default format for real numbers is documented as:
>
> /* Output a real number with default format.
>    This is 1PG14.7E2 for REAL(4), 1PG23.15E3 for REAL(8),
>    1PG28.19E4 for REAL(10) and 1PG43.34E4 for REAL(16).  */
> // FX -- FIXME: should we change the default format for __float128-real(16)?
>
> This is reasonable, since it reflects the rounded-down number of decimal
> significant digits for each format: 7, 15, 19, 34. Thus any number with less
> decimal digits than the maximum precision always retains its original decimal
> value which is a useful feature.
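For reference, the rounded-down digit counts quoted above (7, 15, 19, 34) follow directly from the significand widths of the underlying binary formats (24, 53, 64 and 113 bits, including the implicit bit). A quick sketch, assuming those widths for REAL(4)/REAL(8)/REAL(10)/REAL(16):

```python
import math

# Significand widths (bits, implicit bit included) assumed for the
# binary formats backing each REAL kind.
for kind, p in [(4, 24), (8, 53), (10, 64), (16, 113)]:
    # floor(p * log10(2)): decimal digits guaranteed to survive a
    # decimal -> binary -> decimal round trip.
    print(f"REAL({kind}): {int(p * math.log10(2))} digits")
```

This prints 7, 15, 19 and 34 respectively, matching the precisions in the documented default formats.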

I think the error is in the documentation: the actual formats are 1PG16.8E2 for
REAL(4), 1PG26.17E3 for REAL(8), 1PG30.20E4 for REAL(10) and 1PG45.35E4 for
REAL(16) as shown by the following modified test

print "(1PG16.8E2)", .3_4 
print *,             .3_4 

print "(1PG26.17E3)", .3_8 
print *,              .3_8 

print "(1PG30.20E4)", .3_10 
print *,              .3_10

print "(1PG45.35E4)", .3_16
print *,              .3_16
end

that gives

  0.30000001    
  0.30000001    
  0.29999999999999999     
  0.29999999999999999     
  0.30000000000000000001      
  0.30000000000000000001      
  0.29999999999999999999999999999999999      
  0.29999999999999999999999999999999999      

The values 8, 17, 20, and 35 (?see FIXME) are chosen such that reading the
default output always returns the original value up to the last bit. There is a
test in the test suite checking this, but AFAIK not for all supported reals; in
particular, I don't think it has been tested for REAL(16).
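The round-trip property for the REAL(8) count can be checked independently: 17 significant decimal digits are always enough to recover an IEEE-754 double exactly. A minimal sketch in Python, whose floats are IEEE doubles:

```python
import random

random.seed(1)
for _ in range(10000):
    x = random.uniform(-1e30, 1e30)
    # Format with 17 significant digits, then parse the decimal
    # string back; the original double must be recovered bit-exactly.
    assert float(f"{x:.17g}") == x
print("all round trips exact")
```

An analogous check of the REAL(4), REAL(10) and REAL(16) counts would need the corresponding storage formats, which Python does not expose natively.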
