http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48488

--- Comment #2 from Thomas Henlich <thenlich at users dot sourceforge.net> 
2011-04-07 11:22:49 UTC ---
OK, I have now found that this was changed in
http://gcc.gnu.org/viewcvs?view=revision&revision=128967

The testcase says:

> ! This tests that the default formats for formatted I/O of reals are
> ! wide enough and have enough precision, by checking that values can
> ! be written and read back.

However, the number of significant digits selected in the current version
(8, 17, 20, 35) is not large enough to guarantee that. See IEEE 754-2008:

For the purposes of discussing the limits on correctly rounded conversion,
define the following quantities:
...
―   for binary32, Pmin(binary32) = 9
―   for binary64, Pmin(binary64) = 17
―   for binary128, Pmin(binary128) = 36
―   for all other binary formats bf, Pmin(bf) = 1 + ceiling(p×log10(2)), where
p is the number of significant bits in bf
...
Conversions from a supported binary format bf to an external character sequence
and back again results in a copy of the original number so long as there are at
least Pmin(bf) significant digits specified and the rounding-direction
attributes in effect during the two conversions are round to nearest
rounding-direction attributes.
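
For reference, evaluating the quoted formula reproduces the Pmin values above
(and gives 21 for the x86 extended format); a minimal Fortran sketch, assuming
the usual significand sizes p = 24, 53, 64, 113 for binary32, binary64, x86
extended and binary128:

program pmin_demo
  implicit none
  ! significand sizes p of binary32, binary64, x86 extended, binary128
  integer, parameter :: p(4) = [24, 53, 64, 113]
  integer :: i
  do i = 1, 4
     ! Pmin(bf) = 1 + ceiling(p*log10(2)), per IEEE 754-2008
     print '(a, i3, a, i2)', 'p =', p(i), ',  Pmin =', &
          1 + ceiling(p(i) * log10(2.0d0))
  end do
end program pmin_demo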

I see two possibilities:

A. We aim to make the default format useful for human readers, so that e.g. 0.3
is always represented as 0.3000... (the sketch after option B below shows the
difference for binary32). In that case we must choose P(bf) = floor(p×log10(2))

P = {7, 15, 19, 34}

B. We aim to make the default format useful for round-trip conversion, so that
writing a binary value and reading it back always yields the original binary
value. In that case we must choose P(bf) = 1 + ceiling(p×log10(2))

P = {9, 17, 21, 36}
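
For illustration, a minimal sketch of the difference for binary32 (the ESw.d
edit descriptor prints d+1 significant digits):

program default_digits
  implicit none
  real :: x = 0.3
  ! option A: floor(24*log10(2)) = 7 significant digits, "human readable"
  print '(es14.6)', x   ! prints 3.000000E-01
  ! option B: Pmin(binary32) = 9 significant digits, round-trip safe
  print '(es16.8)', x   ! prints 3.00000012E-01
end program default_digits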

I personally prefer A.

Currently the values fall somewhere between the two (except for real64, where
17 already matches B).
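
For real64 the current 17 digits equal Pmin(binary64), so a round trip of the
kind the testcase performs is guaranteed; a minimal sketch (es24.16 gives
1 + 16 = 17 significant digits):

program roundtrip
  implicit none
  double precision :: x, y
  character(len=32) :: buf
  call random_number(x)
  ! write with Pmin(binary64) = 17 significant digits, then read back
  write (buf, '(es24.16)') x
  read (buf, *) y
  print *, x == y   ! T for every binary64 value
end program roundtrip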
