On 5/13/2016 5:49 PM, Timon Gehr wrote:
Nonsense. That might be true for your use cases. Others might actually depend on
IEEE 754 semantics in non-trivial ways. Higher precision for temporaries does not
imply higher accuracy for the overall computation.

Of course it implies it.

An anecdote: a colleague of mine was once doing a chained calculation. At every step he rounded to 2 digits after the decimal point, because 2 digits of precision was enough for anybody. I carried out the same calculation to the maximum precision of the calculator (10 digits). He simply could not understand why his result was off by a factor of 2, which was a couple hundred times his individual roundoff error.
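
To make the effect concrete, here's a made-up chained calculation (not the one from the anecdote) in C: a value grows by 0.4% per step for 200 steps, once rounding every intermediate result to 2 decimal places and once carrying full double precision. The rounded chain never moves off 1.00, while the full-precision one comes out around 2.22, off by roughly the factor of 2 from the anecdote.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical illustration, not the calculation from the anecdote:
       grow a value by 0.4% per step for 200 steps, once rounding every
       intermediate result to 2 decimal places, once keeping full
       double precision. */
    static double round2(double x) { return round(x * 100.0) / 100.0; }

    int main(void)
    {
        double full = 1.0, rounded = 1.0;
        for (int i = 0; i < 200; ++i) {
            full    = full * 1.004;
            rounded = round2(rounded * 1.004);  /* 1.004 rounds back to 1.00 */
        }
        printf("full precision      : %.6f\n", full);    /* about 2.22   */
        printf("rounded at each step: %.6f\n", rounded); /* stays at 1.00 */
        return 0;
    }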


E.g., correctness of double-double arithmetic is crucially dependent on correct
rounding semantics for double:
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

Double-double has its own peculiar issues, and is not relevant to this 
discussion.
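
For readers who don't follow the link: the building block double-double rests on is an error-free transformation such as Knuth's two-sum, sketched here in C. Its guarantee (s + err equals a + b exactly) holds when each double operation is rounded once, to double precision, which is the property the quoted text is pointing at.

    /* Knuth's two-sum: an error-free transformation. After the call,
       s + err equals a + b exactly, provided every double operation
       here is rounded once, to double precision. Double-double builds
       its high/low pairs out of steps like this one. */
    void two_sum(double a, double b, double *s, double *err)
    {
        *s = a + b;
        double bb = *s - a;
        *err = (a - (*s - bb)) + (b - bb);
    }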


Also, it seems to me that for e.g.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
the result can actually be made less precise by adding casts to higher precision
and truncations back to lower precision at appropriate places in the code.

I don't see any support for your claim there.
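
For reference, the loop the quoted link describes is plain compensated summation; a generic C sketch, not taken from any particular library:

    #include <stddef.h>

    /* Compensated (Kahan) summation: c accumulates the low-order bits
       that plain double addition drops when a small term is added to a
       large running sum. */
    double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0, c = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double y = x[i] - c;   /* re-inject the previously lost bits */
            double t = sum + y;    /* low-order bits of y are lost here  */
            c = (t - sum) - y;     /* recover (negated) what was lost    */
            sum = t;
        }
        return sum;
    }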


And even if higher precision helps, what good is a "precision-boost" that e.g.
disappears on 64-bit builds and then creates inconsistent results?

That's why I was thinking of putting 128-bit floats into the compiler 
internals.


Sometimes reproducibility/predictability is more important than maybe making
fewer rounding errors sometimes. This includes reproducibility between CTFE and
runtime.

A more accurate answer should never cause your algorithm to fail. It's like saying that putting better parts in your car will cause the car to fail.


Just actually comply to the IEEE floating point standard when using their
terminology. There are algorithms that are designed for it and that might stop
working if the language does not comply.

Conjecture. I've written FP algorithms (from Cody+Waite, for example), and none of them degraded when using more precision.
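
For illustration, here is a sketch of the kind of Cody & Waite routine meant: argument reduction for sin(x) using a two-piece split of pi. The constants are the textbook split, not necessarily the ones in the book's routines. Carrying the intermediates at higher precision can only shrink the error in the k * pi_lo term here.

    #include <math.h>

    /* Cody & Waite style argument reduction for sin(x), x of moderate
       size. pi_hi has only a few significand bits, so k * pi_hi is
       computed exactly and x - k * pi_hi cancels without error; the
       remaining rounding error lives in the k * pi_lo term. */
    double reduce_arg(double x, double *k_out)
    {
        static const double pi    = 3.14159265358979323846;
        static const double pi_hi = 3.140625;              /* 201/64, exact */
        static const double pi_lo = 9.6765358979323846e-4; /* pi - pi_hi    */
        double k = nearbyint(x / pi);
        *k_out = k;
        return (x - k * pi_hi) - k * pi_lo;
    }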


Consider that the 8087 has been operating at 80-bit precision by default for 30 years. I've NEVER heard of anyone getting actual bad results from this. They have complained that their test suites, which tested for the less accurate results, broke. They have complained about the speed of x87. And Intel has been trying to get rid of the x87 forever. Sometimes I wonder if there's a disinformation campaign about more accuracy being bad, because it smacks of nonsense.
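
As an aside, anyone who wants to see what their own C compiler assumes about intermediate precision can check FLT_EVAL_METHOD; a value of 2 is the x87 behavior described above (a small C sketch, nothing D-specific):

    #include <float.h>
    #include <stdio.h>

    /* FLT_EVAL_METHOD == 2 means float and double expressions are
       evaluated in long double (80-bit extended on x86). */
    int main(void)
    {
        printf("FLT_EVAL_METHOD      = %d\n", (int)FLT_EVAL_METHOD);
        printf("LDBL_MANT_DIG (bits) = %d\n", (int)LDBL_MANT_DIG);
        return 0;
    }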

BTW, I once asked Prof Kahan about this. He flat out told me that the only reason to downgrade precision was if storage was tight or you needed it to run faster. I am not making this up.
