On 1/5/2021 9:57 PM, Timon Gehr wrote:
> Anyway, I wouldn't necessarily say occasional accidental correctness is the only upside, you also get better performance and simpler code generation on the deprecated x87. I don't see any further upsides though, and for me, it's a terrible trade-off, because possibility of incorrectness and lack of portability are among the downsides.

> There are algorithms in Phobos that can break when certain operations are computed at a higher precision than specified. Higher does not mean better; not all adjectives specify locations on some good/bad axis.

As far as I can tell, the only algorithms that are incorrect with extended precision intermediate values are ones specifically designed to tease out the roundoff to the reduced precision.

I don't know of any straightforward algorithms, which is what most people write, that are made worse by more precision.

For example, if you're summing values in an array, the straightforward approach of simply summing them will not become incorrect with extended precision. In fact, it is likely to be more correct.

> I want to execute the code that I wrote, not what you think I should have
> instead written, because sometimes you will be wrong.

With programming languages, it does not matter what you think you wrote. What matters is how the language semantics are defined to work. When writing professional numerical code, one must understand those semantics carefully, knowing that floating point does *not* work like 7th grade algebra. Different languages can and do behave differently, too.
