On 5/13/2016 10:46 PM, Ola Fosheim Grøstad wrote:
> On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
>> BTW, I once asked Prof Kahan about this. He flat out told me that the only
>> reason to downgrade precision was if storage was tight or you needed it to
>> run faster. I am not making this up.
> He should have been aware of reproducibility, since people use fixed point to
> achieve it; if he wasn't, then shame on him.
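
For context, the reproducibility argument for fixed point is mechanical: integer
arithmetic has no rounding modes, no extended-precision temporaries, and no
platform-dependent intermediate widths, so the same inputs give bit-identical
outputs everywhere. A minimal sketch in Java (the class name and scaling factor
are mine, purely illustrative):

public class FixedPoint {
    // 4 fractional decimal digits: values are integer multiples of 1/10_000
    static final long SCALE = 10_000;

    // (a/SCALE) * (b/SCALE), rescaled back to SCALE; note the division
    // truncates, so real code would round
    static long mul(long a, long b) {
        return a * b / SCALE;
    }

    public static void main(String[] args) {
        long price = 19_995;                 // represents 1.9995
        long qty   = 30_000;                 // represents 3.0000
        System.out.println(mul(price, qty)); // 59985, i.e. 5.9985, on every CPU
    }
}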
Kahan designed the x87 and wrote the IEEE 754 standard, so I'd do my homework
before telling him he is wrong about basic floating point stuff.
> In Java, all compile-time constants are evaluated using strict settings, and
> it provides the keyword «strictfp» to get strict behaviour for a particular
> class or method.
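
To make the «strictfp» point concrete, here is a minimal sketch (the class and
method names are mine). Under Java's relaxed default, an x87-based JVM was
permitted to hold intermediate double values with the x87's extended exponent
range; marking a method (or a whole class) strictfp forces every intermediate
result to be rounded to a true 64-bit double. Since Java 17, strict semantics
are always in effect and the keyword is redundant.

public class StrictFpDemo {
    // Default semantics: the intermediate a * b may be held with
    // extended exponent range on x87 hardware.
    static double defaultEval(double a, double b, double c) {
        return a * b + c;
    }

    // FP-strict semantics: a * b is rounded to a 64-bit double before
    // c is added, identically on every platform.
    strictfp static double strictEval(double a, double b, double c) {
        return a * b + c;
    }
}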
What happened with Java was interesting. The original spec required double
arithmetic to be done entirely in double precision. This wound up failing all
over the place on x86 machines, whose x87 (as I explained) computes temporaries
to 80 bits. Forcing the x87 to round intermediate values to double made Java
run much slower, and Sun was forced to back off on that requirement.
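
A hedged sketch of the observable difference (the names and test value are
mine): on a pre-SSE x87 JVM, the non-strict method could carry the intermediate
product in a register whose exponent range is wide enough to survive a
transient overflow, while the strictfp version must overflow to infinity. On a
modern SSE2-based JVM, both methods print Infinity.

public class TemporariesDemo {
    // Non-strict: on x87, (d * d) could live in an 80-bit register and
    // survive a value far beyond double's ~1.8e308 limit.
    static double loose(double d) {
        return (d * d) / d;
    }

    // Strict: (d * d) must round to a 64-bit double first, so it
    // overflows to infinity and the division cannot undo that.
    strictfp static double strict(double d) {
        return (d * d) / d;
    }

    public static void main(String[] args) {
        double big = 1.0e200;            // big * big = 1.0e400, > double max
        System.out.println(loose(big));  // historically could print 1.0E200
        System.out.println(strict(big)); // Infinity, on every platform
    }
}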