On 5/15/2016 6:49 AM, Joseph Rushton Wakeling wrote:
However, that's not the same as saying that the choice of precision should be in
the hands of the hardware, rather than the person building and running the
program.


I, for one, would not like to have to spend time working out why my
program was producing different results just because I (say) switched from a
machine supporting at most 80-bit floats to one supporting 128-bit floats.

If you wrote it "to not break if the floating-point precision is enhanced, and to allow greater precision to be used when the hardware supports it", then what's the problem?

Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?
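For concreteness, here is a minimal D sketch of the kind of divergence being described: the same sum accumulated in a 64-bit double and in real comes out with different low-order digits. The loop bound and the series are arbitrary illustrations, and the real result depends on what the target's real actually is (80-bit x87 on x86, possibly 64 or 128 bits elsewhere).

import std.stdio;

void main()
{
    // Accumulate the same series in a 64-bit double and in real,
    // which is whatever the hardware offers (80-bit x87 on x86).
    double dsum = 0.0;
    real   rsum = 0.0;

    foreach (i; 1 .. 10_000_001)
    {
        dsum += 1.0 / i;   // accumulator rounded back to 64 bits every iteration
        rsum += 1.0L / i;  // accumulator kept at the hardware's full precision
    }

    writefln("double accumulator: %.17g", dsum);
    writefln("real   accumulator: %.21g", rsum);
}

Neither printout is wrong; they simply differ in the trailing digits, which is the reproducibility question at issue here.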
