On Monday, 16 May 2016 at 10:29:02 UTC, Andrei Alexandrescu wrote:
On 5/16/16 2:46 AM, Walter Bright wrote:
I used to do numerics work professionally. Most of the troubles I had were catastrophic loss of precision. Accumulated roundoff errors when doing numerical integration or matrix inversion are major problems. 80 bits helps dramatically with that.

Aren't good algorithms helping dramatically with that?

Also, do you have a theory that reconciles your assessment of the importance of 80-bit math with the fact that the computing world is moving away from it? http://stackoverflow.com/questions/3206101/extended-80-bit-double-floating-point-in-x87-not-sse2-we-dont-miss-it


Andrei

Regardless of whether the compiler actually does it or not, the argument that extra precision is a problem is self-defeating. I don't think arguments for speed have been raised so far.
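
For what it's worth, here is a minimal C sketch (not from the thread, values and counts picked arbitrarily) that illustrates both sides of the quoted exchange: a naive double accumulator drifts visibly after ~1e8 additions, an 80-bit long double accumulator (where the target actually provides one, e.g. x87) drifts far less, and Kahan compensated summation recovers nearly all of the lost bits while staying in plain double, which is the sort of "good algorithm" Andrei alludes to.

/* Compare accumulated roundoff in three ways of summing the same value.
 * Compile without -ffast-math, or the Kahan compensation may be
 * optimized away. On targets where long double == double (e.g. MSVC),
 * the second result will match the first. */
#include <stdio.h>

int main(void)
{
    const long   n = 100000000L;   /* 1e8 terms */
    const double x = 0.1;          /* nearest double to 0.1 */

    /* Naive accumulation in double: one rounding per addition. */
    double naive = 0.0;
    for (long i = 0; i < n; ++i)
        naive += x;

    /* Same loop, but with an extended-precision accumulator
     * (64-bit significand on x87). */
    long double extended = 0.0L;
    for (long i = 0; i < n; ++i)
        extended += x;

    /* Kahan compensated summation in plain double: the compensation
     * term c carries the low-order bits lost by each addition. */
    double sum = 0.0, c = 0.0;
    for (long i = 0; i < n; ++i) {
        double y = x - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }

    /* Reference: n * x, computed with a single rounding. */
    double reference = (double)n * x;

    printf("naive double    : %.17g (error %g)\n", naive, naive - reference);
    printf("long double     : %.17g (error %g)\n",
           (double)extended, (double)extended - reference);
    printf("Kahan in double : %.17g (error %g)\n", sum, sum - reference);
    return 0;
}

On an x87 build the naive double sum is off by a few hundredths, the long double accumulator by orders of magnitude less, and the Kahan sum by about one ulp of the result. So extra precision in the accumulator and a better algorithm both attack the same roundoff problem; neither makes the extra precision itself harmful.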
