On 01.07.2014 00:18, Andrei Alexandrescu wrote:
On 6/30/14, 2:20 AM, Don wrote:
For me, a stronger argument is that in many cases you can get *higher* precision using doubles. The reason is that FMA gives you an intermediate value with 128 bits of precision; it's available in SIMD but not on x87.

So, if we want to use the highest precision supported by the hardware,
that does *not* mean we should always use 80 bits.

I've experienced this in CTFE, where the calculations are currently done in 80-bit precision. I've seen cases where the 64-bit runtime results were more accurate, because of those 128-bit FMA temporaries. 80 bits are not enough!
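
To make Don's point concrete, here is a minimal sketch in C++ (the thread is about D, but std::fma from <cmath> shows the single-rounding behaviour directly; the constant is arbitrary, chosen only so that the exact square needs more than 53 significand bits):

#include <cmath>
#include <cstdio>

int main()
{
    // a*a = 1 + 2^-26 + 2^-54 exactly; the last term does not fit into a
    // 53-bit double significand, so the plain product rounds it away.
    // Compile with -ffp-contract=off so the compiler does not itself
    // contract a*a - 1.0 into an FMA.
    double a = 1.0 + std::ldexp(1.0, -27);

    double rounded = a * a - 1.0;          // product rounded before subtracting
    double fused   = std::fma(a, a, -1.0); // product kept exact inside the FMA

    std::printf("%a\n%a\n", rounded, fused); // 0x1p-26 vs 0x1.0000001p-26
    return 0;
}

The fused result keeps the 2^-54 term that the separately rounded product loses; that extra intermediate width is what Don is referring to.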

Interesting. Maybe we should follow a simple principle: define overloads and intrinsic operations such that real is only used if (a) it is requested explicitly, or (b) it brings an actual advantage.
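
A tiny sketch of what that principle could look like, in C++ with long double standing in for D's real (the function and its name are hypothetical, purely illustrative, not an actual Phobos design):

#include <cmath>
#include <cstddef>

// Default path: plain doubles, which live in SIMD registers and can use
// hardware FMA. This is what callers get unless they ask otherwise.
double dot(const double* x, const double* y, std::size_t n)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum = std::fma(x[i], y[i], sum); // one rounding per element
    return sum;
}

// Extended precision is used only when the caller explicitly passes
// long doubles, i.e. rule (a): requested explicitly.
long double dot(const long double* x, const long double* y, std::size_t n)
{
    long double sum = 0.0L;
    for (std::size_t i = 0; i < n; ++i)
        sum += x[i] * y[i]; // 80-bit x87 arithmetic, no FMA
    return sum;
}

Overload resolution picks the double version for double arguments, so the extended-precision path never kicks in silently.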

gcc seems to use GMP for (all) its compile-time calculations. Is this for cross-compile unification of calculation results, or just for better results in general, or both?
