On Monday, 16 May 2016 at 10:25:33 UTC, Andrei Alexandrescu wrote:
I'm not sure about this. My understanding is that all SSE hardware supports 32- and 64-bit floats, and the 80-bit hardware is pretty much cut-and-pasted from the x87 days without anyone really looking into improving it. And that's been the case for more than a decade. Is that correct?

Pretty much. On the OS side, Windows has officially deprecated x87 on the 64-bit version in desktop mode, and it's flat-out forbidden in kernel mode. All of Intel's development focus has gone into improving the SSE/AVX instruction set and pipeline.

And on the gamedev side, we generally go for fast over precise - or, more to the point, for an acceptable loss in precision. The C++ codegen spits out SSE/AVX code by default in our builds, and I hand-optimise certain inlined functions with the appropriate intrinsics. SIMD is an even more appropriate point to bring up here: gaming is trending towards more parallel operations, and operating on a single float at a time is not the way to get the best performance out of your system.
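
As a rough illustration of that kind of hand-optimised path (the function and names here are hypothetical, and it assumes SSE via <xmmintrin.h>), this scales four floats per instruction instead of one:

#include <xmmintrin.h>

// Hypothetical hot loop: scale n floats by factor, four per iteration.
// Assumes n is a multiple of 4 to keep the sketch short.
void scale(float* data, int n, float factor)
{
    __m128 f = _mm_set1_ps(factor);                // broadcast factor into all 4 lanes
    for (int i = 0; i < n; i += 4)
    {
        __m128 v = _mm_loadu_ps(data + i);         // load 4 floats (unaligned load)
        _mm_storeu_ps(data + i, _mm_mul_ps(v, f)); // multiply and store 4 at once
    }
}

Real code would also handle the scalar remainder and alignment, but the point stands: one instruction does the work of four.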

This is one of those things where I can see the point of the D compiler doing things its own way - but only when it expects to operate in a pure D environment. We have heavy interop between C++ and D. If simple functions can give different results at compile time than at run time, with no way for me to configure the compiler on both sides, what actual benefit does that give me?
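
To make that concrete, here's a minimal sketch of the kind of divergence I mean, using double as a stand-in for a wider intermediate precision like x87's 80-bit real (the values are contrived so the two evaluations round differently):

#include <cstdio>

int main()
{
    float a = 1e8f, b = 4.0f, c = 4.0f;

    // Rounded to 32 bits after every operation, as SSE codegen does;
    // both additions round the +4 away (ties-to-even), leaving 1e8f.
    float narrow = (a + b) + c;

    // Intermediates kept wide and rounded once at the end, as an
    // extended-precision compile-time evaluation might do; this
    // yields 1.00000008e8f.
    float wide = static_cast<float>((static_cast<double>(a) + b) + c);

    std::printf("narrow=%.9g wide=%.9g equal=%d\n", narrow, wide, narrow == wide);
    return 0;
}

The two expressions are identical in source, yet they disagree once one side keeps wider intermediates - which is exactly the problem when I can't make both compilers agree.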
