On Tue, Jun 23, 2009 at 8:44 AM, Lars T. Kyllingstad <pub...@kyllingen.nospamnet> wrote:
> Is there ever any reason to use float or double in calculations? I mean,
> when does one *not* want maximum precision? Will code using float or
> double run faster than code using real?
As Witold mentioned, float and double are the only floating-point types that SSE (and similar SIMD instruction sets on other architectures) can deal with. Furthermore, most 3D graphics hardware only works with single- or even half-precision (16-bit) floats, so it makes no sense to use 64- or 80-bit floats in those cases. Also keep in mind that 'real' is simply defined as the largest floating-point type the hardware supports. On x86, that's the 80-bit x87 extended-precision type, but on most other architectures it's the same as double anyway.
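
For instance, you can ask the compiler what each type gives you on your target. A minimal sketch (note that real.sizeof includes padding on many targets: 10, 12, or 16 bytes depending on platform and compiler, even though only 80 bits are significant on x86):

import std.stdio;

void main()
{
    // .sizeof is in bytes, .dig is the number of decimal digits of
    // precision each type can represent.
    writeln("float:  ", float.sizeof * 8, " bits, ", float.dig, " digits");
    writeln("double: ", double.sizeof * 8, " bits, ", double.dig, " digits");
    // On x86 real is the 80-bit x87 type (real.dig == 18); on most
    // other targets it is the same as double (real.dig == 15).
    writeln("real:   ", real.sizeof * 8, " bits, ", real.dig, " digits");
}

If real.dig prints the same as double.dig on your machine, using real buys you nothing over double there.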