Hi, all.
Recently, I found that float (32-bit floating point) is faster than double
(64-bit) when calculating division. Moreover, division is much slower than
multiplication.
750_um.pdf, Table 6-7. Floating-Point Instructions (p.272-):
  fadd   1-1-1    fadds   1-1-1
  fdiv   31       fdivs   17
But some math libraries may not support the float variants, i.e., they are
neglected. Actually, ANSI C (C89) only defines the double versions of the math
functions, so sinf may be just:
#define sinf(X) (float)sin((double)X)
Calling sin() in a tight loop is an indication of bigger problems :-) (OK, it
was just an example)
And pow(x,y) is much slower still.
In article [EMAIL PROTECTED],
Osamu Shigematsu [EMAIL PROTECTED] wrote:
[snip]
What does everyone think about unifying the float size to 32 bits? Does it
have merit on x86, or other CPUs?
On StrongARM I tried REAL_IS_FLOAT, REAL_IS_LONG_DOUBLE and none of
both. It didn't make any difference in the resulting encoding time.
Osamu Shigematsu wrote on Wed, 26 Jan 2000:
Hi, all.
Recently, I found that float (32-bit floating point) is faster than double
(64-bit) when calculating division. Moreover, division is much slower than
multiplication.
snip
What does everyone think about unifying the float size to 32 bits? It
on 00.1.27 6:27 AM, Stefan Bellon at [EMAIL PROTECTED] wrote:
On StrongARM I tried REAL_IS_FLOAT, REAL_IS_LONG_DOUBLE and none of
both. It didn't make any difference in the resulting encoding time.
Greetings,
On my Mac, just by retyping typedef double FLOAT8 to typedef float FLOAT8, it
became
What does everyone think about unifying the float size to 32 bits? Does it
have merit on x86, or other CPUs?
On x86, I found double to be not significantly slower than float with gcc.
Given C's preference for doubles, I tend to code for double, even (especially)
when performance is an issue.
Monty