Hi, all.

Recently I found that float (32-bit floating point) division is faster than
double (64-bit) division. Moreover, division is extremely slow compared to
multiplication.

750_um.pdf, Table 6-7. Floating-Point Instructions (p. 272-)

 fadd  1-1-1    fadds   1-1-1
 fdiv   31      fdivs     17
 fmadd 2-1-1    fmadds  1-1-1
 fmul  2-1-1    fmuls   1-1-1
 fsub  1-1-1    fsubs   1-1-1

MPC7400UM_prel.pdf

 fadd  1-1-1    fadds   1-1-1
 fdiv   31      fdivs     17
 fmadd 1-1-1    fmadds  1-1-1
 fmul  1-1-1    fmuls   1-1-1
 fsub  1-1-1    fsubs   1-1-1

With Metrowerks CodeWarrior, the compiler does not convert division by a
constant into multiplication, so we had better do this optimization at the
source code level. I suspect some other C compilers do not perform this
optimization either.

And if we don't add an "f" suffix to constants, they are treated as double,
which also slows things down.

    float x, y, z;
    x = y / z / 2.0;      /* bad: two divisions, and 2.0 is a double constant */
    x = y * 0.5f / z;     /* good: one float multiply, one float divide */
    x = y / ( z * 2.0f ); /* also good: stays in single precision */
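
For loop-heavy code, the same idea can be applied by hand. A minimal sketch
(the function and variable names are just for illustration):

    #include <stddef.h>

    /* Scale an array by 1/divisor.  Dividing inside the loop would cost one
       fdivs per element; multiplying by a precomputed reciprocal costs one
       fmuls per element instead. */
    static void scale_by_reciprocal(float *data, size_t n, float divisor)
    {
        const float inv = 1.0f / divisor; /* one division, outside the loop */
        size_t i;

        for (i = 0; i < n; i++)
            data[i] *= inv;
    }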
 
But some math libraries do not really support float; out of negligence they
just define something like:

    #define sinf(X) (float)sin((double)X)
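
To make the overhead concrete, here is a minimal sketch (the function name
apply_sin is made up, and I assume a <math.h> with no real single-precision
routines) of what that macro means inside a tight loop:

    #include <math.h>
    #include <stddef.h>

    /* The "negligent" definition quoted above. */
    #define sinf(X) (float)sin((double)(X))

    /* Every iteration converts float -> double, computes sin() in double
       precision, and converts the result back to float, even though the
       caller only needs single precision. */
    static void apply_sin(float *data, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++)
            data[i] = sinf(data[i]);
    }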

So, as long as we avoid the ANSI <math.h> functions in loop-heavy code (where
the float <-> double conversion overhead is large), I think using double has
no merit. Its data size is twice that of float, so I suspect it also causes
more cache misses and slower loading/storing between RAM and registers. And
the PowerPC G4's vector unit does not support double.

And pow(x, y) is much slower than exp(y * log(x)) on Macintosh (with MathLib
on OS 9, on both G4 and G3).
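
A minimal sketch of that replacement (the name fast_pow is just for
illustration; the identity only holds for x > 0, and the rounding may differ
slightly from pow()):

    #include <math.h>

    /* pow(x, y) rewritten as exp(y * log(x)).  Valid only for x > 0. */
    static double fast_pow(double x, double y)
    {
        return exp(y * log(x));
    }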

What does everyone think about unifying the floating-point size to 32-bit?
Does it also have merit on x86 or other CPUs?

-- 
Osamu Shigematsu
mailto:[EMAIL PROTECTED]

--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )
