>If our hardware can do 128-bit
>floating point operations in twice the time needed for similar 64-bit
>operations, then our overall time has improved by a factor of 2.

A strange calculation. I don't see how you arrived at this result.

What I have always wondered is how fast an FPU operation can actually be done.

For normal integer addition there is an algorithm in O(n) [with n the bit 
length of the numbers], and even one in O(log n) with a carry-lookahead scheme 
(I saw it in my digital engineering textbook). For integer multiplication you 
can generate the n*n matrix of partial products in a single step and then add 
the rows in about 2*n*log n operations [O(n*log n)]. Maybe there are even 
faster methods for both.
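Just to make the two schemes concrete, here is a toy Python sketch of both: a ripple-carry adder (the O(n) algorithm, one carry step per bit) and the partial-product matrix multiply (generate all n*n AND terms "in one step", then sum the shifted rows). This only simulates the logic; real hardware does the row summation with adder trees, and the function names are my own, not from any textbook.

```python
def to_bits(x, n):
    """Integer -> list of n bits, least significant bit first."""
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    """List of bits (LSB first) -> integer."""
    return sum(b << i for i, b in enumerate(bits))

def ripple_carry_add(a_bits, b_bits):
    """O(n) addition: the carry ripples through one full adder per bit."""
    result, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        result.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    result.append(carry)          # final carry-out bit
    return result

def partial_product_multiply(a_bits, b_bits):
    """Form the n*n AND matrix of partial products, then sum the rows."""
    rows = [[a & b for a in a_bits] for b in b_bits]
    total = 0
    for shift, row in enumerate(rows):
        total += from_bits(row) << shift   # row j is weighted by 2^j
    return total
```

The carry-lookahead trick that gets addition down to O(log n) computes the "generate" (a & b) and "propagate" (a ^ b) signals for all bit positions at once and combines them in a tree instead of the sequential carry loop above.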

But for FPU addition you need exponent comparison, alignment shifts, and so 
on. And for FPU multiplication you need truncation and rounding. Floating-point 
multiplication needs fewer extra steps, but its first step (the pure 
significand multiplication) takes a lot of time.

So it's difficult to say how much time a 128-bit FPU operation takes to 
perform, but only twice as long as a 64-bit one ...?

I will ask my systems programming teaching assistant about this.

Bojan 
