At 04:01 PM 11/3/98 +0100, Bojan Antonovic wrote:
>>If our hardware can do 128-bit
>>floating point operations in twice the time needed for similar 64-bit
>>operations, then our overall time has improved by a factor of 2.
>
>Strange calculation. But I don't know how you came to this result.

Actually it is more than a factor of 2.  I think this has been explained, but
I'll try again.  First multiply two general 5-digit numbers by hand, then
multiply two 10-digit numbers.  The second multiplication will take you about
four times as long, because the manual method of multiplication takes
O(n^2) time, where n is the number of digits.
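To make that concrete, here is a small sketch (the function name is mine, not from the thread) that counts the single-digit multiplications the pencil-and-paper method performs -- one for each pair of digits:

```python
def schoolbook_digit_multiplies(a, b):
    """Count the single-digit multiplications the pencil-and-paper
    (schoolbook) method performs when multiplying a by b."""
    return len(str(a)) * len(str(b))

# A 5-digit by 5-digit product needs 25 digit-multiplies;
# a 10-digit by 10-digit product needs 100 -- four times as many.
print(schoolbook_digit_multiplies(12345, 67890))            # 25
print(schoolbook_digit_multiplies(1234567890, 9876543210))  # 100
```

Doubling the number of digits quadruples the work, which is exactly the O(n^2) behavior described above.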

Testing Mersenne primes involves a lot of multiplication (and division) of
very large numbers, but better algorithms (FFT-based multiplication) are used,
so the multiplication time is something like O( n log(n) log(log(n)) ).

32-bit processors handle 32-bit arithmetic as quickly as they handle 16-bit or
8-bit arithmetic, and 128-bit processors will handle 128-bit arithmetic as
easily as 64-bit arithmetic.  However, using 128-bit words instead of 64-bit
words cuts the number of digits in the multiple-precision arithmetic in half.
Substitute n/2 for n in n*log(n)*log(log(n)) and you will see that the ratio
is more than 2, so using the larger word size will be more than twice as fast
for multiplication and division.  (It will be exactly twice as fast for
addition and subtraction.)
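You can check the ratio numerically.  The sketch below (my own illustration, using n*log(n)*log(log(n)) as the cost model) compares the cost of multiplying a number held in n words against the same number held in n/2 wider words:

```python
import math

def mul_time(n):
    # Cost model for FFT-based multiplication of n-word numbers:
    # n * log(n) * log(log(n))
    return n * math.log(n) * math.log(math.log(n))

# Halving the word count (by doubling the word size) gives a
# speedup slightly greater than 2.
n = 1_000_000
ratio = mul_time(n) / mul_time(n // 2)
print(ratio)  # a bit more than 2
```

The extra gain beyond the factor of 2 comes from the log(n) and log(log(n)) factors shrinking when n is halved.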



+----------------------------------------------------------+
| Jud McCranie  [EMAIL PROTECTED] or @camcat.com |
|                                                          |
| Where a calculator on the ENIAC is equipped with 19,000  |
| vacuum tubes and weighs 30 tons, computers in the future |
| may have only 1,000 vacuum tubes and perhaps only weigh  |
| 1.5 tons.    -- Popular Mechanics, March 1949.           |
+----------------------------------------------------------+
