On 15 May 2001, at 18:22, Daran wrote:
> GIMPS clients use the spare capacity of the primary processing resource within
> any computer:- the CPU(s). But most modern PCs have another component capable
> of performing rapid and sophisticated calculations:- the GPU on the graphics
> accelerator. Is there any way that the GPU can be programmed to perform GIMPS
> processing when otherwise not in use?
There are several practical difficulties:
(a) GPUs are usually 32-bit, single-precision hardware. Nothing more
is needed for graphics displays. Running multiple-precision
arithmetic using a floating-point FFT on single-precision hardware is
not very effective; you need about 20 guard bits, and with a single-
precision mantissa being only 24 bits long, you don't have very much
left to work with.
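The arithmetic behind point (a) can be sketched as follows. This is
only an illustration of the post's own rule of thumb (about 20 of the
mantissa bits lost to rounding error), not a derived error bound:

```python
# Rough capacity estimate for a floating-point FFT used in
# multiple-precision multiplication. Following the post's rule of
# thumb, roughly 20 "guard" bits of the mantissa are consumed by
# accumulated rounding error; the rest can hold data. The 20-bit
# figure is the author's estimate, not a precise bound.

def usable_bits_per_word(mantissa_bits, guard_bits=20):
    """Bits of the big number that can be packed into each FFT word."""
    return max(mantissa_bits - guard_bits, 0)

single = usable_bits_per_word(24)   # IEEE 754 single: 24-bit mantissa
double = usable_bits_per_word(53)   # IEEE 754 double: 53-bit mantissa

print(single)  # 4 bits/word -- far too few to be practical
print(double)  # 33 by this crude rule; real LL code packs ~15-20,
               # since the error also grows with FFT length
```

At 4 bits per word, a 10M-bit number would need an FFT millions of
points long, so single precision is hopeless even before speed is
considered.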
(b) GPUs run code from ROM. The system as a whole transfers data but
not instructions from the main CPU to the GPU over the video card's
bus connector. There is no easy way to change the code that runs on
the processor(s) on the graphics card.
(c) Even if the GPU were programmable from the host system, there are
so many different graphics card controller architectures that writing
code to work nearly universally would involve something close to an
impossible amount of effort. And graphics cards keep changing, fast!
(d) Don't forget that running an LL test on a 10M exponent involves
about 8 MBytes of data storage. This is not a serious problem given
the amount of system memory available to most systems capable of
completing an LL test on a 10M exponent in a reasonable time, but
"robbing" 8 MBytes from the much more limited memory on most
graphics cards might well be prohibitive.
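One plausible way to arrive at the ~8 MByte figure in (d) (this is a
back-of-the-envelope sketch, not the actual Prime95 memory layout):
the residue is about 10,000,000 bits, stored as double-precision FFT
words holding roughly 19 bits each, plus a comparable amount of
working storage:

```python
# Back-of-the-envelope storage estimate for an LL test on a 10M
# exponent. The 19 bits/word packing and the 512K FFT length are
# assumptions, roughly what double-precision LL code of the era used.

EXPONENT = 10_000_000            # residue size in bits, roughly
BITS_PER_WORD = 19               # assumed packing per double
BYTES_PER_DOUBLE = 8

words_needed = EXPONENT / BITS_PER_WORD        # ~526,000 FFT words
fft_length = 524_288                           # 512K-point FFT (assumed)
one_array = fft_length * BYTES_PER_DOUBLE      # 4 MiB residue array
working_set = 2 * one_array                    # residue + scratch

print(one_array, working_set)  # 4194304 8388608, i.e. ~8 MBytes
```

In 2001, 8 MBytes was the entire memory of many consumer graphics
cards, so the working set would have crowded out the frame buffer
itself.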
Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers