Daran,
This is an interesting piece of lateral thinking that deserves to go further than I
think it actually can.
Essentially, I'm not sure how the operations that a graphics card provides, such as
line drawing, texture overlaying, raytraced lighting effects etc., could be made to
implement an LL test or an FFT, which would require things like bit tests, conditional
branches, loops etc.
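(To make concrete what those operations are, here is a minimal sketch of the
standard Lucas-Lehmer recurrence in Python, just to show the looping, squaring and
modular reduction involved at each step; the real GIMPS clients of course use
FFT-based multiplication rather than naive bignum arithmetic:)

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for odd prime p, M_p = 2^p - 1 is prime
    iff s == 0 after p - 2 iterations of s -> s^2 - 2 (mod M_p)."""
    m = (1 << p) - 1            # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):      # a loop of p - 2 dependent iterations
        s = (s * s - 2) % m     # one squaring and one modular reduction
    return s == 0

# e.g. lucas_lehmer(7) -> True (2^7 - 1 = 127 is prime),
#      lucas_lehmer(11) -> False (2^11 - 1 = 2047 = 23 * 89)
```

Every iteration depends on the previous one, so there is no way to avoid the loop,
and the squarings need full multi-precision arithmetic.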
Conceivably additions could be done by superimposing textures and reading back the
resulting frame buffer, but these wouldn't be 64-bit precision additions! Maybe some
form of matrix multiplication could be done by rotating textures before superimposing?
However, I think the resulting calculation efficiency would be very poor, and it may
never achieve useful precision.
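(As a toy illustration of the precision problem: additive blending in a typical
fixed-point frame buffer saturates each 8-bit colour channel at 255, so the "sum"
read back loses the carry entirely. A one-line simulation of that clamping:)

```python
def blend_add(a, b):
    """Simulate additive blending of two 8-bit colour channels:
    the hardware clamps the result to the 0..255 range."""
    return min(a + b, 255)

# 200 + 100 "sums" to 255, not 300: the carry is simply lost, so
# chaining such additions can never build up 64-bit precision.
```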
Also, any code would be very hardware-specific, and might only work if the display
was not showing, say, a desktop.
However, if someone could implement it, it could provide the *ultimate* in
Mersenne-related screen savers! What you'd see on the screen would be the actual
calculations themselves taking place before your eyes, and with no overhead for
displaying them either!
Yours,
======= Gareth Randall =======
Daran wrote:
>
> I know very little about computer architecture, so please feel free to shoot
> me down if what follows is complete nonsense.
>
> GIMPS clients use the spare capacity of the primary processing resource within
> any computer:- the CPU(s). But most modern PCs have another component capable
> of performing rapid and sophisticated calculations:- the GPU on the graphics
> accelerator. Is there any way that the GPU can be programmed to perform GIMPS
> processing when otherwise not in use? If this could be done, then it would
> have the effect of turning every client computer into a multi-processor
> system.
_________________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers