-----Original Message-----
From: Gareth Randall <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: 15 May 2001 23:36
Subject: Re: Mersenne: GIMPS accelerator?

>Daran,
>
>This is an interesting piece of lateral thinking that deserves to go further
>than I think it actually does.

Thank you for taking me seriously.

>Essentially, I'm not sure how the operations that a graphics card can
>provide, such as line drawing, texture overlaying, raytraced light effects
>etc, could be made to implement a LL test or FFT etc which would require
>things like bit tests, conditional branches, loops etc.

What you've listed are the functions of a graphics card, each of which will
have been implemented through the application of one or more primitive
operations.  For example, the function of mapping a set of co-ordinates in
3-space onto screen pixels will be implemented by a linear transformation,
which will itself be implemented through a number of scalar multiplications.
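As a software illustration of that reduction (a Python sketch of my own, not anything the card actually exposes), the "function" of transforming a point collapses into nine scalar multiplications and six additions:

```python
def transform(matrix, point):
    # Apply a 3x3 linear transform to a point in 3-space.
    # The high-level "function" is really just scalar multiply-adds,
    # which is the kind of primitive operation I'd hope to get at.
    return [sum(matrix[i][j] * point[j] for j in range(3))
            for i in range(3)]
```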

I'm wondering if it might be possible to access any of the available primitive
operations without having to invoke a specific card function.

AFAICS the problem requires affirmative answers to all of the following
questions.

1.    Can the hardware theoretically do work useful to GIMPS?
2.    Could this be done efficiently enough to be worthwhile?
3.    Is it possible to program the hardware to do this work?
4.    Would it be possible to read the results back from the card?
5.    Is the available technical documentation sufficient for a programmer to
be able to implement this?
6.    Would the implementation be acceptable to the user?
7.    Are the prospective gains to the project worth the programming effort?

I suspect the answer to 1 is yes, given how simple a set of primitive
operations can be and still compute universally - the Turing machine and
Conway's Game of Life spring to mind.  But we wouldn't waste time programming
a hardware Turing machine to do LL tests, even if we had one.

An example of a user issue would be if the only way to program the card is to
'flash upgrade' the GPU's on-card firmware.  I wouldn't be willing to do that,
although I might consider installing a special GIMPS driver, so long as I
could uninstall again.

>Conceivably additions could be done by superimposing textures and reading
>back the resulting frame buffer, but these wouldn't be 64-bit precision
>additions!

That's all you get with CPU integer arithmetic too, but you can build wider
additions out of narrower ones.
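To show what I mean (a Python sketch with a made-up word size, just to illustrate the carry chain - a real implementation would use whatever width the hardware gives you):

```python
def add_multiword(a, b, word_bits=8):
    # Add two equal-length little-endian lists of word_bits-wide "limbs",
    # propagating the carry the same way a CPU chains narrow additions
    # into an arbitrarily wide one.
    base = 1 << word_bits
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        result.append(s % base)   # low bits stay in this limb
        carry = s // base         # overflow moves to the next limb
    if carry:
        result.append(carry)
    return result
```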

>Maybe some form of matrix multiplication could be done by rotating textures
>before superimposing? However, I think the resulting calculation efficiency
>would be very poor, and may never achieve useful precision.

Could you not build an FFT out of Discrete Cosine Transforms?  Or build a
multiplication from DCTs in some other way?  Some cards have hardware support
for this.
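For what it's worth, here's the idea in software, using a plain FFT rather than DCTs (my own Python sketch - the DCT version would be similar in spirit, and precision/rounding is exactly the issue a real card would face):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y, base=10):
    # Multiply two non-negative integers by convolving their digit
    # sequences via the FFT - the same trick GIMPS uses for squaring.
    xd = [int(d) for d in str(x)[::-1]]
    yd = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(xd) + len(yd):
        n *= 2
    fx = fft(xd + [0] * (n - len(xd)))
    fy = fft(yd + [0] * (n - len(yd)))
    fz = [a * b for a, b in zip(fx, fy)]       # pointwise product
    z = [v.real / n for v in fft(fz, invert=True)]
    total, carry = 0, 0
    for i, v in enumerate(z):                  # release the carries
        carry += int(round(v))
        total += (carry % base) * base ** i
        carry //= base
    return total
```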

>Also, any code would be very hardware specific, and may only work if the
>display was not displaying, say, a desktop.

Which would hit the 'prospective gains' question hard, since it would not then
be useful on Windows machines.

>However, if someone could implement it, it could provide the *ultimate* in
>Mersenne related screen savers! What you'd see on the screen would be the
>actual calculations themselves taking place before your eyes, and with no
>overheads for displaying it either!

That I did not think of.

>Yours,

>======= Gareth Randall =======

Daran G.


_________________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
