On Nov 23, 1:57 am, "Alex Ghitza" <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> I remember that the question of whether Sage could be ported to CUDA and run
> on nvidia's GPUs was already brought up on this list.  Here's another
> incentive to think about it:
>
> http://tech.slashdot.org/tech/08/11/23/068234.shtml

Well, those numbers are mostly not attainable in real life, and the 4
GFlops figure is single precision. Double precision is a whole
different game; you would be lucky to hit 1 GFlop double precision -
not that this is anything to sneeze at :)

Anyway, Clement and I have gotten FFPACK (which is the core of what
drives LinBox, for example) to run on top of CUBLAS, but we are
currently missing Tesla hardware to get the ball rolling in earnest. I
am also looking into libflame, which has the potential to be used as a
GPU scheduler so that all BLAS operations in Sage can be moved to
utilize multiple GPU units. In the end, that would accelerate
everything that uses matrices over GF(p), charpoly, and so on, and we
are certainly working in that direction. Once the new Sage hardware is
there, we will also put some GPU hardware in it.
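For what it's worth, the reason FFPACK can sit on top of a
floating-point BLAS at all (and hence on top of CUBLAS) is the
delayed-reduction trick: lift the GF(p) entries to doubles, do the
matrix product with an ordinary BLAS dgemm, and reduce mod p only at
the end. A minimal sketch in plain Python/NumPy - a hypothetical
illustration of the idea, not actual FFPACK code:

```python
# Sketch of the FFLAS/FFPACK delayed-reduction trick: exact GF(p)
# matrix multiplication via a floating-point BLAS.  Entries are lifted
# to float64, multiplied with np.dot (which calls BLAS dgemm), and
# reduced mod p afterwards.  The result is exact as long as every dot
# product fits in a float64 exactly, i.e. n * (p - 1)^2 < 2^53.
import numpy as np

def matmul_gfp(A, B, p):
    """Multiply two integer matrices over GF(p) using float64 BLAS."""
    n = A.shape[1]
    assert n * (p - 1) ** 2 < 2 ** 53, "dot products would overflow float64"
    C = np.dot(A.astype(np.float64), B.astype(np.float64))  # BLAS dgemm
    return (C % p).astype(np.int64)

# Example over GF(7): agrees with naive entry-by-entry modular arithmetic.
p = 7
A = np.array([[1, 2], [3, 4]]) % p
B = np.array([[5, 6], [0, 1]]) % p
print(matmul_gfp(A, B, p))
```

The same product formulation is what lets the heavy lifting move to
whatever dgemm the hardware provides, be it ATLAS on a CPU or CUBLAS
on a GPU.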

> OK, I'll stop drooling now.

It isn't as powerful as has been suggested, but for certain
applications it beats the living crap out of anything else when
compared at Flops/Watt or Flops/$.

> Best,
> Alex

Cheers,

Michael
>
> --
> Alex Ghitza -- Lecturer in Mathematics -- The University of Melbourne --
> Australia -- http://www.ms.unimelb.edu.au/~aghitza/