[sage-devel] Re: CUDA and Sage

2008-07-25 Thread Simon Beaumont

Can someone please outline for me the process/thread structure of the
Sage notebook server? Given multiple connections (user sessions) and
multiple concurrent notebooks, what is the degree of scope sharing? I
had thought that each notebook had its own Python process, but this
seems to be mistaken -- how is context (variable scope and extent)
shared? This is really important if I am to understand how the CUDA
device can be timeshared. I do have multiple notebooks sharing the
device (with initialisation occurring once per worksheet, it would
appear). What does it mean to restart the worksheet? It would help me a
lot to have a better understanding of the Sage architecture. Meanwhile
I'm back to the CUDA docs.

Thanks,

Simon





[sage-devel] Re: CUDA and Sage

2008-07-25 Thread William Stein

On Fri, Jul 25, 2008 at 1:43 PM, Simon Beaumont <[EMAIL PROTECTED]> wrote:
>
> Can someone please outline for me the process/thread structure of the
> Sage notebook server? Given multiple connections (user sessions) and
> multiple concurrent notebooks, what is the degree of scope sharing? I
> had thought that each notebook had its own Python process, but this
> seems to be mistaken -- how is context (variable scope and extent)
> shared?

You're right actually -- every single worksheet runs in its own copy
of Python.  This Python process is controlled asynchronously via
pexpect.  It's started up via the code in
 devel/sage/sage/server/notebook/worksheet.py
Look at the top there.
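
To illustrate the mechanism, here is a minimal sketch of the pexpect
pattern (illustrative only -- the real logic lives in worksheet.py as
noted above):

# Hedged sketch of the pexpect pattern: a controlling process spawns
# a Python interpreter and drives it through its prompt, the way the
# notebook server drives each worksheet's process.
import pexpect

child = pexpect.spawn('python')     # one such process per worksheet
child.expect('>>> ')                # wait for the interactive prompt

child.sendline('a = 2 + 2')         # send code into the process
child.expect('>>> ')

child.sendline('print(a)')
child.expect('>>> ')
print(child.before)                 # output between the two prompts

child.close()                       # roughly what "restart worksheet" does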

There is also an experimental mode you can set the notebook to
(by changing the variable multisession in worksheet.py) so that all
worksheets share exactly the same Python process -- same variables,
etc. -- like Mathematica does.

>  This is
> really important if I am to understand how the CUDA device can be
> timeshared. I do have multiple notebooks sharing the device (with
> initialisation occurring once per worksheet, it would appear). What
> does it mean to restart the worksheet?

The pexpect-spawned Python process where the computations occur
is quit when you click "restart worksheet".

> It would help me a
> lot to have a better understanding of the Sage architecture. Meanwhile
> I'm back to the CUDA docs.

Please continue to ask questions!

William

>
> Thanks,
>
> Simon



-- 
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org




[sage-devel] Re: CUDA and Sage

2008-07-22 Thread Simon Beaumont



On Jul 22, 11:18 pm, mabshoff <[EMAIL PROTECTED]> wrote:
> On Jul 22, 2:55 pm, Simon Beaumont <[EMAIL PROTECTED]> wrote:
>
> Hi Simon,
>
> > I decided to have a play with the pycuda-0.90.2 kit, for which I needed
> > boost_1_35_0
>
> Hmm, is 1.35.0 mandatory? We might have to upgrade Boost in PolyBoRi
> then.
>
From the pycuda docs this would appear to be the case:

"...You may already have a working copy of the Boost C++ libraries. If
so, make sure that it’s version 1.35.0 or newer..."


> Ok, please let us know what you find out.

I'll keep this topic posted.

> Yeah, even sgemm could be useful, and it would be nice if you could get
> some numbers for sgemm on the CPU vs. the GPU. Once the IEEE-conformant
> hardware is out, things will become a lot more interesting.

I'm digging out the custom kernel now, so I should have something very
soon.

(I did integrate the CUDA BLAS library's sgemm into Mathematica earlier
this year, so I have some numbers on that as well -- though of course
Mathematica suffers badly from the MathLink serialisation.)

Cheers,

Simon




[sage-devel] Re: CUDA and Sage

2008-07-22 Thread Simon Beaumont

I decided to have a play with the pycuda-0.90.2 kit, for which I needed
boost_1_35_0. The main caveat is to make sure Boost builds against the
Sage Python headers, library and executable rather than any
system-installed Python; configure pycuda similarly. My Sage server
runs as user sage, so I had to run the X server under that user as
well. (I am looking into how to get the nvidia drivers loaded without X.)

The upshot is that the low-level API works like a charm from the Sage
notebook (Sage 3.0.4 built under Gentoo on amd64 hardware). Since I
don't need the full BLAS library, and I have some optimised kernels
that do sgemm better than the library version, I might play with this
for a while and see what I run into.
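
For reference, a minimal sketch of what "low-level API" use looks like
from a notebook cell. This assumes the module layout described in the
pycuda docs; the 0.90.2 release may differ slightly:

# Hedged sketch of pycuda's low-level (driver) API from a worksheet.
import pycuda.driver as cuda

cuda.init()                  # initialise the driver API once
dev = cuda.Device(0)         # the device being timeshared
ctx = dev.make_context()     # each worksheet process makes a context

print(dev.name())
print(dev.compute_capability())
print(dev.total_memory())

ctx.pop()                    # release the context on restart/exit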

Cheers,

Simon




[sage-devel] Re: CUDA and Sage

2008-07-22 Thread mabshoff

On Jul 22, 2:55 pm, Simon Beaumont <[EMAIL PROTECTED]> wrote:

Hi Simon,

> I decided to have a play with the pycuda-0.90.2 kit, for which I needed
> boost_1_35_0

Hmm, is 1.35.0 mandatory? We might have to upgrade Boost in PolyBoRi
then.

> - the main caveat is to make sure Boost builds against the Sage
> Python headers, library and executable rather than any system-installed
> Python; configure pycuda similarly. My Sage server runs as user sage,
> so I had to run the X server under that user as well. (I am looking
> into how to get the nvidia drivers loaded without X.)

Ok, please let us know what you find out.

> The upshot is that the low-level API works like a charm from the Sage
> notebook (Sage 3.0.4 built under Gentoo on amd64 hardware). Since I
> don't need the full BLAS library, and I have some optimised kernels
> that do sgemm better than the library version, I might play with this
> for a while and see what I run into.

Yeah, even sgemm could be useful, and it would be nice if you could get
some numbers for sgemm on the CPU vs. the GPU. Once the IEEE-conformant
hardware is out, things will become a lot more interesting.
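
As a hedged sketch of the kind of comparison meant here: the CPU side
uses plain numpy as a baseline, while gpu_sgemm is a hypothetical
stand-in for whatever CUBLAS or custom-kernel wrapper ends up in Sage:

# Hedged benchmark sketch: single-precision matrix multiply, CPU vs GPU.
import time
import numpy

n = 1024
a = numpy.random.rand(n, n).astype(numpy.float32)
b = numpy.random.rand(n, n).astype(numpy.float32)

t0 = time.time()
c_cpu = numpy.dot(a, b)          # CPU sgemm via numpy's BLAS
t_cpu = time.time() - t0

# t0 = time.time()
# c_gpu = gpu_sgemm(a, b)        # hypothetical GPU sgemm wrapper
# t_gpu = time.time() - t0

flops = 2.0 * n ** 3
print('CPU: %.3fs, %.2f GFlop/s' % (t_cpu, flops / t_cpu / 1e9))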

> Cheers,
>
> Simon

Cheers,

Michael



[sage-devel] Re: CUDA and Sage

2008-07-22 Thread mabshoff



On Jul 22, 1:29 am, Thierry Dumont <[EMAIL PROTECTED]> wrote:
> Francesco Biscani wrote:
>
> > With the non-header-only Boost libraries (such as Boost.Python), a
> > possible approach could be to modify the build system of a package
> > that uses them so that it compiles and links the needed Boost
> > libraries together with the package's own library, i.e. add the Boost
> > libraries' .cpp files directly to the project's Makefile.

The solution I imagine is the following: PolyBoRi has a boost::python
interface, but we do not install it since we do not use it. It could
easily be added back. That way we do not have two Boost installations
fighting for supremacy. Since PolyBoRi is in Sage and considered very
important, CUDA breaking PolyBoRi via its Boost requirement is a big
no-no and will not happen. But the approach suggested above will make
both camps happy. This might happen in the next week, i.e. we would
likely have an optional PyCuda.spkg by Dev1 at SFU.

> > Another possibility (especially if the use of Boost is widespread among
> > Sage packages) would be to compile shared library versions of the needed
> > Boost libraries when building Sage and use them when building the
> > packages that need them.
>
> As a Boost user, I just want to say that the Boost installation process
> is not standard (it uses Jam, not automake) and that it is a very long
> task.
> Actually, most people use packaged versions (.deb on Debian, ...) to
> avoid compiling it, but that seems to be precisely what Sage does not
> want to do.

Yeah, and many times one needs to use Boost CVS in order to work
around bugs on less common OSes like some BSD flavours. So depending on
the system-provided Boost install is something I would prefer to
avoid.

Re cwitty: I do not mind if you redo the boost::python code in
Cython; I am just saying that I do not have the time to do it myself.
But I would be more than happy if you did it :)

> Yours,
>
> t.d.

Cheers,

Michael




[sage-devel] Re: CUDA and Sage

2008-07-22 Thread Thierry Dumont
Francesco Biscani wrote:
>
> With the non-header-only Boost libraries (such as Boost.Python), a
> possible approach could be to modify the build system of a package
> that uses them so that it compiles and links the needed Boost
> libraries together with the package's own library, i.e. add the Boost
> libraries' .cpp files directly to the project's Makefile.
> 
> Another possibility (especially if the use of Boost is widespread among
> Sage packages) would be to compile shared library versions of the needed
> Boost libraries when building Sage and use them when building the
> packages that need them.

As a Boost user, I just want to say that the Boost installation process
is not standard (it uses Jam, not automake) and that it is a very long
task.
Actually, most people use packaged versions (.deb on Debian, ...) to
avoid compiling it, but that seems to be precisely what Sage does not
want to do.

Yours,

t.d.
-- 

Thierry Dumont. Institut Camille Jordan -- Mathematiques --
Univ. Lyon I, 43 Bd du 11 Novembre 1918,
69622 Villeurbanne Cedex - France.
[EMAIL PROTECTED]  web: http://math.univ-lyon1.fr/~tdumont



[sage-devel] Re: CUDA and Sage

2008-07-22 Thread Francesco Biscani


Hi Michael,

mabshoff wrote:
| Various people have started looking into this, but so far no one has
| produced code. One big issue (at least for me) with pycuda is the
| requirement for Boost, though I am not sure how that could be overcome.
| [...]
| The main issue I see with Boost is that PolyBoRi ships with a subset
| of Boost and installs it into $SAGE_LOCAL/include/boost. I assume that
| it will not be enough of Boost, i.e. Boost.Python is not part of it.
| Since PolyBoRi also has an interface to Python using Boost.Python, we
| might be able to add the bits needed to polybori.spkg; otherwise I
| see potentially huge trouble from colliding Boost versions in the
| tree. And shipping Boost itself is not really an option due to its
| rather large size.

If the Boost headers are all that is needed (in many cases they are,
since most Boost libraries are header-only), it may be worth shipping
them together with Sage. I just checked, and the complete Boost headers
amount to a 2.9 MB .tar.bz2, so packing them into Sage should not be a
big deal.

With the non-header-only Boost libraries (such as Boost.Python), a
possible approach could be to modify the build system of a package
that uses them so that it compiles and links the needed Boost
libraries together with the package's own library, i.e. add the Boost
libraries' .cpp files directly to the project's Makefile.

Another possibility (especially if the use of Boost is widespread among
Sage packages) would be to compile shared library versions of the needed
Boost libraries when building Sage and use them when building the
packages that need them.

Just a couple of thoughts :)

Best regards,

~  Francesco.



[sage-devel] Re: CUDA and Sage

2008-07-21 Thread Carl Witty

On Jul 21, 8:53 pm, Carl Witty <[EMAIL PROTECTED]> wrote:
> CUDA includes 2 programming languages (a C dialect and a low-level
> assembly language), and a library to load programs into the graphics
> card, send data back and forth, call the programs, etc.  (There's also
> a mode where you write your program in a combination of regular C and
> CUDA's dialect; the CUDA tools compile the CUDA part themselves, pass
> the regular parts to your regular C compiler, and automatically
> construct glue code to tie the two together.)
>
> Actually, the above is a simplification: CUDA includes 2 separate
> libraries to load programs/exchange data/call the programs, and you
> apparently cannot mix and match.  CUDA includes fast BLAS and FFT
> implementations that run on the GPU; to use these, you must use the
> "high-level" API, but pycuda is based on the "low-level" API.
...
> mabshoff doesn't like the idea of recreating pycuda using Cython, but
> I think it's reasonable.  pycuda is actually pretty small (650 lines
> of Python, 1325 lines of C++; the 1325 lines of C++ would probably be
> replaced by a much smaller number of lines of Cython).  Doing the
> rewrite would also give a chance to switch from the low-level to the
> high-level API, which would make it much easier (possible?) to use the
> CUDA BLAS and FFT.

I've been doing some more reading, and using the high-level API is not
as easy as I had guessed.  The problem is that, from the high-level
API, there seems to be no documented way to load a ".cubin" CUDA
object file from disk; instead, the object file must be linked into
the program.  Building a new Python extension for every new CUDA
program would probably work (much like %cython does in the notebook),
but it's pretty annoying.

Carl




[sage-devel] Re: CUDA and Sage

2008-07-21 Thread Carl Witty

On Jul 20, 1:12 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> What is CUDA?  Why should the typical reader of sage-devel or user
> of Sage care?  Any chance you could write a paragraph or two
> about this?  It might get a lot more Sage developers excited about
> what you're doing (which is, I'm sure, extremely exciting).

I'm going to try to respond to the entire thread in this message.

First, a disclaimer: I've read a lot about CUDA, but I've never
actually written any CUDA programs.  (Hopefully this will change very
soon... my new computer with a CUDA-capable graphics card should be
ready later this week.)

CUDA is NVidia's programming environment for exposing the computational
power of their graphics cards for general-purpose computation.  Current
graphics cards are immensely powerful; for instance, the current
top-of-the-line NVidia card has (basically) 30 cores, each of which can
do 8 single-precision floating-point operations (including a
multiply-and-add) per cycle at about 1.3 GHz, and is available for
$450.  (To get this speed, you need to be doing "the same" computation
on lots of different data; it's somewhat similar to programming for
SSE/Altivec/etc., although you end up with code that looks quite
different.)

So for the sorts of things the graphics card can do well, it's
actually much faster than a CPU.

CUDA includes 2 programming languages (a C dialect and a low-level
assembly language), and a library to load programs into the graphics
card, send data back and forth, call the programs, etc.  (There's also
a mode where you write your program in a combination of regular C and
CUDA's dialect; the CUDA tools compile the CUDA part themselves, pass
the regular parts to your regular C compiler, and automatically
construct glue code to tie the two together.)
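
To make that concrete, here is a hedged sketch in the style of the
pycuda examples (module layout as in the pycuda docs; illustrative
only): a kernel written in CUDA's C dialect is compiled at run time and
launched from Python.

# Hedged sketch: a kernel in CUDA's C dialect, compiled and launched
# from Python through pycuda.
import numpy
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *x, float s)
{
    int i = threadIdx.x;   /* one thread per element */
    x[i] *= s;
}
""")
scale = mod.get_function('scale')

x = numpy.arange(16, dtype=numpy.float32)
scale(cuda.InOut(x), numpy.float32(2.0), block=(16, 1, 1))
print(x)   # every element doubled on the GPU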

Actually, that description is a simplification: CUDA includes 2 separate
libraries to load programs/exchange data/call the programs, and you
apparently cannot mix and match.  CUDA includes fast BLAS and FFT
implementations that run on the GPU; to use these, you must use the
"high-level" API, but pycuda is based on the "low-level" API.

Although CUDA is best known for fast single-precision floating point,
it does have a full complement of integer operations, so it should
also be useful for arbitrary-precision arithmetic, modular arithmetic
(preferably with a modulus <2^24), computations over GF(2), etc.

Until very recently, CUDA could only handle single-precision floating
point.  The most recent products (the GTX 260, the GTX 280, the Tesla
C1060, and the Tesla S1070) support double-precision floating point,
but each core only has one double-precision FPU (so double-precision
operations happen at 1/8 the rate of single-precision).

mabshoff doesn't like the idea of recreating pycuda using Cython, but
I think it's reasonable.  pycuda is actually pretty small (650 lines
of Python, 1325 lines of C++; the 1325 lines of C++ would probably be
replaced by a much smaller number of lines of Cython).  Doing the
rewrite would also give a chance to switch from the low-level to the
high-level API, which would make it much easier (possible?) to use the
CUDA BLAS and FFT.

Note that the CUDA single-precision FPU is not quite IEEE-compliant...
denormal numbers (very small numbers) are not handled correctly,
division is slightly inaccurate, and there are a few other issues.

I was actually planning to start incorporating CUDA into Sage myself
sometime in the next few months, probably starting by rewriting pycuda
in Cython.

Carl




[sage-devel] Re: CUDA and Sage

2008-07-20 Thread Thierry Dumont
William Stein wrote:
>
> What is CUDA?  Why should the typical reader of sage-devel or user
> of Sage care?  Any chance you could write a paragraph or two
> about this?  It might get a lot more Sage developers excited about
> what you're doing (which is, I'm sure, extremely exciting).
>
>  -- William
>

Using graphics processors (GPUs) for computations:

http://www.nvidia.com/object/cuda_home.html#state=home

As far as I know (though things may have changed), GPUs can only
compute with single-precision floats... This will not be very
convenient for, say, number theory. But maybe I'm wrong.

t.d.





[sage-devel] Re: CUDA and Sage

2008-07-20 Thread Glenn Tarbox

On Sun, 20 Jul 2008 09:54:39 -0700, mabshoff <[EMAIL PROTECTED]> wrote:

> On Jul 20, 9:21 am, Simon Beaumont <[EMAIL PROTECTED]> wrote:
>
> Hi Simon,
>
>> I am just about to embark on integrating some CUDA libraries into
>> Sage. I was not sure of the best route to go - I am considering the
>> pycuda libraries as a starting point - this is a pure kernel approach -
>> but I would also like to get the CUDA BLAS and FFT libraries integrated.
>
> Various people have started looking into this, but so far no one has
> produced code. One big issue (at least for me) with pycuda is the
> requirement for Boost, though I am not sure how that could be overcome.
> Since I am personally only interested in the CUDA BLAS, the suggested
> way to hook it into Sage was directly via Cython, but recreating
> pycuda via Cython seems like a large waste of effort and should be
> avoided at all costs.

This will boil down to what you're trying to accomplish.  Exposing the
full CUDA API to Python might be a large task.  OTOH, building a
capability using C/C++ CUDA and exposing a simplified API to Sage
through Cython might be straightforward and avoid some of the issues
below.

Given the problems CUDA targets and the performance characteristics of
the GPU/CPU/memory integration, it's almost certain that a great deal
of that interface needn't and shouldn't be pushed up through Cython
into Python.

>
> The main issue I see with Boost is that PolyBoRi ships with a subset
> of Boost and installs it into $SAGE_LOCAL/include/boost. I assume that
> it will not be enough of Boost, i.e. Boost.Python is not part of it.
> Since PolyBoRi also has an interface to Python using Boost.Python, we
> might be able to add the bits needed to polybori.spkg; otherwise I
> see potentially huge trouble from colliding Boost versions in the
> tree. And shipping Boost itself is not really an option due to its
> rather large size.
>
>> (I think cuda-python can do this.) I'm sure I'm not the first down
>> this road and wondered which would be the most useful. I'd also
>> appreciate some tips and pointers on integrating Sage arrays and
>> matrices to make this as native as possible.
>
> I think using numpy arrays here for now might be the way to go, but
> that limits you to numpy data types. Since we are talking about
> numerical computations, it seems that we will not lose any
> functionality here.

There's a GSoC activity looking at Python's buffer interface, which is
coordinated with the Cython activity:

http://wiki.cython.org/enhancements/buffer

and is related to PEP 3118: http://www.python.org/dev/peps/pep-3118/

Travis Oliphant of numpy fame is the author.

Putting a buffer API on top of CUDA might be a good place to start,
and given the interface's broad use for integrating C/C++ features
throughout Python, you might get a lot of capability for free.  For
example, mmap files expose a buffer interface.  The new buffer
interface will likely provide the characteristics you seek from numpy,
and Cython looks to be very interested in making it all work.
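
A hedged illustration of that point: any producer exposing a buffer can
hand its memory to numpy without copying. Here an mmap'd file stands in
for what a CUDA buffer object might expose (file name and dtype are
illustrative):

# Hedged sketch: the buffer interface lets consumers like numpy wrap
# foreign memory with no copy. Assumes data.bin exists and holds
# float32 values.
import mmap
import numpy

f = open('data.bin', 'r+b')
buf = mmap.mmap(f.fileno(), 0)     # mmap objects expose a buffer

a = numpy.frombuffer(buf, dtype=numpy.float32)   # zero-copy view
a[0] = 1.0                         # writes through to the mapped file

buf.flush()
buf.close()
f.close()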

>
>> Of course this work would
>> be shared with the community. I have plans to make some CUDA hardware
>> available over the web using Sage, and I also have some longer-term
>> plans for a modeling environment based on it.
>
> Cool. There are various people waiting to buy the next generation of
> Tesla hardware, i.e. the one that actually provides IEEE doubles. I am
> basically waiting for it to become available, since the things I am
> interested in do require IEEE precision, and close to IEEE is not
> enough.
>
>> Any advice and pointers most welcome.
>
> Please keep us up to date on how things are going and let us know about
> any problems you have. This is an area where we should definitely make
> some progress this year.
>
>> Regards,
>>
>> Simon
>
> Cheers,
>
> Michael

-- 
Glenn H. Tarbox, PhD || 206-494-0819 || [EMAIL PROTECTED]
"Don't worry about people stealing your ideas. If your ideas are any
  good you'll have to ram them down people's throats" -- Howard Aiken




[sage-devel] Re: CUDA and Sage

2008-07-20 Thread Simon Beaumont


Thanks Michael -- that's a very good heads-up. I'll take some time to
digest the issues and get working on an approach that makes sense for
the widest community: we want a well-supported capability with a
future.

It is my opinion, in spite of the current 32-bit floats (though I think
the Tesla hardware has 64-bit floats -- watch this space), that GPU
programming in general, and CUDA in particular, is a disruptive
technology. (A teraflop for a few thousand dollars!) I think that Sage
is also disruptive and has huge potential beyond pure mathematics
research. I'll try to elaborate on this in due course, as I have quite
a bit of work to do to demonstrate it -- but I have a hunch about both.

Cheers,

Simon




[sage-devel] Re: CUDA and Sage

2008-07-20 Thread William Stein

On Sun, Jul 20, 2008 at 6:21 PM, Simon Beaumont <[EMAIL PROTECTED]> wrote:
>
> I am just about to embark on integrating some CUDA libraries into
> Sage. I was not sure of the best route to go - I am considering the
> pycuda libraries as a starting point - this is a pure kernel approach -
> but I would also like to get the CUDA BLAS and FFT libraries integrated.
> (I think cuda-python can do this.) I'm sure I'm not the first down
> this road and wondered which would be the most useful. I'd also
> appreciate some tips and pointers on integrating Sage arrays and
> matrices to make this as native as possible. Of course this work would
> be shared with the community. I have plans to make some CUDA hardware
> available over the web using Sage, and I also have some longer-term
> plans for a modeling environment based on it.
>
> Any advice and pointers most welcome.
>

What is CUDA?  Why should the typical reader of sage-devel or user
of Sage care?  Any chance you could write a paragraph or two
about this?  It might get a lot more Sage developers excited about
what you're doing (which is, I'm sure, extremely exciting).

 -- William




[sage-devel] Re: CUDA and Sage

2008-07-20 Thread mabshoff

On Jul 20, 9:21 am, Simon Beaumont <[EMAIL PROTECTED]> wrote:

Hi Simon,

> I am just about to embark on integrating some CUDA libraries into
> Sage. I was not sure of the best route to go - I am considering the
> pycuda libraries as a starting point - this is a pure kernel approach -
> but I would also like to get the CUDA BLAS and FFT libraries integrated.

Various people have started looking into this, but so far no one has
produced code. One big issue (at least for me) with pycuda is the
requirement for Boost, though I am not sure how that could be overcome.
Since I am personally only interested in the CUDA BLAS, the suggested
way to hook it into Sage was directly via Cython, but recreating
pycuda via Cython seems like a large waste of effort and should be
avoided at all costs.

The main issue I see with Boost is that PolyBoRi ships with a subset
of Boost and installs it into $SAGE_LOCAL/include/boost. I assume that
it will not be enough of Boost, i.e. Boost.Python is not part of it.
Since PolyBoRi also has an interface to Python using Boost.Python, we
might be able to add the bits needed to polybori.spkg; otherwise I
see potentially huge trouble from colliding Boost versions in the
tree. And shipping Boost itself is not really an option due to its
rather large size.

> (I think cuda-python can do this.) I'm sure I'm not the first down
> this road and wondered which would be the most useful. I'd also
> appreciate some tips and pointers on integrating Sage arrays and
> matrices to make this as native as possible.

I think using numpy arrays here for now might be the way to go, but
that limits you to numpy data types. Since we are talking about
numerical computations, it seems that we will not lose any
functionality here.
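
A hedged sketch of that numpy-centric workflow, using pycuda's
driver-level copies as documented (the kernel launch is omitted):

# Hedged sketch: round-tripping a numpy array through GPU memory.
import numpy
import pycuda.autoinit
import pycuda.driver as cuda

a = numpy.random.rand(400).astype(numpy.float32)  # numpy dtype fixes size

a_gpu = cuda.mem_alloc(a.nbytes)   # allocate device memory
cuda.memcpy_htod(a_gpu, a)         # host -> device

# ... launch kernels operating on a_gpu here ...

result = numpy.empty_like(a)
cuda.memcpy_dtoh(result, a_gpu)    # device -> host
assert (result == a).all()         # no kernel ran, so it round-trips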

> Of course this work would
> be shared with the community. I have plans to make some CUDA hardware
> available over the web using Sage, and I also have some longer-term
> plans for a modeling environment based on it.

Cool. There are various people waiting to buy the next generation of
Tesla hardware, i.e. the one that actually provides IEEE doubles. I am
basically waiting for it to become available, since the things I am
interested in do require IEEE precision, and close to IEEE is not
enough.

> Any advice and pointers most welcome.

Please keep us up to date on how things are going and let us know about
any problems you have. This is an area where we should definitely make
some progress this year.

> Regards,
>
> Simon

Cheers,

Michael