On Mon, 8 Mar 2010 18:10:20 -0800 (PST), Vinzent Steinberg
<vinzent.steinb...@googlemail.com> wrote:
> On 9 Mar., 00:49, Ondrej Certik <ond...@certik.cz> wrote:
>> On Mon, Mar 8, 2010 at 3:05 PM, yavor...@mail.bg <gerund...@gmail.com>
>> wrote:
>> > Hi there, I kind of wanted to participate in this year's GSOC, and I
>> > looked through the suggestions and I couldn't help noticing there are
>> > no suggestions for optimizations for a particular architecture that
>> > could make SymPy faster. I am speaking particularly of GPU usage (e.g.
>> > CUDA) and parallel distributed memory architectures. As much as I've
>> > seen SymPy doesn't include any optimizations for that kind of usage
>> > and it would be nice (particularly now that there is PyCUDA & etc.).
>> > So I was thinking it might be a good suggestion to participate with
>> > this kind of idea... or is it against the idea of portability and
>> > therefore not acceptable for SymPy? The way I see it it could greatly
>> > increase performance in many algorithms if done properly... Thanks
>>
>> If it can speedup some things, then sure (we'll include some cython
>> core, as an optional module too). But I have the same concerns as
>> Robert. Do you have some particular things in mind?
>>
>> As far as I know, we don't use numeric matrices much.
> 
> And I think CUDA currently only supports 32 bit floats, if I recall it
> correctly, which would not be very useful for symbolic computations.

CUDA supports integers and double-precision floats as well (doubles need a
compute capability 1.3 or newer GPU). Indeed, the performance offered by the
GPU for certain types of computation is staggering: some months back I used
a CUDA build of msieve to aid in the prime factorisation of a 512-bit
number. Using CUDA with a modern GPU (a GTX 275) worked out to be around 12
times faster than a 3.0 GHz Core 2 Quad (64-bit). The specific calculations
required working with either 128- or 196-bit integers.
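
Not that it settles anything, but a minimal PyCUDA sketch along these lines
(untested as pasted; it assumes PyCUDA is installed and the card can do
doubles) shows a kernel operating on 64-bit values rather than 32-bit
floats:

import numpy as np
import pycuda.autoinit                      # sets up a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# A tiny kernel that squares an array of doubles in place.  (On older
# toolchains you may need options=["-arch=sm_13"] so that doubles are
# not silently demoted to floats.)
mod = SourceModule("""
__global__ void square(double *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = x[i] * x[i];
}
""")
square = mod.get_function("square")

n = 1 << 20
x = gpuarray.to_gpu(np.random.rand(n))      # numpy defaults to float64
square(x, np.int32(n), block=(256, 1, 1), grid=((n + 255) // 256, 1))
print(x.get()[:5])                          # squared values, still doubles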

However, despite its potential I would consider CUDA unsuitable, because it
is a proprietary language supported and maintained by a single vendor
(nVidia). As far as open standards go, the future is without a doubt OpenCL
(roughly the compute counterpart of OpenGL).

So far as symbolic manipulation goes, I am unsure how useful a GPU (or
similar device) would be. However, the mpmath project may very well be
interested (it is very closely related to SymPy and equally awesome!). Of
course, there is only really a benefit when large data sets are being
processed, as there is a not-insignificant overhead in moving data to and
from the device.
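
To make that point concrete, a rough sketch like the one below (again
assuming PyCUDA; the array sizes are arbitrary and the timings will vary
wildly by card) times the same element-wise operation on the host and on the
device, copies included; for small arrays the fixed cost of the copies
swamps the arithmetic:

import time
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import pycuda.cumath as cumath

for n in (10**3, 10**5, 10**7):
    x = np.random.rand(n)

    t0 = time.time()
    np.sqrt(x)                              # element-wise sqrt on the host
    t_cpu = time.time() - t0

    t0 = time.time()
    x_gpu = gpuarray.to_gpu(x)              # host -> device copy (overhead)
    y = cumath.sqrt(x_gpu).get()            # compute, then copy back (overhead)
    t_gpu = time.time() - t0

    print(n, t_cpu, t_gpu)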

Polemically yours, Freddie.
