Well... anything that uses matrices would benefit from CUDA. Many of
the combinatorial algorithms also have parallel versions.
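To illustrate the second point, here is a minimal sketch of what a parallel combinatorial algorithm can look like. It counts derangements (permutations with no fixed point) by splitting the search space on the value in position 0 and farming the independent chunks out to a thread pool. This is just an illustrative toy, not SymPy code; a real speedup for CPU-bound work would need processes or a GPU, but the decomposition is the same.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations


def count_derangements_with_first(first, n):
    """Count derangements of range(n) whose value at position 0 is `first`."""
    if first == 0:
        return 0  # a derangement cannot fix position 0
    rest_values = [x for x in range(n) if x != first]
    # Positions 1..n-1 hold `rest`; a derangement needs rest[i] != i + 1.
    return sum(
        1
        for rest in permutations(rest_values)
        if all(v != i + 1 for i, v in enumerate(rest))
    )


def count_derangements(n, workers=4):
    # Each choice of first element gives an independent subproblem,
    # so the chunks can be evaluated concurrently and summed.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda f: count_derangements_with_first(f, n), range(n)))
```

For example, `count_derangements(4)` gives 9, matching the known derangement numbers D(4) = 9, D(5) = 44.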

On 9 Март, 01:20, Robert Kern <robert.k...@gmail.com> wrote:
> On Mon, Mar 8, 2010 at 17:05, yavor...@mail.bg <gerund...@gmail.com> wrote:
> > Hi there, I would like to participate in this year's GSoC. Looking
> > through the suggestions, I couldn't help noticing that there are
> > none for optimizations targeting a particular architecture that
> > could make SymPy faster. I am speaking particularly of GPU usage
> > (e.g. CUDA) and parallel distributed-memory architectures. As far
> > as I've seen, SymPy doesn't include any optimizations of that kind,
> > and it would be nice to have them, particularly now that PyCUDA and
> > similar tools exist. So I was thinking this might make a good
> > project idea... or does it go against the goal of portability and
> > is therefore not acceptable for SymPy? The way I see it, it could
> > greatly increase performance in many algorithms if done properly.
> > Thanks
>
> I doubt that many of SymPy's algorithms are particularly amenable to
> hardware acceleration. Do you have something specific in mind?
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
