On a more upbeat note: the only motivation for all this grief was to
build pycuda, against boost, against a framework-based Python. I have
not been so impressed with CUDA for basic matrix operations on
relatively small matrices relative to a fast multi-core CPU, so it's
not that important to me (though I really like the pycuda approach of
exposing the full CUDA kernel API in Python for application
development).

I can develop whatever complex kernels I like (in C) and plug them in
wherever they are needed (Sage or R or whatever). On the other hand,
for ad-hoc basic linear algebra, numpy works for me; if more
performance is required, transparent access to a cublas
implementation - where a pragma or a MOP might opportunistically back
numpy - would make much more sense to me.
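To make the numpy side of this concrete, here is a minimal sketch (plain numpy; nothing in it is CUDA-specific, which is exactly the point - a hypothetical cublas-backed numpy, provided via a pragma or MOP, would leave these expressions untouched):

```python
import numpy as np

# The level of API I want to keep writing against: plain numpy.
n = 512
a = np.random.rand(n, n) + n * np.eye(n)  # diagonally dominant, so well-conditioned
b = np.random.rand(n, n)

c = a @ b                           # matrix product (the obvious cublas gemm candidate)
x = np.linalg.solve(a, np.ones(n))  # ad-hoc linear solve
print(np.allclose(a @ x, np.ones(n)))  # True
```

The point is that the caller never mentions the device; whether `a @ b` runs on the CPU or is opportunistically routed to cublas is an implementation detail.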

How I now see this panning out (in my case) is two distinct use
cases:

1. Specialized simulations or sophisticated applications (physics
engines, transformation engines, etc.) - build a custom kernel in C
and plug it in as a monolith with an application-specific API.

2. Routine linear algebra on an ad-hoc basis - use pycuda and have
BLAS implemented transparently (via cublas), or have a pragma/MOP
provide a numpy specialization.
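For case 1, a sketch of what "build a custom kernel in C and plug it in" looks like through pycuda. The kernel and its launch parameters are illustrative assumptions, not anything from a real application; the fallback branch just reports when no pycuda/GPU is available:

```python
# Hand-written CUDA C kernel, kept as a plain string and compiled at runtime.
kernel_source = """
__global__ void scale(float *v, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}
"""

try:
    import numpy as np
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule(kernel_source)        # nvcc-compile the C source
    scale = mod.get_function("scale")        # look up the kernel by name
    v = np.arange(16, dtype=np.float32)
    # drv.InOut copies v to the device and back around the launch.
    scale(drv.InOut(v), np.float32(2.0), np.int32(v.size),
          block=(16, 1, 1), grid=(1, 1))
    print(v)                                 # each element doubled
except ImportError:
    print("pycuda not available; kernel shown as source only")
```

This is the monolith pattern: the application-specific API is whatever Python wrapper you put around `scale`, and the same module could be imported from Sage, R (via a bridge), or anything else that can call Python.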




--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---
