Received from Keith Brown on Wed, Nov 18, 2015 at 10:12:13PM EST:
> I am trying to calculate the dot product.
> 
> Something like this:
> 
> import numpy as np
> 
> A = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)
> print(np.dot(A, A.T))
> 
> Instead, I would like to use GEMM (not batched, I suppose).
> 
> My A can be large, something like (800000, 3), so it would seem a GPU
> could help me a lot here.

The skcuda.linalg.dot() function in scikit-cuda uses the CUBLAS GEMM functions
when both arguments have more than one dimension and sufficient GPU memory is
available.
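
For example, a minimal sketch along these lines should work (assuming
scikit-cuda and PyCUDA are installed and your GPU supports double precision):

import numpy as np
import pycuda.autoinit  # creates the CUDA context
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()

# Host-side matrix; shape (800000, 3) in your case, small here for clarity.
A = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)

# Transfer to the GPU and compute A * A.T via CUBLAS DGEMM;
# transb='T' tells GEMM to use the transpose of the second argument.
A_gpu = gpuarray.to_gpu(A)
C_gpu = linalg.dot(A_gpu, A_gpu, transb='T')

print(C_gpu.get())  # copy the result back to the host

One caveat: for an A of shape (800000, 3), np.dot(A, A.T) is
(800000, 800000), i.e. roughly 5 TB in float64, which will not fit in GPU
memory; if what you actually need is the 3x3 Gram matrix, compute
np.dot(A.T, A) (transa='T' above) instead.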
-- 
Lev Givon
Bionet Group | Neurokernel Project
http://lebedov.github.io/
http://neurokernel.github.io/


_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
