On 24.02.2014 at 22:12, Lev Givon wrote:
Received from Evgeny Lazutkin on Mon, Feb 24, 2014 at 02:56:52PM EST:
Dear all,

sorry for the delayed answer; I had a problem with the installation, but
now everything is fine.

So, I have installed scikits.cuda (as suggested on GitHub) and CULA.

I am confused. I'd like to solve a very simple system A*X = B, but it raises the
error: *TypeError: only length-1 arrays can be converted to Python scalars.*
Could you please tell me what is going wrong?

I suppose that I am doing everything wrong. Even if it works... how do I
obtain parallelization? In the example by Andreas, he used SourceModule
with C code, and there it is obvious to me what is happening.

The CULA library takes advantage of parallelization internally; you don't need
to write any CUDA kernel to use it.

But here I cannot understand it. I have tried to write my "own"
SourceModule and call functions from CULA, but when I try to manipulate
memory or write a function, an error comes up saying that I cannot do
that from __device__/__global__ code.

Oh... I am stuck :(

Could you please make corrections to the code and give me answers?
Please find the attached .py file.
You need to copy the data you wish to process to GPU memory using
pycuda.gpuarray and pass the GPU memory pointers to the CULA function
wrapper. You can access the pointer associated with a GPUArray instance using
the ptr attribute.
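For example, something along these lines should work (an untested sketch; it
assumes your scikits.cuda build exposes culaDeviceSgesv from the free CULA
Dense edition and that the wrapper follows the CULA C argument order
(n, nrhs, a, lda, ipiv, b, ldb)):

# Untested sketch: solve A*X = B in single precision on the GPU via CULA.
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import scikits.cuda.cula as cula

cula.culaInitialize()

n = 4
# CULA expects column-major (Fortran) storage:
A = np.asfortranarray(np.random.rand(n, n).astype(np.float32))
B = np.asfortranarray(np.random.rand(n, 1).astype(np.float32))

a_gpu = gpuarray.to_gpu(A)
b_gpu = gpuarray.to_gpu(B)
ipiv_gpu = gpuarray.zeros(n, np.int32)  # pivot indices

# .ptr is an integer attribute holding the device address:
cula.culaDeviceSgesv(n, B.shape[1], a_gpu.ptr, n,
                     ipiv_gpu.ptr, b_gpu.ptr, n)

X = b_gpu.get()  # the solution overwrites B on the GPU
print(np.allclose(np.dot(A, X), B, atol=1e-4))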
So, I did the following:
# Transfer to GPU
a_gpu = pycuda.gpuarray.to_gpu(A)
b_gpu = pycuda.gpuarray.to_gpu(B)
#pointer
p1 = pycuda.gpuarray.GPUArray(a_gpu, shape(A)).ptr()

and it raises the error in gpuarray.py, in __init__:
dtype = np.dtype(dtype)
TypeError: data type not understood
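
For what it's worth, the immediate problem above seems to be the extra
GPUArray constructor call: to_gpu() already returns GPUArray instances, and
ptr is a plain attribute rather than a method, so (untested) the pointers can
be taken directly:

# No new GPUArray needs to be constructed; .ptr is an attribute, not a callable:
p1 = a_gpu.ptr
p2 = b_gpu.ptr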

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
