Hi

Consider this code fragment:

    import numpy as np
    import pycuda.gpuarray as gpuarray
    shape = np.array(gpuarr.shape, dtype=np.int16)  # gpuarr is an existing GPUArray
    another_gpuarr = gpuarray.zeros(shape, np.uint8)

This code faults at

    pycuda/gpuarray.py, line 81

where the following call is made:

    self.gpudata = self.allocator(self.size * self.dtype.itemsize)

This is because type(self.size * self.dtype.itemsize) is now
np.int64 rather than a plain Python int, which breaks the call into the
Boost.Python-wrapped allocate function.
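
A minimal standalone sketch of the type mismatch (the shape values here
are made up for illustration):

    import numpy as np

    shape = np.array((4, 4), dtype=np.int16)
    size = shape.prod()                           # numpy integer scalar, not a Python int
    nbytes = size * np.dtype(np.uint8).itemsize
    print(type(nbytes))                           # <class 'numpy.int64'> on a 64-bit Linux build
    print(type(int(nbytes)))                      # <class 'int'>, which Boost.Python accepts

So wrapping the byte count in a plain int(), e.g.
self.allocator(int(self.size * self.dtype.itemsize)), would be one way
to guard against this.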

I'm sure there are many places where this kind of error happens.
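
Until that is fixed, a caller-side workaround (a sketch, using the same
gpuarr as in the fragment above) is to hand gpuarray.zeros a plain
tuple of Python ints instead of a numpy integer array:

    another_gpuarr = gpuarray.zeros(tuple(int(s) for s in gpuarr.shape), np.uint8)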

Regards

Nithin

