On Monday, 02 February 2009, Dan Goodman wrote:
> So my question is basically, how can I allocate space on the global
> memory using PyCuda, and then copy from this space? I couldn't work
> out how to do this from the docs (or even whether it's possible).

From the Python side: not a problem, just use pycuda.driver.mem_alloc().
From the GPU code side (inside a CUDA C function): you can't. Allocating memory
there would require global locking, which would be prohibitively slow.
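To make the Python-side route concrete, here is a minimal sketch of allocating raw global memory with mem_alloc() and copying to and from it (array name and sizes are illustrative; this needs a CUDA-capable GPU to run):

```python
import numpy as np
import pycuda.autoinit          # creates a context on the first available GPU
import pycuda.driver as drv

a = np.random.randn(400).astype(np.float32)

a_gpu = drv.mem_alloc(a.nbytes)  # raw allocation in GPU global memory
drv.memcpy_htod(a_gpu, a)        # copy host -> device

# ... launch kernels that read/write a_gpu here ...

result = np.empty_like(a)
drv.memcpy_dtoh(result, a_gpu)   # copy device -> host
a_gpu.free()                     # optional; memory is freed with the context otherwise
```

The returned DeviceAllocation can be passed directly as a kernel argument, so any GPU code you launch afterwards can read from or write to that space.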

> Of course if anyone has another idea for a parallel way to do my
> thresholding operation that would also be great! :-)

For the given sizes, I think that Ian's idea involving atomics is a really 
nice one.
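For readers who missed the earlier messages: one plausible sketch of an atomics-based thresholding kernel (the kernel name, threshold value, and output layout here are my own assumptions, not necessarily Ian's exact proposal). Each thread tests one sample and, on a crossing, reserves an output slot with atomicAdd, so no global locking is needed:

```python
import numpy as np
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void threshold_indices(const float *v, float thresh,
                                  int *out, unsigned int *count, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && v[i] > thresh) {
        // atomically reserve one slot in the output array
        unsigned int slot = atomicAdd(count, 1u);
        out[slot] = i;   // record which sample crossed the threshold
    }
}
""")
threshold_indices = mod.get_function("threshold_indices")

n = 1024
v = np.random.randn(n).astype(np.float32)
out = np.zeros(n, dtype=np.int32)
count = np.zeros(1, dtype=np.uint32)

threshold_indices(drv.In(v), np.float32(1.5), drv.Out(out),
                  drv.InOut(count), np.int32(n),
                  block=(256, 1, 1), grid=((n + 255) // 256, 1))
# count[0] now holds how many indices were written to out (in no
# particular order; sort on the host if order matters).
```

Since relatively few samples cross the threshold at any step, contention on the counter stays low and the atomic approach should scale well for the sizes discussed.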

Andreas


_______________________________________________
PyCuda mailing list
[email protected]
http://tiker.net/mailman/listinfo/pycuda_tiker.net
