Hi,

* Chris Heuser <[email protected]> [2009-03-09 00:30:58 -0400]:

> Is there a way for me to use multiple python threads in order to run cuda
> code on multiple GPUs?
>
> I have created several threads, and in each I attempt to create a context
> for a different cuda device, but I am getting an "invalid context" error
> when I try to copy an array over.
> Any suggestions?

I use pp (http://www.parallelpython.com/) to run different python
instances. It is very easy and even allows execution on different
machines (though I have not tried this with CUDA code). Essentially, it
spawns a new Python instance for each job, so each CUDA call runs in a
separate process rather than a thread, and the Global Interpreter Lock is
avoided. A CUDA context is also bound to the thread that created it, so
touching memory from a context created in another thread is what produces
the "invalid context" error; separate processes sidestep this entirely.

There could be ways that involve less overhead, but this works fine for
me.
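For reference, the same one-process-per-GPU pattern can be sketched with the
standard multiprocessing module (pp's Server.submit achieves much the same
thing). The pycuda calls are shown only as comments and are assumptions
about your setup, not tested code; the device IDs and the placeholder
computation are purely illustrative:

```python
import multiprocessing

def worker(device_id, x):
    # In each child process you would initialize CUDA and create a
    # context bound to that process's own device, e.g. (hypothetical
    # pycuda usage, deliberately commented out):
    #   import pycuda.driver as cuda
    #   cuda.init()
    #   ctx = cuda.Device(device_id).make_context()
    #   ... allocate, copy arrays, run kernels ...
    #   ctx.pop()
    # Placeholder computation standing in for the GPU work:
    return device_id, x * x

if __name__ == "__main__":
    # One worker process per GPU; each owns its CUDA context exclusively,
    # so no context is ever shared across threads.
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.starmap(worker, [(0, 3.0), (1, 4.0)])
    print(sorted(results))  # -> [(0, 9.0), (1, 16.0)]
```

Because each child is a full process, nothing CUDA-related crosses a thread
boundary, which is exactly why this avoids the error above.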

Hope that helps!
Nicholas

_______________________________________________
PyCuda mailing list
[email protected]
http://tiker.net/mailman/listinfo/pycuda_tiker.net
