On Tue, 29 May 2012 19:02:38 -0400
Andreas Kloeckner <li...@buster.tiker.net> wrote:

> Right. IOW, it's a mess. Multiple threads fighting over one GPU context
> is not a good idea, and if you have *any* other way to design your
> program, use that.

Thanks Andreas,

The core of the program has been developed for 5 years and the application for 
more than 2 years now.
The PyCUDA part is very promising, especially since I only spent 6 hours on 
it. But moving to a pure ctypes interface or a Cython interface is still an 
option for now. I quite liked the scikits.cuda interface to cuda-fft.

> If you *need* to use this design, there's a way to prevent the leaks:
> Also manage all your memory manually (see, it's getting prettier by the
> minute). The problem is that it's not guaranteed that PyCUDA can
> activate an object's home context at garbage collection time.

I understand the problem.
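For what it's worth, here is a minimal sketch of the manual-management pattern I understand you to be suggesting (buffer sizes and the single-device setup are my own assumptions, and it obviously needs a CUDA-capable machine): allocate raw device memory and free it explicitly from the thread that owns the context, rather than leaving it to garbage collection, which may run when the object's home context is not current.

```python
# Hedged sketch: manual device-memory management with the PyCUDA driver API.
# Buffers are freed explicitly while their context is current, so the
# garbage collector never has to activate a foreign context.
import numpy as np
import pycuda.driver as cuda

cuda.init()
ctx = cuda.Device(0).make_context()  # context is now current on this thread
try:
    host = np.random.rand(1024).astype(np.float32)
    dev_buf = cuda.mem_alloc(host.nbytes)  # raw allocation, no GC involvement
    cuda.memcpy_htod(dev_buf, host)
    # ... launch kernels on dev_buf here ...
    dev_buf.free()                         # explicit free, context still current
finally:
    ctx.pop()                              # always release the context
```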

> Alternatively, PyOpenCL (and PyFFT) make threading, even for a single
> context across multiple host threads, completely painless.

I would have preferred PyOpenCL and PyFFT, but the latter is limited to 
power-of-two array sizes, and we have a PCO-edge camera which does not fulfil 
this requirement.
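To make the constraint concrete, here is a small check, assuming the 2560 x 2160 frame size of the PCO-edge sensor (my assumption; adjust to your actual ROI): neither dimension is a power of two, and padding to the next power of two would nearly triple the pixel count.

```python
# Check whether the (assumed) camera frame dimensions satisfy a
# power-of-two-only FFT, and what padding they would require.
def is_power_of_two(n):
    """True iff n is a positive power of two."""
    return n > 0 and n & (n - 1) == 0

def next_power_of_two(n):
    """Smallest power of two >= n (padding target for a pow2-only FFT)."""
    p = 1
    while p < n:
        p *= 2
    return p

width, height = 2560, 2160  # assumed PCO-edge sensor size, for illustration
print(is_power_of_two(width), is_power_of_two(height))      # False False
print(next_power_of_two(width), next_power_of_two(height))  # 4096 4096
```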

> Nearly all of
> the CL API is thread-safe by definition. End of story. (CL 1.2 standard,
> Section A.2)

Great to know.

> No idea. When I wrote GPUArray, I thought of .ptr as a read-only
> attribute. Seems scikits.cuda has a different opinion.

OK, then it is in scikits.cuda. Sorry for the mistake.

Thanks a lot for all the clarifications.

Cheers,
-- 
Jérôme Kieffer
Data analysis unit - ESRF

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda