On Sun, 2010-10-31 at 17:00 -0400, Andreas Kloeckner wrote:
> Hi Tomasz,
> 
> first of all, thanks for sending this patch! Much appreciated. Comments
> below.

No problem.
My responses and some rationale are below.
I am using the CUDA Reference Manual 3.2rc; all chapter numbers, etc.
below refer to that document.

Rationale.
In CUDA 3.0 NVIDIA introduced new functions for mapping OpenGL
objects into CUDA, deprecating the old ones. I believe the old
functions will remain in the libraries for a long time; they work,
and PyCUDA code using those bindings also works.
At the same time it is important that PyCUDA use the new functions,
as most code examples and discussions (e.g. on the NVIDIA forums)
use them:

http://forums.nvidia.com/index.php?showtopic=183347
http://forums.nvidia.com/index.php?showtopic=184208
http://forums.nvidia.com/index.php?showtopic=183711

I know that maintaining API compatibility is important,
and I am not proposing breaking PyCUDA.
At the same time I would like the members of the list
to discuss their usage of the OpenGL mappings, so we know
how to handle this transition.
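For reference, here is how I read the correspondence between the old
per-buffer-object calls and the new generic graphics-resource calls (a
sketch; the pairing is my reading of the 3.2rc manual, not an official
table):

```python
# Correspondence between the deprecated GL-interop driver calls and the
# CUDA 3.0+ graphics-resource calls, as I read the reference manual.
OLD_TO_NEW = {
    "cuGLRegisterBufferObject": "cuGraphicsGLRegisterBuffer",
    "cuGLMapBufferObject": "cuGraphicsMapResources"
                           " + cuGraphicsResourceGetMappedPointer",
    "cuGLUnmapBufferObject": "cuGraphicsUnmapResources",
    "cuGLUnregisterBufferObject": "cuGraphicsUnregisterResource",
}

for old, new in sorted(OLD_TO_NEW.items()):
    print("%-28s -> %s" % (old, new))
```

Note that the new API has no per-type map/unmap calls; mapping is the
same for buffers and images, which is relevant to the class split
discussed below.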

> 
> On Fri, 29 Oct 2010 19:20:17 +0200, Tomasz Rybak <bogom...@post.pl> wrote:
> > The biggest problem is that IMO (I might be wrong; I do
> > not know PyCUDA in such intimate detail) it introduces a
> > non-backward-compatible change.
> 
> Why do this? Why not introduce a new interface and leave the old one in
> place? That way, we can compile both interfaces as long as they're
> available, and it's up to the user which one to use. We shouldn't be in
> the business of imposing API use restrictions on PyCUDA users.

OK. This was intended rather as a "proof of concept",
or to show intent. Besides, can anyone suggest a better name
for the new class?
We already have BufferObject in PyCUDA, and I would like
to avoid the situation known from Java ;-) (NewIO, NIO, ...)

Is RegisteredBuffer/RegisteredImage OK as a name for the new classes?
Or is GraphicsBuffer/GraphicsImage better?

> 
> > I have split BufferObject into two:
> > BufferObject responsible for buffers (VBO, etc.)
> > ImageObject responsible for textures, etc.
> > 
> > Previously BufferObject was responsible for both those
> > cases - now anyone who wants to use CUDA on images
> > must use ImageObject (which is an API-breaking change).
> 
> Can you explain the reasoning for this split? A brief glance over the
> code didn't seem to reveal many differences. If nothing else, they
> should probably inherit from a common base that absorbs the shared code.

This comes from the NVIDIA libraries.
According to CUDA Reference Manual section 4.40.2 there are two
functions for mapping OpenGL objects into CUDA:
4.40.2.2 cuGraphicsGLRegisterBuffer
4.40.2.3 cuGraphicsGLRegisterImage
The former is for buffers, the latter for images, textures, and
render buffers.
So this calls for two sets of classes, one for buffers and
one for images; their code would look very similar,
but they would have different constructor parameters
(like the dimensions of the texture for ImageObject).

After creating those mappings the rest of the code is the same;
basically call:
cuGraphicsMapResources
cuGraphicsResourceGetMappedPointer
...
cuGraphicsUnmapResources

I guess it should be possible to use only one mapper object/function
for both image and buffer mappings.
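To illustrate, here is a minimal, GPU-free sketch of the class layout I
have in mind. The cu* calls are stubbed out so the structure can be
shown without a GPU, and the class names RegisteredBuffer/RegisteredImage
are the tentative ones proposed above, not existing PyCUDA API:

```python
# Stubs standing in for the real driver-API bindings.
def cuGraphicsGLRegisterBuffer(gl_handle, flags):
    return ("buffer-resource", gl_handle)

def cuGraphicsGLRegisterImage(gl_handle, target, flags):
    return ("image-resource", gl_handle)

class RegisteredObject:
    """Shared code: mapping/unmapping is identical for both kinds."""
    def __init__(self, resource):
        self.resource = resource
        self.mapped = False

    def map(self):
        # would call cuGraphicsMapResources
        # + cuGraphicsResourceGetMappedPointer
        self.mapped = True
        return self

    def unmap(self):
        # would call cuGraphicsUnmapResources
        self.mapped = False

class RegisteredBuffer(RegisteredObject):
    def __init__(self, gl_handle, flags=0):
        super().__init__(cuGraphicsGLRegisterBuffer(gl_handle, flags))

class RegisteredImage(RegisteredObject):
    # Images need extra constructor parameters, e.g. the GL target.
    def __init__(self, gl_handle, target, flags=0):
        super().__init__(
            cuGraphicsGLRegisterImage(gl_handle, target, flags))

buf = RegisteredBuffer(gl_handle=42)
buf.map()
buf.unmap()
```

Only the constructors differ; everything after registration lives in
the common base, which is the "one mapper for both" idea.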



> 
> Also, I noticed that your use of cudaGraphicsXXX flags with
> cuGraphicsGLRegisterBuffer() is likely erroneous, as is visible in this
> bit of documentation. There are equivalent CU_... flags which we should
> use instead. See docs here (or in the header):
> 
> http://developer.download.nvidia.com/compute/cuda/3_0-Beta1/toolkit/docs/online/group__CUGL_ge148ed92fe0728be19c9281302b5922c.html
> 
> Also fishy: image_object, if it is different code, should probably be
> using cuGraphicsGLRegisterImage() (rather than ... Buffer()).

(mumbles about copy-paste inducing errors...)
Thanks - fixed.

> 
> > I also attach sample code (Qt+OpenGL+PyCUDA) demonstrating
> > the problem; it uses the new BufferObject.
> > I have checked that it works after applying my patch;
> > I only had to change the initialisation to get the program
> > using VBO running.
> 
> I still don't like that we need to change the init sequence--why is this
> necessary?

The only thing required is ensuring that a GL-aware CUDA context
is created (cuGLCtxCreate) instead of an ordinary context (cuCtxCreate).
This is done by "import pycuda.gl.autoinit"; but in my program
I had to do it in the QGLWidget method that initializes OpenGL,
not at the beginning of the Python program.
If we leave "import pycuda.autoinit" and then add
"import pycuda.gl.autoinit" later, the only disadvantage would be
two contexts; but I am not sure how CUDA would behave with
one ordinary and one GL-aware context.

There is of course the problem of selecting which GPU to use,
but I do not know how to solve that for now.
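The pattern I ended up with, in a GPU-free sketch: defer the context
creation to the widget's GL-initialization callback and guard it so it
runs only once. In the real program the body of make_gl_context would
be "import pycuda.gl.autoinit" (which calls cuGLCtxCreate), and
MyGLWidget would be a QGLWidget subclass; both are stand-ins here:

```python
# Sketch: create the GL-aware context lazily, inside the OpenGL
# initialization callback, and only once.
_context = None

def make_gl_context():
    global _context
    if _context is None:
        _context = "gl-aware-context"  # stands in for cuGLCtxCreate
    return _context

class MyGLWidget:
    """Stands in for a QGLWidget subclass."""
    def initializeGL(self):
        # A GL context is current here, so it is safe to create the
        # CUDA context now -- not at module import time.
        self.ctx = make_gl_context()

w = MyGLWidget()
w.initializeGL()
```

The guard makes repeated initializeGL calls harmless and avoids the
two-context situation mentioned above.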

> 
> > I have not checked ImageObject.
> > 
> > I have also started writing documentation for new pycuda.gl
> > functions.
> > 
> > I have removed gl_init, as the documentation says that cuGLInit()
> > is deprecated and does not do anything (Reference 4.41.4.1).
> 
> Again, as above, we shouldn't be making API use decisions for our
> users. I assume it did do something at some point in the past, so as
> long as feasible we should still wrap it.

So if I understand you correctly, we should keep all the old functions
and classes, adding deprecation warnings to them, and add the new
classes alongside? And only remove the old classes when NVIDIA removes
the functions they use from the CUDA libraries?
I agree.

> 
> > Instead I have created gl_select_device; this function, however,
> > must be called first (before all other CUDA functions),
> > is a cuda*, not a cu*, function, and does not cooperate with
> > CUDAPP_CALL_GUARDED, so I do not know what to do with it.
> > I was not able to test it - I do not have setup with two devices.
> 
> As above, I don't like mixing in cuda* (run-time API) functionality with
> cu* (driver API) functionality.

Yeah, I understand; but this is the only way of selecting which GPU
to use with OpenGL in a multi-GPU setup.
I guess we can leave this change for the end.

> 
> > Other problems:
> > 
> > Function pycuda.tools.get_default_device is deprecated,
> > but pycuda.gl.autoinit uses it. I have changed autoinit
> > to use Device(0), but that is not elegant and should be changed.
> 
> 

Should there be an answer here, or is my email client behaving
strangely again?

> 
> > 
> > Function pycuda.tools.make_default_context contains
> > repeated code checking for presence of CUDA device:
> > 
> > def make_default_context():
> >     ndevices = cuda.Device.count()
> >     if ndevices == 0:
> >         errmsg = "No CUDA enabled device found. Please check your
> > installation."
> >         raise RuntimeError, errmsg
> > 
> >     ndevices = cuda.Device.count()
> >     if ndevices == 0:
> >         raise RuntimeError("No CUDA enabled device found. "
> >                 "Please check your installation.")
> > 
> > Why? Is that omission, or does this double checking serve
> > some purpose?
> 
> Whoops. Fixed.
> 

Thanks.
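For completeness, the de-duplicated check is just one count-and-raise.
A sketch (the count function is injectable here so the logic can be
exercised without a GPU; in PyCUDA it would be cuda.Device.count and
the return value a real context):

```python
def make_default_context(device_count):
    """De-duplicated device check from pycuda.tools (sketch)."""
    if device_count() == 0:
        raise RuntimeError("No CUDA enabled device found. "
                           "Please check your installation.")
    # ... would pick a device and return cuda.Device(n).make_context()
    return "context-on-device-0"

print(make_default_context(lambda: 1))
```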

My idea of the order of sending patches and making changes in PyCUDA is:
1. mark buffer_object and gl_init as deprecated
2. add the new data types map_flags and target_flags
3. add the new class for image mapping (and helper classes/functions),
and test it
4. add the new class for buffer mapping, and mark in the documentation
that it is the recommended way of mapping OpenGL objects in PyCUDA



-- 
Tomasz Rybak <bogom...@post.pl> GPG/PGP key ID: 2AD5 9860
Fingerprint A481 824E 7DD3 9C0E C40A  488E C654 FB33 2AD5 9860
http://member.acm.org/~tomaszrybak


_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
