Mark Vojkovich wrote:
On Fri, 30 May 2003, Ian Romanick wrote:

Mark Vojkovich wrote:

  I'd like to propose adding a XvMCCopySurfaceToGLXPbuffer function
to XvMC.  I have implemented this in NVIDIA's binary drivers and
am able to do full framerate HDTV video textures on the higher end
GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
contents into a texture.
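
Roughly, the usage I have in mind looks like this (a sketch only: the
decoded surface, the GLX_RGBA pbuffer bound as the current read drawable,
and the texture object are all assumed to be set up already):

    #include <GL/glx.h>
    #include <X11/extensions/XvMClib.h>

    void frame_to_texture(Display *dpy, XvMCSurface *surface,
                          GLXPbuffer pbuffer, GLuint tex,
                          int width, int height)
    {
        /* Copy the decoded frame into the pbuffer. This is pipelined
         * with XvMCRenderSurface, so no explicit sync is needed. */
        XvMCCopySurfaceToGLXPbuffer(dpy, surface, pbuffer,
                                    0, 0, width, height,
                                    0, 0, XVMC_FRAME_PICTURE);

        /* Pull the pbuffer contents into the texture. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    }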

This sounds like a good candidate for a GLX extension. I've been wondering when someone would suggest something like this. :) Although, I did expect it to come from someone doing video capture work first.

I wanted to avoid something on the GLX side. Introducing the concept of an XFree86 video extension buffer to GLX seemed like a hard sell. Introducing a well-established GLX drawable type to XvMC seemed more reasonable.

Right. I thought about this a bit more last night. A better approach might be to expose this functionality as an XFree86 extension, then create a GLX extension on top of it. I was thinking of an extension where you would bind a "magic" buffer to a pbuffer, then take a snapshot from the input buffer to the pbuffer. Doing that, we could create layered extensions for binding v4l streams to pbuffers. This would be like creating a subclass in C++ and just overriding the virtual CaptureImage method. I think that would be much nicer for application code; see the sketch below.
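
Purely as a sketch of the layering idea (hypothetical; none of these
names exist anywhere):

    /* A base extension would carry a generic "video source" bound to a
     * pbuffer, and each layer (XvMC, v4l, ...) would supply its own
     * snapshot routine, like overriding a virtual method in C++. */
    typedef struct {
        XID source_id;                        /* video-specific buffer */
        Status (*capture_image)(Display *dpy, /* "virtual" snapshot    */
                                XID source_id,
                                GLXPbuffer pbuffer);
    } VideoSource;

    static Status TakeSnapshot(Display *dpy, VideoSource *src,
                               GLXPbuffer pbuffer)
    {
        /* GLX itself would only do this light-weight bookkeeping; the
         * real work happens in the source-specific capture routine. */
        return src->capture_image(dpy, src->source_id, pbuffer);
    }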


At the same time, all of the real work would still be done in the X extension (or v4l). Only some light-weight bookkeeping would live in GLX.

Over the years there have been a couple of extensions for doing things like this, both from SGI. They both work by streaming video data into a new type of GLX drawable and using that to source pixel / texel data.

  http://oss.sgi.com/projects/ogl-sample/registry/SGIX/video_source.txt
  http://oss.sgi.com/projects/ogl-sample/registry/SGIX/dmbuffer.txt

The function that you're suggesting here is a clear break from that. I don't think that's a bad thing. I suspect that you designed it this way so that the implementation would not have to live in the GLX subsystem or in the 3D driver, correct?

That was one of the goals. I generally view trying to bind a video-specific buffer to an OpenGL buffer as a bad idea, since the video buffers always end up as second class. While there have been implementations that could use video buffers as textures, etc., they've always had serious limitations, like the inability to have mipmaps, or repeat, or limited filtering ability, or other disappointing things that people are sad to learn about. I opted instead for an explicit copy from a video-specific surface to a first-class OpenGL drawable. Being able to do HDTV video textures on a 1.2 GHz P4 PC with a $100 video card has shown this to be a reasonable tradeoff.

The reason you would lose mipmaps and most of the texture wrap modes is that video streams rarely have power-of-two dimensions. In the past, hardware couldn't do mipmapping or GL_REPEAT on non-power-of-two textures. For the most part, without NV_texture_rectangle, you can't even use npot textures. :(
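
For example, with NV_texture_rectangle the setup looks something like
this (a sketch; "tex" is an already-generated texture object), and the
limitations are exactly the ones described above:

    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    /* No mipmaps: only GL_NEAREST or GL_LINEAR filtering works. */
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    /* No GL_REPEAT: wrapping is limited to clamping. */
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    /* An npot HDTV frame is legal here; GL_TEXTURE_2D would reject it. */
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGBA, 1920, 1080, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    /* Texel coordinates are unnormalized: 0..1920 and 0..1080, not 0..1. */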


With slightly closer integration between XvMC and the 3D driver, we ought to be able to do something along those lines. Basically, bind an XvMCSurface to a pbuffer. Then, each time a new frame of video is rendered, the pbuffer would be automatically updated. Given the way XvMC works, I'm not sure how well that would work, though. I'll have to think on it some more.


MPEG frames are displayed in a different order than they are rendered. It's best if the decoder has full control over what goes where and when.

Oh. That does change things a bit.


Status
XvMCCopySurfaceToGLXPbuffer (
    Display *display,
    XvMCSurface *surface,
    XID pbuffer_id,
    short src_x,
    short src_y,
    unsigned short width,
    unsigned short height,
    short dst_x,
    short dst_y,
    int flags
);

One quick comment. Don't use 'short', use 'int'. On every existing and future platform that we're likely to care about, the shorts will take up as much space as an int on the stack anyway, and slower / larger sequences of instructions will be needed to access them.

This is an X Window System extension. It's limited to the signed 16-bit coordinate system, like the X Window System itself, all of Xlib, and the rest of XvMC.

So? Just because the values are limited to a 16-bit range doesn't mean they need to be stored in a memory location that's only 16 bits wide. If X were being developed from scratch today, instead of calling everything short, it would likely be int_fast16_t. On IA-32, PowerPC, Alpha, SPARC, and x86-64, this is int. Maybe using the C99 types is the right answer anyway.
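
Concretely, the suggestion would read like this (just an illustration,
not a counter-proposal):

    #include <stdint.h>

    /* C99 fast types document the 16-bit range while letting each ABI
     * pick a register-friendly width for the actual storage. */
    Status
    XvMCCopySurfaceToGLXPbuffer (
        Display       *display,
        XvMCSurface   *surface,
        XID            pbuffer_id,
        int_fast16_t   src_x,
        int_fast16_t   src_y,
        uint_fast16_t  width,
        uint_fast16_t  height,
        int_fast16_t   dst_x,
        int_fast16_t   dst_y,
        int            flags
    );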


This function copies the rectangle specified by src_x, src_y, width, and height from the XvMCSurface denoted by "surface" to offset dst_x, dst_y within the pbuffer identified by its GLXPbuffer XID "pbuffer_id". Note that while src_x and src_y are in XvMC's standard left-handed coordinate system and specify the upper left corner of the rectangle, dst_x and dst_y are in OpenGL's right-handed coordinate system and denote the lower left corner of the destination rectangle in the pbuffer.
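
For callers who prefer to think entirely in X-style top-left
coordinates, the remap is simple. An illustrative helper (not part of
the proposal; "pbuffer_height" is the height the pbuffer was allocated
with):

    static int x_to_gl_dst_y(int pbuffer_height, int dst_y_top, int height)
    {
        /* GL counts y up from the bottom, so the lower-left corner of
         * the destination rectangle sits this far above the bottom. */
        return pbuffer_height - (dst_y_top + height);
    }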

This conceptually concerns me. Mixing coordinate systems is usually a bad call, and is likely to confuse developers. I assume this means that the image is implicitly inverted? Hmm...

X is left handed. OpenGL is right handed. Addressing a pbuffer in anything other than a right-handed coordinate system is perverse. Mixed coordinate systems seem a necessity here.

There is no inversion; it's just a remap of the origins.

Uh...so what happens if you do something like this:


        /* Make my_window the draw drawable and my_pbuffer the read
         * drawable (GLX 1.3). */
        glXMakeContextCurrent( dpy, my_window, my_pbuffer, my_ctx );
        glRasterPos3i( 0, 0, 0 );
        /* Copy from the pbuffer (read drawable) into the window. */
        glCopyPixels( 0, 0, width, height, GL_COLOR );

Unless XvMCCopySurfaceToGLXPbuffer inverts the memory representation of the image, the result of the glCopyPixels will be upside-down. In the XvMC coordinate system, (0,0) is the top of the image. In the GL coordinate system, (0,0) is the bottom of the image. It's not really a problem if you only use the pbuffer as a texture; in that case you just need to set your texture coordinates to get the result you want. :)
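
For example, assuming the copy leaves the rows in XvMC's top-down memory
order, running the t coordinate "backwards" puts the image right side up
(immediate-mode sketch):

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f); /* bottom-left  */
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f); /* bottom-right */
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f); /* top-right    */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f); /* top-left     */
    glEnd();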

"Flags" may be XVMC_TOP_FIELD, XVMC_BOTTOM_FIELD or XVMC_FRAME_PICTURE.
If flags is not XVMC_FRAME_PICTURE, the src_y and height are in field
coordinates, not frame. That is, the total copyable height is half
the height of the XvMCSurface.
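
For example, copying the full top field of a 720x480 surface would look
like this (dpy, surface, and pbuffer assumed from context):

    XvMCCopySurfaceToGLXPbuffer(dpy, surface, pbuffer,
                                0, 0,      /* src_x, src_y (field coords) */
                                720, 240,  /* width, height: half frame   */
                                0, 0,      /* dst_x, dst_y (GL coords)    */
                                XVMC_TOP_FIELD);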


XvMCCopySurfaceToGLXPbuffer does not return until the copy to the pbuffer has completed. XvMCCopySurfaceToGLXPbuffer is pipelined with XvMCRenderSurface, so no explicit synchronization between XvMCRenderSurface and XvMCCopySurfaceToGLXPbuffer is needed. The pbuffer must be of type GLX_RGBA, and the destination of the copy is the left front buffer of the pbuffer. Success is returned if no error occurred; the error code is returned otherwise.
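
In other words, the following sequence is legal as-is (a sketch; the
long argument list of XvMCRenderSurface is filled with placeholder
variables):

    XvMCRenderSurface(dpy, &ctx, picture_structure, &surface,
                      past, future, render_flags,
                      num_mbs, first_mb, &mb_array, &block_array);
    /* ... no XvMCSyncSurface() is required here ... */
    XvMCCopySurfaceToGLXPbuffer(dpy, &surface, pbuffer,
                                0, 0, width, height, 0, 0,
                                XVMC_FRAME_PICTURE);
    /* Returns only once the pbuffer contents are valid. */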

This goes against common practice. The copy should obey the setting of glDrawBuffer. I assume you must have had some reason for doing this...I just can't imagine what it was. :)


The glDrawBuffer setting has nothing to do with the pbuffer. It is part of the OpenGL context, not the pbuffer. XvMC has no knowledge of internal OpenGL context state.

If the pbuffer isn't bound to the current context, you're right. Since you're able to pass in an arbitrary pbuffer, that may not (and probably won't) be the case. Something about this part still just feels wrong to me. Forcing the copy to always go to the left front buffer seems like it will cause developers problems. After all, if you're using a double-buffered pbuffer in the usual double-buffered way, you always draw to the *back* buffer. How much more difficult would it be to allow the caller to specify where the data should go?
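
A hypothetical variant (this function does not exist; the name and the
extra argument are my invention) could take the destination buffer
explicitly instead of hard-coding the left front one:

    XvMCCopySurfaceToGLXPbufferBuffer(dpy, surface, pbuffer,
                                      src_x, src_y, width, height,
                                      dst_x, dst_y, flags,
                                      GL_BACK_LEFT /* or GL_FRONT_LEFT */);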

