Mark Vojkovich wrote:
On Sat, 31 May 2003, Ian Romanick wrote:

Mark Vojkovich wrote:

On Fri, 30 May 2003, Ian Romanick wrote:

Mark Vojkovich wrote:

 I'd like to propose adding a XvMCCopySurfaceToGLXPbuffer function
to XvMC.  I have implemented this in NVIDIA's binary drivers and
am able to do full framerate HDTV video textures on the higher end
GeForce4 MX cards by using glCopyTexSubImage2D to copy the Pbuffer
contents into a texture.
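The decode-to-texture path described above can be sketched roughly as follows. This is a minimal sketch, not NVIDIA's actual driver code: it assumes the proposed XvMCCopySurfaceToGLXPbuffer entry point (declared here from the prototype later in this thread, since no header ships it), an already-created pbuffer, GLX 1.3 context, and decoded XvMCSurface, with all error checking omitted.

```c
#include <X11/Xlib.h>
#include <X11/extensions/XvMClib.h>
#include <GL/glx.h>
#include <GL/gl.h>

#ifndef GL_TEXTURE_RECTANGLE_NV
#define GL_TEXTURE_RECTANGLE_NV 0x84F5
#endif

/* Proposed entry point, per the prototype in this thread. */
Status XvMCCopySurfaceToGLXPbuffer(Display *, XvMCSurface *, XID,
                                   short, short,
                                   unsigned short, unsigned short,
                                   short, short, int);

/* Copy one decoded frame into a rectangle texture via the pbuffer. */
void frame_to_texture(Display *dpy, XvMCSurface *surface,
                      GLXPbuffer pbuffer, GLXContext ctx,
                      GLuint tex, int width, int height)
{
    /* 1. Blit the decoded frame into the pbuffer. */
    XvMCCopySurfaceToGLXPbuffer(dpy, surface, pbuffer,
                                0, 0, width, height, 0, 0, 0);

    /* 2. Read back from the pbuffer into the texture;
     * NV_texture_rectangle allows the non-power-of-two video size. */
    glXMakeContextCurrent(dpy, pbuffer, pbuffer, ctx);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0,
                        0, 0, 0, 0, width, height);
}
```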

This sounds like a good candidate for a GLX extension. I've been wondering when someone would suggest something like this. :) Although, I did expect it to come from someone doing video capture work first.

I wanted to avoid something from the GLX side. Introducing the
concept of an XFree86 video extension buffer to GLX seemed like a hard
sell. Introducing a well-established GLX drawable type to XvMC seemed more reasonable.

Right. I thought about this a bit more last night. A better approach might be to expose this functionality as an XFree86 extension, then create a GLX extension on top of it. I was thinking of an extension where you would bind a "magic" buffer to a pbuffer, then take a snapshot from the input buffer to the pbuffer. Doing that we could create layered extensions for binding v4l streams to pbuffers. This would be like creating a subclass in C++ and just over-riding the virtual CaptureImage method. I think that would be much nicer for application code.


   This isn't capture.  It's decode.  XvMC is a video acceleration
architecture not a capture architecture.  Even with capture, HW capture
buffer formats don't always line up nicely with pbuffer or texture formats.

I understand that it's not capture. However, it's conceptually similar. You have some opaque source that's generating frames of video. You take whatever is the current frame and slurp it over to your pbuffer. It doesn't really matter if the video source is a file on your hard drive, a webcam on your PC, or a Dept. of Transportation traffic camera over the web.


That's why I'm saying that this new XvMC functionality could be used at the basis of a more general GLX extension. I think a more abstract GLX extension is going to be far more useful to application developers. However, without something like XvMCCopySurfaceToGLXPbuffer in each thing that we want to use as a video source, the GLX extension can't be implemented.

The fact that buffer formats don't match is why I proposed having a CaptureImage (or something) function in the GLX extension. That's the explicit copy of the next frame (either from the MPEG file or from the video camera) to the pbuffer. It's just a name that would redirect to XvMCCopySurfaceToGLXPbuffer in this case. The reason for the "bind" call is so that CaptureImage (which could probably use a better name, but it was late when I thought of it) knows what the video source is and what type of source (i.e., XvMC, v4l, etc.) it is. That way it knows how to do the copy. Then the application developer doesn't have to bother with that in their code. That way if they want to switch from a video file to live video, they just make a different bind call, and their code path doesn't have to change.

The really cool thing is that if all the real work (i.e., copying the data) is done in the XFree86 extensions, then all of the code for the GLX extension is *completely* device independent. That ends up being a *HUGE* win.
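The bind-then-capture shape described above might look something like the following. This is a purely hypothetical sketch: none of these names exist anywhere; they only illustrate how a device-independent GLX layer could dispatch to per-source copy functions. Opaque Xlib types are stubbed so the sketch stands alone.

```c
/* Hypothetical layered GLX extension -- illustrative names only. */
typedef struct _Display Display;   /* opaque, as in Xlib */
typedef unsigned long XID;
typedef XID GLXPbuffer;

typedef enum {
    VIDEO_SOURCE_XVMC = 1,  /* CaptureImage would redirect to
                             * XvMCCopySurfaceToGLXPbuffer */
    VIDEO_SOURCE_V4L  = 2   /* ...or to a v4l-backed copy */
} VideoSourceType;

/* Associate an opaque video source with a pbuffer.  Switching from a
 * video file to live video is just a different bind call. */
int glXBindVideoSource_(Display *dpy, GLXPbuffer pbuf,
                        VideoSourceType type, void *source);

/* Copy the source's current frame into the bound pbuffer; the
 * dispatch on the bound type is what keeps application code
 * device- and source-independent. */
int glXCaptureImage_(Display *dpy, GLXPbuffer pbuf);
```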

hardware couldn't do mipmapping or GL_WRAP on non-power-of-two textures. For the most part, without NV_texture_rectangle, you can't even use npot textures. :(

And NV_texture_rectangle textures are still second class compared to normal textures. No video formats are powers of two, unfortunately.

Fair enough. But that should change soon. :)


Status
XvMCCopySurfaceToGLXPbuffer (
    Display *display,
    XvMCSurface *surface,
    XID pbuffer_id,
    short src_x,
    short src_y,
    unsigned short width,
    unsigned short height,
    short dst_x,
    short dst_y,
    int flags
);

One quick comment. Don't use 'short', use 'int'. On every existing and future platform that we're likely to care about the shorts will take up as much space as an int on the stack anyway, and slower / larger / more instructions will need to be used to access them.

This is an X-window extension. It's limited to the signed 16 bit coordinate system like the X-window system itself, all of Xlib and the rest of XvMC.

So? Just because the values are limited to 16-bit doesn't necessitate that they be stored in a memory location that's only 16-bits. If X were being developed from scratch today, instead of calling everything short, it would likely be int_fast16_t. On IA-32, PowerPC, Alpha, SPARC, and x86-64, this is int. Maybe using the C99 types is the right answer anyway.


   XvMC is already using shorts.  No reason to be inconsistent now.
It reflects the underlying protocol limitations anyhow.  Something
to keep in mind for future extensions though.

True. It should be pretty easy to write a perl script that replaces 'unsigned short' with 'uint_fast16_t' and 'short' with 'int_fast16_t', though. :) You are right, though. Consistency is probably the right way to go in this case.


This function copies the rectangle specified by src_x, src_y, width,
and height from the XvMCSurface denoted by "surface" to offset dst_x, dst_y within the pbuffer identified by its GLXPbuffer XID "pbuffer_id".
Note that while src_x and src_y are in XvMC's standard left-handed
coordinate system and specify the upper left hand corner of the
rectangle, dst_x and dst_y are in OpenGL's right-handed coordinate system and denote the lower left hand corner of the destination rectangle in the pbuffer.

This conceptually concerns me. Mixing coordinate systems is usually a bad call, and is likely to confuse developers. I assume this means that the image is implicitly inverted? Hmm...

X is left handed. OpenGL is right handed. Addressing a pbuffer in anything other than a right-handed coordinate system is perverse. Mixed coordinate systems seem a necessity here.

There is no inversion, it's just a remap of the origins.

Uh...so what happens if you do something like this:


        glXMakeContextCurrent( dpy, my_window, my_pbuffer, my_ctx );
        glRasterPos3i( 0, 0, 0 );
        glCopyPixels( 0, 0, width, height, GL_COLOR );

Unless XvMCCopySurfaceToGLXPbuffer inverts (the memory representation of) the image, the result of the glCopyPixels will be upside-down. In the XvMC coordinate system, (0,0) is the top of the image. In the GL coordinate system, (0,0) is the bottom of the image. It's not really a problem if you only use the pbuffer as a texture. In that case you just need to set your texture coordinates to get the result you want. :)
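Fixing it up with texture coordinates, as suggested, amounts to flipping the t axis. A small self-contained sketch (hypothetical helper, assuming normalized GL_TEXTURE_2D-style coordinates):

```c
#include <stddef.h>

/* Flip the t (vertical) component of an array of (s, t) texture
 * coordinate pairs in place; coords holds 2 * count floats. */
void flip_texcoords(float *coords, size_t count)
{
    for (size_t i = 0; i < count; i++)
        coords[2 * i + 1] = 1.0f - coords[2 * i + 1];
}
```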

Perhaps it wasn't worded clearly. There is no inversion. It's just that the location of the rectangle is described differently in the source and the destination. src_x,src_y describes the upper left hand corner of the rectangle in the XvMCSurface. dst_x,dst_y describes the lower left hand corner of the rectangle in the pbuffer.

   I should remove coordinate system references and state it just
like that.

Actually, I would do the opposite. You need *more* description. What you've just described to me *is* an inversion. The pixel at (0,0) in the source coordinate system is at (0,height) in the destination coordinate system. I believe that this is the right behavior, and it is what apps will most likely want. However, it really does need to be explicitly spelled out in the description.


I should have worded things differently in my initial message. No matter what, there is some inversion happening. Either the image coordinates are inverted and the memory representation is not (what we have here), or the image coordinates are not inverted but the memory representation is (if (0,0) in the source were the same pixel as (0,0) in the destination). You can't escape it! Wuhahahah! :)
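The origin remap being discussed can be written out explicitly. A sketch, assuming the pbuffer height is known: a rectangle whose top edge is y_top rows from the top occupies rows [y_top, y_top + rect_h), so its bottom edge sits pbuffer_h - (y_top + rect_h) rows up from the bottom, which is the GL-style dst_y the proposal expects.

```c
/* Convert a destination y given as an X-style top-left coordinate
 * into the GL-style bottom-left dst_y used by the proposed
 * XvMCCopySurfaceToGLXPbuffer. */
int dst_y_from_top_left(int pbuffer_h, int y_top, int rect_h)
{
    return pbuffer_h - (y_top + rect_h);
}
```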

won't be) the case. Something about this part still just feels wrong to me. Forcing the copy to always go to the left front buffer seems like it will cause developers problems. After all, if you're using a double-buffered pbuffer in a "usual" double-buffered way, you always draw to the *back* buffer. How much more difficult would it be to allow the caller to specify where the data should go?

I suppose I could take an argument that's the same as glDrawBuffer's. That's not too difficult to implement. I'm wondering what the error conditions should be. I think OpenGL doesn't care if you've specified draw buffers that don't exist. It just draws to the ones that do.

Okay. Adding that parameter would be very helpful for making the layered GLX extension from earlier in this message.


The man page for glDrawBuffer says, "GL_INVALID_OPERATION is generated if none of the buffers indicated by mode exists." I think that XvMCCopySurfaceToGLXPbuffer could do whatever it wanted in this case. Either silently fail or return an error. The layered GLX extension could do this error checking. The documentation just has to be very clear about what it will do. :)

_______________________________________________
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel