Thanks for the responses, but before going into too much detail, I first 
wanted to check whether a video API makes sense in the Gallium architecture 
(sitting alongside OpenGL, OpenVG, etc.). From what I've read about VAAPI, 
it uses the TTM video memory manager to manage its buffers, so it can 
probably coexist with Gallium3D drivers? But it seems like there'd be 
quite a bit of code duplication, especially if part of the video decode is 
done by shaders. There's a nice fluffy diagram of VAAPI's architecture here: 
https://wiki.ubuntu.com/mobile-hw-decode

I can think of lots of use cases where it makes sense to include a video API 
developers can use alongside OpenGL. EGL already allows developers to use 
OpenVG surfaces as 3D textures; it would just be nice to provide the ability 
to use video in a similar way.
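
For what it's worth, the existing OpenVG-as-texture path already looks 
roughly like the sketch below. This is only a sketch, assuming the 
EGL_KHR_image(_base), EGL_KHR_vg_parent_image and GL_OES_EGL_image 
extensions are available, with all error handling omitted; a decoded video 
frame could plausibly be imported the same way once there's an EGLImage 
target for video surfaces:

/* Sketch only: wrap an OpenVG image in an EGLImage and bind it to a GL
 * texture.  vg_ctx is the OpenVG context the VGImage was created in. */
#include <stddef.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <VG/openvg.h>

GLuint texture_from_vg_image(EGLDisplay dpy, EGLContext vg_ctx, VGImage vg_img)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR img = create_image(dpy, vg_ctx, EGL_VG_PARENT_IMAGE_KHR,
                                   (EGLClientBuffer)(size_t)vg_img, NULL);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    image_target_texture(GL_TEXTURE_2D, (GLeglImageOES)img);
    return tex;   /* the texture now aliases the VGImage's pixels */
}

If decoded frames ended up in buffers that could be wrapped the same way, an 
application could texture from video just as it can from OpenVG today.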


On Monday 18 February 2008 17:41:44 Andreas Setterlind wrote:
> On 2/18/08, Tom Cooksey <[EMAIL PROTECTED]> wrote:
> > I posted this on the Mesa3D user mailing list & Brian suggested I try the
> > dev mailing list instead - so here goes (I've also CC'd in the VAAPI
> > contact)!
>
> As support for this is needed at the decoder (software codec) level as
> well, may I also suggest that you send a copy of this to the FFmpeg
> ffmpeg-devel developers mailing list (as GStreamer uses a wrapper
> around FFmpeg's libavcodec/libavformat to decode video):
> http://ffmpeg.mplayerhq.hu/mailinglists.html
>
> Read this thread before you post though ("[FFmpeg-devel] Regarding
> Video Acceleration API"):
> http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2007-August/thread.html#34637

I'm not very familiar with ffmpeg's architecture, but I believe the project's 
focus is on software decoders. I also don't want to start a flame war, which 
seems to happen every time someone talks about GStreamer/xine/ffmpeg/blah.

What I'm confused about is which way round the APIs go. Either:

a) The GStreamer/ffmpeg/blah *decode* elements make calls into VAAPI to 
accelerate various stages of the decode and then output the decoded frames to 
whatever (probably a surface).

_or_

b) A GStreamer/ffmpeg/blah *demux* element pipes the compressed video data to 
VAAPI, which then does the decoding using a mixture of hardware & software 
and outputs to a surface. 

I'm pretty sure option (a) is what's intended, but it seems to me that it 
could be done either way.
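
To make (a) a bit more concrete, here's a rough sketch of what the per-frame 
submit path could look like. It assumes the entry points and buffer types as 
they appear in the libva headers I have here (the draft spec may differ), and 
elides all the setup (vaInitialize, vaCreateConfig, vaCreateContext, 
vaCreateSurfaces); a real decoder would also submit per-slice 
VASliceParameterBufferType buffers:

/* Option (a) sketch: the codec element keeps parsing the bitstream on the
 * CPU and only hands the parsed parameters plus the raw slice data to the
 * driver, which decodes into 'target' (a VASurfaceID). */
#include <va/va.h>

static VAStatus submit_frame(VADisplay dpy, VAContextID ctx,
                             VASurfaceID target,
                             VAPictureParameterBufferMPEG2 *pic_param,
                             void *slice_data, unsigned int slice_size)
{
    VABufferID bufs[2];

    /* Picture-level parameters extracted by the CPU-side parser. */
    vaCreateBuffer(dpy, ctx, VAPictureParameterBufferType,
                   sizeof(*pic_param), 1, pic_param, &bufs[0]);

    /* The raw slice data - mocomp/iDCT (or more) happens driver-side. */
    vaCreateBuffer(dpy, ctx, VASliceDataBufferType,
                   slice_size, 1, slice_data, &bufs[1]);

    vaBeginPicture(dpy, ctx, target);      /* decode into 'target'       */
    vaRenderPicture(dpy, ctx, bufs, 2);    /* queue the parsed data      */
    return vaEndPicture(dpy, ctx);         /* kick off the actual decode */
}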


>
> On 2/18/08, Tom Cooksey <[EMAIL PROTECTED]> wrote:
> > there is an OpenGL GStreamer sink element which accepts YUV data from
> > GStreamer & uses a shader to do the YUV->RGB conversion on the GPU (it
> > also does scaling... as you would imagine). This is (IMO) quite a
> > nifty use of a shader, but I think extending its use further would
> > start to get painful. <edit> I'm a bit new to shaders - could they be
> > used to do more of the video decoding than just colour conversion? Could
> > a shader be written to decode h.264, for example?
>
> As I understand it you could not offload the complete decoding process
> to the GPU, but you could offload portions of it by running specific
> algorithms on the GPU, for instance motion compensation (mocomp) and
> the inverse discrete cosine transform (iDCT) to start with. It may be
> possible to offload other stages as well - bitstream processing
> (CAVLC/CABAC), variable-length decoding (VLD), inverse quantization
> (IQ), in-loop deblocking and spatial-temporal deinterlacing - but the
> question is whether the GPU would be slower than the CPU at those,
> making it less effective overall to move them to the GPU. In theory,
> offloading some portions of the decoding process to the GPU should
> also reduce the total bus bandwidth needed to decode the video stream.
>
> Information mostly taken from Wikipedia's VaAPI article:
> http://en.wikipedia.org/wiki/VaAPI
>
> So the primary decoding will always be done on the CPU and then
> portions could be separated and offloaded to the GPU, similar to how
> some video decoding processes can be accelerated with MMX or SSE
> (Streaming SIMD Extensions) or XvMC while others can not (or can but
> would actually run slower rather than faster using MMX or SSE).
> http://en.wikipedia.org/wiki/XvMC
> http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
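
(For reference, the colour conversion that GStreamer sink element does on the 
GPU really is tiny. A minimal sketch - with guessed sampler names and rough 
BT.601 coefficients, not the element's actual source - would be something 
like this fragment shader:

/* Rough sketch of a YUV->RGB fragment shader: planar YUV in three
 * textures, full-range BT.601-ish coefficients. */
static const char *yuv_to_rgb_fs =
    "uniform sampler2D y_tex, u_tex, v_tex;\n"
    "void main() {\n"
    "    float y = texture2D(y_tex, gl_TexCoord[0].st).r;\n"
    "    float u = texture2D(u_tex, gl_TexCoord[0].st).r - 0.5;\n"
    "    float v = texture2D(v_tex, gl_TexCoord[0].st).r - 0.5;\n"
    "    gl_FragColor = vec4(y + 1.402 * v,\n"
    "                        y - 0.344 * u - 0.714 * v,\n"
    "                        y + 1.772 * u,\n"
    "                        1.0);\n"
    "}\n";

Doing much more of the decode than that in a shader is the part I'm unsure 
about.)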

Offloading mocomp and iDCT that way is fine, but my understanding of graphics 
hardware is that moving buffers between main memory & video memory can be an 
expensive operation? If buffers have to be copied from main memory into VRAM 
after the CPU performs part of the decode, so the GPU can do some more, then 
copied back again, etc. etc., surely performance is going to be hit quite 
badly? (I guess the solution would be to use main memory for all the buffers 
and map them into a region the GPU can access.) It would also be nice if the 
finished frames ended up in some buffer which could be used as a GL texture.
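
That "map it rather than copy it" idea can already be approximated today with 
a pixel buffer object. The sketch below assumes ARB_pixel_buffer_object and 
that the decoder hands us a finished BGRA frame; whether the driver can 
actually avoid the final upload is of course up to the driver:

/* Sketch: let the CPU-side decoder write its finished frame straight into
 * a driver-allocated pixel buffer object, then source the texture update
 * from that buffer - no extra user-space copy.  memcpy() stands in for
 * the decoder here. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

void upload_frame(GLuint pbo, GLuint tex, const void *frame,
                  int width, int height)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL,
                 GL_STREAM_DRAW);

    /* The decoder fills the driver-allocated buffer directly. */
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(dst, frame, (size_t)width * height * 4);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* Texture update sourced from the bound PBO (offset 0). */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}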




Cheers,

Tom
