On 2/18/08, Tom Cooksey <[EMAIL PROTECTED]> wrote:
> I posted this on the Mesa3D user mailing list & Brian suggested I try the dev
> mailing list instead - so here goes (I've also CC'd in the VAAPI contact)!

Since support for this is also needed at the decoder (software codec)
level, may I suggest that you send a copy of this to the FFmpeg
developers' mailing list (ffmpeg-devel) as well, since GStreamer
decodes video through a wrapper around FFmpeg's
libavcodec/libavformat:
http://ffmpeg.mplayerhq.hu/mailinglists.html

Read this thread before you post, though ("[FFmpeg-devel] Regarding
Video Acceleration API"):
http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2007-August/thread.html#34637

On 2/18/08, Tom Cooksey <[EMAIL PROTECTED]> wrote:
> there is an OpenGL GStreamer sink element which accepts YUV data from
> GStreamer & uses a shader to do the YUV->RGB conversion in the GPU (It
> also does scaling... as you would imagine). This is (IMO) quite a
> nifty use of a shader, but I think extending its use further would
> start to get painful. <edit> I'm a bit new to shaders - could they be used to
> do more of the video decoding than just colour conversion? Could a shader be
> written to decode h.264 for example?
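
The YUV->RGB step is a good fit for a shader precisely because it is
just a per-pixel linear transform. For reference, here is the maths
such a shader evaluates, written out in plain C (a sketch of my own,
assuming BT.601 coefficients and video-range input; the helper names
are made up):

#include <stdint.h>

/* Clamp to the 0..255 range of an 8-bit channel. */
static uint8_t clamp8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Per-pixel BT.601 YUV->RGB in fixed point (coefficients scaled by
 * 256) - the same linear transform a fragment shader would evaluate,
 * only there it runs for every pixel in parallel on the GPU. */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y - 16;    /* video-range luma starts at 16 */
    int d = u - 128;   /* chroma is centred on 128 */
    int e = v - 128;

    *r = clamp8((298 * c           + 409 * e + 128) >> 8);
    *g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp8((298 * c + 516 * d           + 128) >> 8);
}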
As I understand it, you could not offload the complete decoding
process to the GPU, but you could offload specific portions of it by
executing particular algorithms on the GPU's video hardware, for
instance motion compensation (mocomp) and the inverse discrete cosine
transform (iDCT) to start with. It may be possible to offload other
stages as well, such as bitstream parsing (CAVLC/CABAC), in-loop
deblocking, inverse quantization (IQ), variable-length decoding (VLD)
and spatial-temporal deinterlacing, but for each of those the question
is whether the GPU would be slower than the CPU, making it less
effective for the process as a whole to move that stage to the GPU. In
theory, offloading some portions of the video decoding process to the
GPU should also reduce the total bus bandwidth needed to decode the
video stream.

Information mostly taken from Wikipedia's VaAPI article:
http://en.wikipedia.org/wiki/VaAPI
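
To make one of those portions concrete, below is a naive 8x8 inverse
DCT in C, straight from the textbook formula (a sketch of my own,
deliberately unoptimized); it is exactly this kind of independent
per-block arithmetic that should map well onto a GPU:

#include <math.h>

/* Naive 8x8 inverse DCT: four plain loops rather than a fast
 * factorisation, to show the per-block, data-parallel arithmetic.
 * Every output pixel and every 8x8 block is independent, which is
 * why this stage parallelises so well. */
static void idct_8x8(const double in[8][8], double out[8][8])
{
    for (int x = 0; x < 8; x++) {
        for (int y = 0; y < 8; y++) {
            double sum = 0.0;
            for (int u = 0; u < 8; u++) {
                for (int v = 0; v < 8; v++) {
                    double cu = u ? 1.0 : 1.0 / sqrt(2.0);
                    double cv = v ? 1.0 : 1.0 / sqrt(2.0);
                    sum += cu * cv * in[u][v]
                         * cos((2 * x + 1) * u * M_PI / 16.0)
                         * cos((2 * y + 1) * v * M_PI / 16.0);
                }
            }
            out[x][y] = sum / 4.0;  /* 1/4 scale factor for N = 8 */
        }
    }
}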

So the primary decoding will always be done on the CPU, and portions
can then be separated out and offloaded to the GPU, similar to how
some video decoding stages can be accelerated with MMX, SSE (Streaming
SIMD Extensions) or XvMC while others cannot (or can, but would
actually run slower rather than faster with MMX or SSE).
http://en.wikipedia.org/wiki/XvMC
http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
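
As an example of the kind of inner loop SIMD does accelerate well,
here is a sketch of my own of bi-directional prediction averaging
using SSE2 intrinsics (it assumes the row width is a multiple of 16;
a real routine would also handle the tail and alignment):

#include <emmintrin.h>  /* SSE2 */
#include <stdint.h>

/* Average two rows of prediction pixels, 16 bytes at a time.
 * _mm_avg_epu8 computes (a + b + 1) >> 1 per byte, which matches the
 * rounding the MPEG codecs specify for bi-directional prediction. */
static void avg_row_sse2(const uint8_t *a, const uint8_t *b,
                         uint8_t *dst, int width)
{
    for (int i = 0; i < width; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_avg_epu8(va, vb));
    }
}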

As far as I know, no one has yet volunteered to implement VaAPI
support in FFmpeg (it could be a good Google Summer of Code project
for FFmpeg this year, if you ask me).
http://wiki.multimedia.cx/index.php?title=SOC
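
For anyone wondering what a starting point would look like, below is
a minimal sketch of my own (assuming the X11 backend) that just
brings VA-API up; a real FFmpeg integration would of course go on to
create a config, a context and the decode surfaces:

#include <stdio.h>
#include <X11/Xlib.h>
#include <va/va.h>
#include <va/va_x11.h>

int main(void)
{
    Display *x11 = XOpenDisplay(NULL);
    if (!x11)
        return 1;

    VADisplay va = vaGetDisplay(x11);
    int major, minor;
    if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "vaInitialize failed\n");
        return 1;
    }
    printf("VA-API %d.%d initialised\n", major, minor);

    /* A decoder would now call vaCreateConfig()/vaCreateContext()
     * and allocate surfaces before feeding it slice data. */

    vaTerminate(va);
    XCloseDisplay(x11);
    return 0;
}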


Best regards / Andreas (a.k.a. Gamester17)
XBMC Project Manager (which also uses FFmpeg and Mesa3D under Linux)
http://xbmc.org
