Tom Cooksey wrote:
> I posed this on the Mesa3D user mailing list & Brian suggested I try the dev 
> mailing list instead - so here goes (I've also CC'd in the VAAPI contact)!
> 
> 
> -----------------------------------------------------------------------------------------
> 
> Are there any plans for adding a video acceleration API to the
> Gallium3D architecture?
> 
> I'm trying to wrap my head around video acceleration, specifically how
> GStreamer can fit in with the Gallium architecture. At the moment,
> there is an OpenGL GStreamer sink element which accepts YUV data from
> GStreamer & uses a shader to do the YUV->RGB conversion on the GPU (it
> also does scaling, as you would imagine). This is (IMO) quite a
> nifty use of a shader, but I think extending its use further would
> start to get painful. <edit> I'm a bit new to shaders - could they be used to 
> do more of the video decoding than just colour conversion? Could a shader be 
> written to decode h.264, for example? </edit>
> 
> While YUV->RGB conversion is expensive & good to off-load to the GPU,
> there's lots more which can be achieved by the GPU - full h.264
> decoding, for example. I'm aware of Freedesktop's VAAPI - which looks
> like exactly what I'm thinking about - my question is whether this is
> going to be implemented/ported to the Gallium3D architecture? Also, are
> there any plans to implement GStreamer sink elements for this API?
> 
> If there's no hardware support for e.g. h.264, what happens? Does
> GStreamer fall back to other (software) decode elements, or will
> Gallium's software fallbacks be used instead (which presumably would
> be slower)?
> 
> Also, there doesn't seem to be a way of using the VAAPI to accelerate
> other, unsupported codecs (i.e. a way to pass in raw YUV data). <edit> I've 
> looked a bit closer at the VAAPI spec and now think there may be a way to 
> pass in YUV data buffers </edit>.
> 
> 
> Any help understanding this would be greatly welcomed. :-)


Regarding whether you could do more with a shader than just colorspace
conversion: yes, definitely. For MPEG-2 you can do the IDCT and motion
compensation. The entropy decoding (some sort of Huffman coding is used)
is not suitable for pixel shaders, since it is inherently serial.
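To give a feel for the kind of work the IDCT stage involves, here is a
minimal CPU sketch of the separable 8x8 inverse DCT that MPEG-2 uses (a
reference-style floating-point version, not the bit-exact integer variant
real decoders require); on a GPU, the per-pixel sum would run in a
fragment shader instead of a Python loop:

```python
import math

def idct_1d(coeffs):
    """Inverse 8-point DCT (DCT-III), applied per row/column in MPEG-2."""
    n = len(coeffs)
    out = []
    for x in range(n):
        s = 0.0
        for u in range(n):
            # Normalization: the DC basis function gets a smaller weight.
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            s += cu * coeffs[u] * math.cos((2 * x + 1) * u * math.pi / (2 * n))
        out.append(s)
    return out

def idct_2d(block):
    """Separable 8x8 inverse DCT: transform rows, then columns."""
    rows = [idct_1d(r) for r in block]
    cols = [idct_1d([rows[y][x] for y in range(8)]) for x in range(8)]
    # Transpose back so the result is indexed [y][x] like the input.
    return [[cols[x][y] for x in range(8)] for y in range(8)]
```

Each output sample depends only on the 64 input coefficients of its block,
which is exactly the data-parallel pattern shaders are good at - unlike
entropy decoding, where every symbol depends on the previous one.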
The same should be true for h.264 - everything but the entropy decoding
(CABAC) should be doable in a pixel shader, I think, though it might
require a fairly powerful GPU (inverse transform, intra-frame
prediction, motion compensation, deblocking).
For full decoding of h.264 on the GPU, NVIDIA and ATI use dedicated
hardware blocks. I don't know exactly which parts of the decode these
blocks handle, but certainly the entropy decoding. The rest might still
be done with pixel shaders.
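For reference, the colorspace conversion that the GStreamer GL sink
already off-loads is just a per-pixel affine transform. A minimal CPU
sketch of that math (assuming full-range BT.601 coefficients - the sink
may well use a different matrix or the studio-swing variant):

```python
def yuv_to_rgb(y, u, v):
    """Full-range BT.601 YCbCr -> RGB, the per-pixel math a shader would run.
    y in [0, 255]; u and v are chroma samples centered at 128."""
    d = u - 128.0
    e = v - 128.0
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0.0, min(255.0, x))
    return clamp(r), clamp(g), clamp(b)
```

Since every pixel is independent, this maps trivially onto a fragment
shader; the harder decoder stages above are attractive GPU targets for
the same reason.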

Roland

_______________________________________________
Mesa3d-dev mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev
