Hi,

2016-06-10 10:49 GMT+02:00 Mark Thompson <s...@jkqxz.net>:
> On 10/06/16 08:06, Anton Khirnov wrote:
>> Quoting Mark Thompson (2016-06-10 01:25:58)
>>> Adds a general mechanism for setting driver quirks, and detects the
>>> driver name to set them.
>>>
>>> When the i965 driver is being used, destroy all parameter buffers
>>> explicitly after use despite passing them to vaRenderPicture().
>>> ---
>>> An alternative approach to solving the same problem as the previous patch.  
>>> Maybe this is actually better once we start detecting the driver anyway?
>>
>> Yes, this looks much better to me.
>
> Hmm, ok.  Thinking a bit further, maybe this should be available a bit more
> generally because
> <https://git.libav.org/?p=libav.git;a=blob;f=libavcodec/vaapi.c;hb=HEAD#l180>
> should also use it - maybe a property of the device rather than inside the 
> encoder?
>
> A flags field in AVVAAPIDeviceContext would seem to do the right thing, but it
> is unclear who would initialise it - the user, or inside 
> av_hwdevice_ctx_init().

Since most VA drivers implemented the incorrect behaviour, IMHO a better
approach is to use that flag to enable the correct behaviour instead.
That way, VA drivers known to be correct can be flagged automatically
after testing, and users can set the flag themselves if they have or are
developing such a driver. I believe the gallium VA driver implements the
correct behaviour, though.
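
To make that concrete, here is roughly what I have in mind. This is only
a sketch: the flag, the helper and the vendor-string match are made up,
and nothing of it exists in AVVAAPIDeviceContext yet.

    #include <string.h>
    #include <va/va.h>

    /* Hypothetical flag: set when the driver is known to destroy the
     * parameter buffers inside vaRenderPicture(), i.e. the "correct"
     * behaviour. Untested drivers stay unflagged. */
    #define VAAPI_FLAG_RENDER_DESTROYS_PARAM_BUFFERS (1 << 0)

    static unsigned int guess_driver_flags(VADisplay display)
    {
        const char *vendor = vaQueryVendorString(display);
        unsigned int flags = 0;

        /* I believe the gallium driver already does the right thing;
         * the vendor-string match is only a guess and needs testing. */
        if (vendor && strstr(vendor, "Mesa Gallium"))
            flags |= VAAPI_FLAG_RENDER_DESTROYS_PARAM_BUFFERS;

        return flags;
    }

Whether such a field lives in AVVAAPIDeviceContext and is filled in by
av_hwdevice_ctx_init(), or is set by the user, is a separate question;
the point is only that untested drivers default to the explicit-destroy
path.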

> I should probably acquire a suitable AMD card so that I can actually test this
> on more than one target.
>
>> Btw, do the intel driver devs know about this problem?
>
> Yes.  I don't know if it's been considered recently, but see
> <https://lists.freedesktop.org/archives/libva/2012-July/000961.html> for some
> hand-wringing a while ago.

Update: this was further discussed during the Linux "port" of the
proprietary VA driver. The trend seemed to be at least to keep the VA
buffers alive until vaEndPicture(). A rewording of vaRenderPicture() was
also considered. There are also VA buffers that can be used for
statistics, so implicitly destroying everything was not very useful:
users would have needed to know which kind of buffer is destroyed
explicitly and which implicitly. That was not a convincing approach.
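
In code, the lifetime discussed above would look roughly like this. Again
a sketch only, reusing the made-up flag from the previous snippet: ctx,
surface, params and n_params stand for whatever the caller already has,
and error handling is omitted.

    vaBeginPicture(ctx->display, ctx->context_id, surface);
    vaRenderPicture(ctx->display, ctx->context_id, params, n_params);
    vaEndPicture(ctx->display, ctx->context_id);

    /* Only drivers not flagged as destroying the parameter buffers in
     * vaRenderPicture() need the explicit clean-up, e.g. i965. */
    if (!(ctx->driver_flags & VAAPI_FLAG_RENDER_DESTROYS_PARAM_BUFFERS)) {
        for (int i = 0; i < n_params; i++)
            vaDestroyBuffer(ctx->display, params[i]);
    }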

> Also compare Intel's libyami behaviour:
> <https://github.com/01org/libyami/blob/0aef8be5122f7ef917258aafcdb44a22bdb0c1c8/vaapi/vaapipicture.cpp#L65>
> <https://github.com/01org/libyami/blob/0aef8be5122f7ef917258aafcdb44a22bdb0c1c8/vaapi/VaapiBuffer.cpp#L86>

The reference and original implementation remains libgstvaapi. There, VA
buffers were destroyed right after vaRenderPicture() because that was
the least dirty solution and, at the time, it worked for all drivers.

> with use in the official encoding example:
> <https://cgit.freedesktop.org/libva/tree/test/encode/h264encode.c#n1764>.
>
> - Mark

Regards,
Gwenole.
_______________________________________________
libav-devel mailing list
libav-devel@libav.org
https://lists.libav.org/mailman/listinfo/libav-devel
