On 11/11/2016 12:27 PM, Randy Li wrote:


On 11/11/2016 12:31 PM, Zhao Yakui wrote:
On 11/11/2016 12:02 PM, Randy Li wrote:


On 11/11/2016 11:45 AM, Xiang, Haihao wrote:
On Wed, 2016-11-02 at 19:32 +0800, Randy Li wrote:

On 10/29/2016 04:09 AM, Sean V Kelley wrote:

On Fri, 2016-10-28 at 10:05 +0800, Randy Li wrote:

On 10/27/2016 11:03 PM, Xiang, Haihao wrote:
gstreamer, and I have no idea about where I should implement functions like eglExportDRMImageMESA().

VADriverVTableEGL is deprecated in libva; we have a more efficient way to use vaapi and egl.
You can refer to the examples in libyami-utils (https://github.com/01org/libyami-utils.git) for how to use vaapi and egl.

I see, thank you.
It looks like currently VA-API only needs vaDeriveImage() and vaAcquireBufferHandle(), leaving the rendering of the output buffer to the display to the VA-API client. Would vaapisink from gstreamer play no role in this?
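For reference, the export path those two calls provide can be sketched as below. This is illustrative only: it assumes an initialized VADisplay and an already-decoded surface, and it needs a working VA driver to actually run. The function name export_surface is made up for the example; the libva calls themselves are real.

```c
#include <va/va.h>

/* Export a decoded surface as a DRM PRIME (dma-buf) handle that the
 * client can import into EGL or KMS itself -- no VADriverVTableEGL needed. */
VAStatus export_surface(VADisplay va_dpy, VASurfaceID surface)
{
    VAImage image;
    VABufferInfo buf_info;
    VAStatus status;

    /* Derive an image that aliases the surface memory (no copy). */
    status = vaDeriveImage(va_dpy, surface, &image);
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* Ask for the backing buffer as a dma-buf file descriptor. */
    buf_info.mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;
    status = vaAcquireBufferHandle(va_dpy, image.buf, &buf_info);
    if (status == VA_STATUS_SUCCESS) {
        /* buf_info.handle now holds a dma-buf fd the client can import
         * with eglCreateImageKHR(EGL_LINUX_DMA_BUF_EXT) or hand to KMS. */
        vaReleaseBufferHandle(va_dpy, image.buf);
    }

    vaDestroyImage(va_dpy, image.image_id);
    return status;
}
```

This is why the display side (X, Wayland, KMS) can live entirely in the client: the driver only has to hand out a dma-buf.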

The DRM seems more complex. The reason I want to use DRM is that the GPU would not work for 4K video rendering, so DRM means that

The guys from the algorithm department told me that the main problem is that VA-API can't process video in parallel: it won't process the next frame until vaEndPicture() and an output function (like vaDeriveImage(), vaGetImage() or vaPutSurface()) have been called.


What do you mean when you say VA-API can't process video in parallel? The examples/grid.cpp can process multiple videos in parallel.

If I understand the process sequence correctly, the next frame won't be processed at vaBeginPicture() until the previous frame has ended with vaEndPicture()? What I mean is per frame.

Yes. Before calling vaEndPicture, we can't handle the next frame.
Only after calling vaBeginPicture is a new frame started.
The decoding process can be considered a state machine:
First, call vaBeginPicture
Call vaRenderPicture to prepare the required parameters
Call vaEndPicture

If a new frame could be started before calling vaEndPicture, the decoding process would be in a corrupted state. That is wrong.
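The three steps above, as a minimal submission sketch. This assumes a created VAContextID and parameter/slice buffers that have already been filled in, and it needs a real VA driver, so treat it as illustrative; the wrapper name decode_one_frame is invented for the example.

```c
#include <va/va.h>

/* Submit one frame to the decoder. Note that vaEndPicture() queues the
 * work to the hardware and returns; it does not wait for the frame to
 * finish decoding. */
VAStatus decode_one_frame(VADisplay va_dpy, VAContextID context,
                          VASurfaceID target,
                          VABufferID *buffers, int num_buffers)
{
    VAStatus status;

    status = vaBeginPicture(va_dpy, context, target);   /* start a frame */
    if (status != VA_STATUS_SUCCESS)
        return status;

    /* Hand the driver the picture/slice parameter and data buffers. */
    status = vaRenderPicture(va_dpy, context, buffers, num_buffers);
    if (status != VA_STATUS_SUCCESS)
        return status;

    return vaEndPicture(va_dpy, context);               /* submit to HW  */
}
```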
Yes, I know that. But our video IP is not as powerful as Intel's; I need to queue the next frame into the v4l2 queue. I also need to reconstruct the frame to extract some codec information that VA-API doesn't offer me, which costs some extra time.

I think the decoder in VA-API had better introduce a sync mechanism like the encoder does.

vaEndPicture is not a blocking API call. It returns after it tells the driver that all the required info is ready and the decode command can be submitted to the HW. Then a new frame can be prepared.

If you need to synchronize, you can call the explicit sync operation to wait for the completion of decoding.
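In libva the explicit sync operation is vaSyncSurface(), which blocks until all pending operations on a surface have completed; vaQuerySurfaceStatus() can poll without blocking. A sketch (illustrative, needs a real driver; the wrapper names are invented):

```c
#include <va/va.h>

/* Block until the hardware has finished all pending work on `target`;
 * after this it is safe to map, export, or display the surface. */
VAStatus wait_for_frame(VADisplay va_dpy, VASurfaceID target)
{
    return vaSyncSurface(va_dpy, target);
}

/* Non-blocking alternative: poll the surface status instead. */
int frame_is_ready(VADisplay va_dpy, VASurfaceID target)
{
    VASurfaceStatus status;

    if (vaQuerySurfaceStatus(va_dpy, target, &status) != VA_STATUS_SUCCESS)
        return 0;
    return status == VASurfaceReady;
}
```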

Thanks
  yakui



directly outputting the video to the video controller on our platform. But I still have no idea what I should implement in the VA-API driver. It seems that the VA-API base library would open a DRM instance for the driver, but leave the configuration of connectors, encoders and planes to the VA-API driver?

About DRM, I have implemented a version which pretends to be an X output; I would like to know a better way.

Connector properties and their configuration are entirely display port oriented. "Pretending an X output" is independent of KMS configuration for your display port.

No, what I do is pretend to be a VA_DISPLAY_X11 display. I have read examples/grid.cpp from libyami-utils. It seems those jobs have been moved into the VA-API client; the whole display system looks decoupled. So would the vaapisink element plugin from gstreamer be dropped in the future, as its job is already done by kmssink and glimagesink?

Sean


The configuration of connectors, encoders and planes isn't part of the VA-API driver. You should check libdrm and drm/i915.
You can refer to the modetest test case in libdrm (git.freedesktop.org/git/mesa/drm)





_______________________________________________
Libva mailing list
Libva@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/libva