On 27/12/2021 20:31, Soft Works wrote:

-----Original Message-----
From: ffmpeg-devel <ffmpeg-devel-boun...@ffmpeg.org> On Behalf Of Mark
Thompson
Sent: Monday, December 27, 2021 7:51 PM
To: ffmpeg-devel@ffmpeg.org
Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
for mapping qsv frame to opencl

On 16/11/2021 08:16, Wenbin Chen wrote:
From: nyanmisaka <nst799610...@gmail.com>

mfxHDLPair was added to qsv, so modify the qsv->opencl map function as well.
Now the following commandline works:

ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
-init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
-hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
-i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
-c:v h264_qsv output.264

Signed-off-by: nyanmisaka <nst799610...@gmail.com>
Signed-off-by: Wenbin Chen <wenbin.c...@intel.com>
---
   libavutil/hwcontext_opencl.c | 3 ++-
   1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
index 26a3a24593..4b6e74ff6f 100644
--- a/libavutil/hwcontext_opencl.c
+++ b/libavutil/hwcontext_opencl.c
@@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
   #if CONFIG_LIBMFX
       if (src->format == AV_PIX_FMT_QSV) {
           mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
-        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
+        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
+        va_surface = *(VASurfaceID*)pair->first;
       } else
   #endif
           if (src->format == AV_PIX_FMT_VAAPI) {

Since these frames can be user-supplied, this implies that the user-facing
API/ABI for AV_PIX_FMT_QSV has changed.

It looks like this was broken by using HDLPairs when D3D11 was introduced,
which silently changed the existing API for DXVA2 and VAAPI as well.

Could someone related to that please document it properly (clearly not all
possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
the API change happened?

Hi Mark,

QSV contexts always need to be backed by a child context, which can be DXVA2,
D3D11VA or VAAPI. You can create a QSV context either by deriving it from one
of those contexts, or by creating a new QSV context, in which case an
appropriate child context is created - either implicitly (auto mode) or
explicitly, as the ffmpeg implementation does in most cases.

... or by using the one the user supplies when they create it.

When working with "user-supplied" frames on Linux, you need to create a VAAPI
context with those frames and derive a QSV context from that context.

There is no way to create or supply QSV frames directly.

???  The ability for the user to set up their own version of these things is 
literally the whole point of the split alloc/init API.


// Some user stuff involving libmfx - has a D3D or VAAPI backing, but this
// code doesn't need to care about it.

// It has a session and creates some surfaces to use, with MemId filled in
// a way compatible with ffmpeg.
user_session = ...;
user_surfaces = ...;

// No ffmpeg involved before this; now we want to pass these surfaces we've
// got into ffmpeg.

// Create a device context using the existing session.

mfx_ctx = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_QSV);

dc = (AVHWDeviceContext*)mfx_ctx->data;
mfx_dc = dc->hwctx;
mfx_dc->session = user_session;

av_hwdevice_ctx_init(mfx_ctx);

// Create a frames context out of the surfaces we've got.

mfx_frames = av_hwframe_ctx_alloc(mfx_ctx);

fc = (AVHWFramesContext*)mfx_frames->data;
fc->pool = user_surfaces.allocator;
fc->width = user_surfaces.width;
// etc.

mfx_fc = fc->hwctx;
mfx_fc->surfaces = user_surfaces.array;
mfx_fc->nb_surfaces = user_surfaces.count;
mfx_fc->frame_type = user_surfaces.memtype;

av_hwframe_ctx_init(mfx_frames);

// Do stuff with frames.

Looking at the code:

mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];

A QSV frames context uses the mfxFrameSurface1 structure to describe the
individual frames, and an mfxFrameSurface1 can only come from the MSDK
runtime; it cannot be user-supplied.

I don't think there's anything that needs to be documented, because whatever
user-side manipulation an API consumer wants to perform, it would always need
to derive the context - either from QSV to D3D/VAAPI, or from D3D to VAAPI -
in order to access and manipulate individual frames.
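For what it's worth, the derivation chain described above can be sketched with the public libavutil hwcontext API roughly as follows. The function names are the real API (av_hwdevice_ctx_create_derived, av_hwframe_ctx_create_derived), but error unwinding is omitted and this needs an actual VAAPI device to run, so treat it as a sketch rather than working code:

```c
#include <libavutil/hwcontext.h>

/* Sketch: derive a QSV frames context from an existing VAAPI frames
 * context, so the same surfaces are reachable through both APIs. */
static int derive_qsv_from_vaapi(AVBufferRef *vaapi_frames,
                                 AVBufferRef **qsv_frames_out)
{
    AVHWFramesContext *fc = (AVHWFramesContext *)vaapi_frames->data;
    AVBufferRef *qsv_dev = NULL;
    int err;

    /* Derive a QSV device from the VAAPI device owning the surfaces. */
    err = av_hwdevice_ctx_create_derived(&qsv_dev, AV_HWDEVICE_TYPE_QSV,
                                         fc->device_ref, 0);
    if (err < 0)
        return err;

    /* Derive a QSV frames context sharing the underlying surfaces. */
    return av_hwframe_ctx_create_derived(qsv_frames_out, AV_PIX_FMT_QSV,
                                         qsv_dev, vaapi_frames,
                                         AV_HWFRAME_MAP_DIRECT);
}
```

Going the other way (QSV to VAAPI) is the same pattern with the device type and pixel format swapped, which is what the hwmap filter in the command line at the top of this thread does internally.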

- Mark
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
