> -----Original Message-----
> From: Fu, Linjie
> Sent: Friday, July 19, 2019 05:35
> To: ffmpeg-devel@ffmpeg.org
> Cc: Fu, Linjie <linjie...@intel.com>
> Subject: [PATCH] lavc/pthread_frame: update hwaccel_priv_data in time for
> multithread
> 
> When the resolution or format changes, hwaccel_uninit/hwaccel_init are
> called to destroy and re-create the hwaccel_priv_data. If the number of
> output frames then hits the -vframes limit, the hwaccel_priv_data
> modified in the decoding thread is not propagated back to the user
> thread because of the frame-threading delay, which leads to two
> problems (sketched below):
>     1. a memory leak in the child thread;
>     2. a double free in the user thread when avcodec_close() is called.
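>
> The stale-pointer failure can be illustrated outside of FFmpeg with a
> minimal, self-contained sketch (illustrative only; the Ctx type and the
> allocation sizes are made up, not FFmpeg code):
>
> #include <stdlib.h>
>
> typedef struct { void *hwaccel_priv_data; } Ctx;
>
> int main(void)
> {
>     Ctx worker = { malloc(16) };                /* decoding thread allocates priv data */
>     Ctx user   = { worker.hwaccel_priv_data };  /* user context copies the raw pointer */
>
>     /* resolution change: the worker destroys and re-creates its priv data */
>     free(worker.hwaccel_priv_data);
>     worker.hwaccel_priv_data = malloc(32);
>
>     /* -vframes stops before the user's copy is refreshed, so on close:   */
>     free(user.hwaccel_priv_data);   /* double free of the old block       */
>     /* ... and the worker's new block is never released: a leak.          */
>     return 0;
> }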
> 
> This can be reproduced with a resolution-change stream, using -vframes
> to terminate decoding while the resolution is changing:
> 
> ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -v verbose
> -i ./reinit-large_420_8-to-small_420_8.h264 -pix_fmt nv12 -f rawvideo
> -vsync passthrough -vframes 45 -y out.yuv
> 
> The root cause is a conflict between the frame-threading delay and
> -vframes. FFmpeg does not output a frame while it is still receiving the
> initial packets, so decoding and output are asynchronous: the
> hwaccel_priv_data in the user thread is not updated until the frame with
> the new resolution is actually output.
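>
> As a toy model of that delay (illustrative only; it assumes frame
> threading delays output by roughly thread_count - 1 frames):
>
> #include <stdio.h>
>
> int main(void)
> {
>     const int thread_count = 4;               /* e.g. -threads 4          */
>     for (int pkt = 0; pkt < 8; pkt++) {
>         int out = pkt - (thread_count - 1);   /* frame returned this step */
>         if (out < 0)
>             printf("submit packet %d, no frame output yet\n", pkt);
>         else
>             printf("submit packet %d, output frame %d\n", pkt, out);
>     }
>     /* If -vframes stops here, frames decoded after the last output frame
>      * never reach the user, and neither do their context updates. */
>     return 0;
> }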
> 
> Since the user context should reflect the state of the last frame that
> was output to the user, hwaccel_priv_data needs to be updated from the
> decoding thread separately and in time.
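>
> The patch below therefore copies hwaccel_priv_data from the thread that
> most recently received a packet. A small sketch of that wrap-around index
> arithmetic (the function name is illustrative; it mirrors the cur_decoding
> computation in the patch):
>
> #include <stdio.h>
>
> /* next_decoding points one slot past the most recently submitted thread. */
> static int last_submitted_thread(int next_decoding, int thread_count)
> {
>     int cur = next_decoding - 1;
>     if (cur < 0)
>         cur += thread_count;   /* wrap around the per-thread ring buffer */
>     return cur;
> }
>
> int main(void)
> {
>     printf("%d\n", last_submitted_thread(0, 8));  /* -> 7 (wrapped) */
>     printf("%d\n", last_submitted_thread(3, 8));  /* -> 2           */
>     return 0;
> }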
> 
> Signed-off-by: Linjie Fu <linjie...@intel.com>
> ---
>  libavcodec/pthread_frame.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/libavcodec/pthread_frame.c b/libavcodec/pthread_frame.c
> index 36ac0ac..cf7a575 100644
> --- a/libavcodec/pthread_frame.c
> +++ b/libavcodec/pthread_frame.c
> @@ -282,7 +282,6 @@ static int update_context_from_thread(AVCodecContext *dst, AVCodecContext *src,
>          dst->sample_rate    = src->sample_rate;
>          dst->sample_fmt     = src->sample_fmt;
>          dst->channel_layout = src->channel_layout;
> -        dst->internal->hwaccel_priv_data = src->internal->hwaccel_priv_data;
> 
>          if (!!dst->hw_frames_ctx != !!src->hw_frames_ctx ||
>              (dst->hw_frames_ctx && dst->hw_frames_ctx->data != src->hw_frames_ctx->data)) {
> @@ -410,6 +409,7 @@ static int submit_packet(PerThreadContext *p, AVCodecContext *user_avctx,
>              pthread_mutex_unlock(&prev_thread->progress_mutex);
>          }
> 
> +        p->avctx->internal->hwaccel_priv_data = prev_thread->avctx->internal->hwaccel_priv_data;
>          err = update_context_from_thread(p->avctx, prev_thread->avctx, 0);
>          if (err) {
>              pthread_mutex_unlock(&p->mutex);
> @@ -476,7 +476,7 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
>      FrameThreadContext *fctx = avctx->internal->thread_ctx;
>      int finished = fctx->next_finished;
>      PerThreadContext *p;
> -    int err;
> +    int err, cur_decoding;
> 
>      /* release the async lock, permitting blocked hwaccel threads to
>       * go forward while we are in this function */
> @@ -544,6 +544,13 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
> 
>      if (fctx->next_decoding >= avctx->thread_count) fctx->next_decoding = 0;
> 
> +    /* update hwaccel_priv_data from decoding thread */
> +    cur_decoding = fctx->next_decoding - 1;
> +    if (cur_decoding < 0) cur_decoding += avctx->thread_count;
> +
> +    p = &fctx->threads[cur_decoding];
> +    avctx->internal->hwaccel_priv_data = p->avctx->internal->hwaccel_priv_data;
> +
>      fctx->next_finished = finished;
> 
>      /* return the size of the consumed packet if no error occurred */
> --
> 2.7.4


Since the concerns raised previously in https://patchwork.ffmpeg.org/patch/13723/
should be addressed in this version, ping for comments.

- linjie