Help!

I am trying to write an ffmpeg video filter that creates a new output
buffer, because it may resize the frame.  I'm using vf_scale and vf_pad as
models, and my logic looks to me like theirs, yet I get no video in the
output file (a few hundred KB of something, but no picture).  Evidently I
don't understand something basic about avfilter buffers, but I cannot guess
what that might be.

vf_scale implements only start_frame and draw_slice.  My code that tries to
emulate those is quoted below.  My draw_slice calls an external function to
accumulate the slices (into its own private buffer) and, after all slices
have been seen, calls another function to copy the transformed image into
the avfilter output buffer that was created in start_frame.  Those functions
are getting called and are copying data, but nothing comes through to the
output.  Adding an end_frame routine, like the one in vf_pad, that
unreferences the outlink buffer, does not help; a sketch of that end_frame
follows the code below.

static void start_frame(AVFilterLink *link, AVFilterBufferRef *picref)
{
  /* get our context pointers */
    AVFilterContext *ctx = link->dst;
    vfpParams *pfp = ctx->priv;
  /* get link to next stage */
    AVFilterLink *outlink = link->dst->outputs[0];
    AVFilterBufferRef *outpicref;
  /* get a buffer for the output image */
    outpicref = avfilter_get_video_buffer(outlink, AV_PERM_WRITE,
                                          outlink->w, outlink->h);
    avfilter_copy_buffer_ref_props(outpicref, picref);
  /* post actual o/p size */
    outpicref->video->w = outlink->w;
    outpicref->video->h = outlink->h;
  /* attach buffer to link */
    outlink->out_buf = outpicref;
  /* zero the input row counter */
    pfp->rcnt = 0;
  /* notify the next stage */
    avfilter_start_frame(outlink, outpicref);
}

static void draw_slice(AVFilterLink *inlink, int y, int h, int slice_dir)
{
  /* get our context pointers */
    AVFilterContext *ctx = inlink->dst;
    vfpParams *pfp = ctx->priv;
  /* get in and out buffers */
    AVFilterBufferRef *ibf = inlink->cur_buf;
    AVFilterLink *outlink = ctx->outputs[0];
    AVFilterBufferRef *obf = outlink->out_buf;
  /* load this strip */
    QVFP_load( y, h, ibf->data );
  /* when whole image is loaded... */
    if ((pfp->rcnt += h) >= pfp->hgt) {
      /* transform image into output buffer */
        QVFP_transform( obf->data );
      /* pass transformed image to next stage */
        avfilter_draw_slice( outlink, 0, pfp->hgt, 1 );
    }
}
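
That end_frame is roughly the sketch below (paraphrased; it just releases
the picture references and forwards end_frame to the next filter):

static void end_frame(AVFilterLink *inlink)
{
  /* get link to next stage */
    AVFilterLink *outlink = inlink->dst->outputs[0];
  /* release the input picture */
    avfilter_unref_buffer(inlink->cur_buf);
    inlink->cur_buf = NULL;
  /* release our reference to the output picture */
    avfilter_unref_buffer(outlink->out_buf);
    outlink->out_buf = NULL;
  /* notify the next stage that the frame is complete */
    avfilter_end_frame(outlink);
}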

Can anyone see what I'm doing wrong?  If you think it could be in some
other part of the code, I'll be happy to post it all.
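
In case it matters, the only other place that touches the frame geometry is
the config_props on my output pad; simplified, it amounts to the sketch
below (out_w and out_h are stand-in names for however the real code stores
the transformed size):

static int config_output_props(AVFilterLink *outlink)
{
  /* get our context pointers */
    AVFilterContext *ctx = outlink->src;
    vfpParams *pfp = ctx->priv;
  /* the transform may resize the frame, so advertise the computed
     output dimensions on the output link (field names are stand-ins) */
    outlink->w = pfp->out_w;
    outlink->h = pfp->out_h;
    return 0;
}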

Thanks, Tom