On quartidi 14 Fructidor, year CCXXIV, Paul B Mahol wrote:
> the filter frame multithreading would just, internally in the filter
> context, cache frames; once enough frames are in the cache, call the
> workers and be done, repeat. At EOF, call the workers on the remaining
> frames in the cache.

I do not know how much thought you have already given to it, but I am
pretty sure it is nowhere near that simple with the current architecture.

In the meantime, I finally got the non-recursive version passing FATE. Here
are the raw patches, so that people can get an idea of what this is all
about. There is still a lot of cleanup and documentation to do, as you can see.

Regards,

-- 
  Nicolas George
From b73206d61b94f5b3c2cd854d901c2a59c423bcde Mon Sep 17 00:00:00 2001
From: Nicolas George <geo...@nsup.org>
Date: Tue, 30 Aug 2016 20:12:20 +0200
Subject: [PATCH 1/4] fate/colorkey: disable audio stream.

The test is not supposed to cover audio.
Also, the result of using -vframes along with an audio stream depends
on the exact order in which frames are processed by the filters; that
is too strong a constraint to guarantee.

Signed-off-by: Nicolas George <geo...@nsup.org>
---
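Note (not part of the commit): only the combination of -vframes with a
second, audio stream is order-sensitive; with audio disabled, the video
frame count no longer depends on how the filter graph interleaves the two
streams. For a quick local check, something along these lines (file names
are hypothetical) is enough:

    ffmpeg -i input.mpg -an -vframes 10 -f framecrc -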
 tests/fate/ffmpeg.mak                 | 2 +-
 tests/ref/fate/ffmpeg-filter_colorkey | 9 ---------
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/tests/fate/ffmpeg.mak b/tests/fate/ffmpeg.mak
index 3b91c12..60f1303 100644
--- a/tests/fate/ffmpeg.mak
+++ b/tests/fate/ffmpeg.mak
@@ -20,7 +20,7 @@ fate-ffmpeg-filter_complex: CMD = framecrc -filter_complex color=d=1:r=5 -fflags
 
 FATE_SAMPLES_FFMPEG-$(CONFIG_COLORKEY_FILTER) += fate-ffmpeg-filter_colorkey
 fate-ffmpeg-filter_colorkey: tests/data/filtergraphs/colorkey
-fate-ffmpeg-filter_colorkey: CMD = framecrc -idct simple -fflags +bitexact -flags +bitexact  -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/cavs/cavs.mpg -fflags +bitexact -flags +bitexact -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/lena.pnm -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/colorkey -sws_flags +accurate_rnd+bitexact -fflags +bitexact -flags +bitexact -qscale 2 -vframes 10
+fate-ffmpeg-filter_colorkey: CMD = framecrc -idct simple -fflags +bitexact -flags +bitexact  -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/cavs/cavs.mpg -fflags +bitexact -flags +bitexact -sws_flags +accurate_rnd+bitexact -i $(TARGET_SAMPLES)/lena.pnm -an -filter_complex_script $(TARGET_PATH)/tests/data/filtergraphs/colorkey -sws_flags +accurate_rnd+bitexact -fflags +bitexact -flags +bitexact -qscale 2 -vframes 10
 
 FATE_FFMPEG-$(CONFIG_COLOR_FILTER) += fate-ffmpeg-lavfi
 fate-ffmpeg-lavfi: CMD = framecrc -lavfi color=d=1:r=5 -fflags +bitexact
diff --git a/tests/ref/fate/ffmpeg-filter_colorkey b/tests/ref/fate/ffmpeg-filter_colorkey
index 9fbdfeb..effc13b 100644
--- a/tests/ref/fate/ffmpeg-filter_colorkey
+++ b/tests/ref/fate/ffmpeg-filter_colorkey
@@ -3,17 +3,8 @@
 #codec_id 0: rawvideo
 #dimensions 0: 720x576
 #sar 0: 0/1
-#tb 1: 1/48000
-#media_type 1: audio
-#codec_id 1: pcm_s16le
-#sample_rate 1: 48000
-#channel_layout 1: 3
 0,          0,          0,        1,   622080, 0x4e30accb
-1,          0,          0,     1152,     4608, 0x00000000
-1,       1152,       1152,     1152,     4608, 0xbca29063
 0,          1,          1,        1,   622080, 0x7d941c14
-1,       2304,       2304,     1152,     4608, 0x6e70df10
-1,       3456,       3456,     1152,     4608, 0x95e6a535
 0,          2,          2,        1,   622080, 0xf7451c5b
 0,          3,          3,        1,   622080, 0xb2c74319
 0,          4,          4,        1,   622080, 0xc9b80b79
-- 
2.9.3

From b55d3b23665663ef61435c19b4a722740e048284 Mon Sep 17 00:00:00 2001
From: Nicolas George <geo...@nsup.org>
Date: Tue, 30 Aug 2016 15:28:41 +0200
Subject: [PATCH 2/4] lavfi: split frame_count between input and output.

AVFilterLink.frame_count is supposed to count the number of frames
that were passed on the link, but with min_samples, that number is
not always the same for the source and destination filters.
With the addition of a FIFO on the link, the difference will become
more significant.

Split the variable in two: frame_count_in counts the number of
frames that entered the link; frame_count_out counts the number
of frames that were sent to the destination filter.

Signed-off-by: Nicolas George <geo...@nsup.org>
---
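Note (not part of the commit message): a minimal sketch of how the two
counters are meant to be read after this patch, assuming the usual
libavfilter internal headers. The filter itself is hypothetical; only the
two fields come from the patch. An input pad reads frame_count_out (frames
already delivered to this filter), while the filter reads frame_count_in on
its outlink (frames it has already sent).

    /* Hypothetical filter_frame() callback, for illustration only. */
    static int filter_frame(AVFilterLink *inlink, AVFrame *in)
    {
        AVFilterLink *outlink = inlink->dst->outputs[0];
        int64_t n_in  = inlink->frame_count_out;  /* frames received so far */
        int64_t n_out = outlink->frame_count_in;  /* frames sent so far */

        av_log(inlink->dst, AV_LOG_DEBUG, "n_in=%"PRId64" n_out=%"PRId64"\n",
               n_in, n_out);
        return ff_filter_frame(outlink, in);
    }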
 libavfilter/af_ashowinfo.c   |  2 +-
 libavfilter/af_volume.c      |  2 +-
 libavfilter/asrc_sine.c      |  2 +-
 libavfilter/avf_showfreqs.c  |  4 ++--
 libavfilter/avfilter.c       |  5 +++--
 libavfilter/avfilter.h       |  2 +-
 libavfilter/f_loop.c         |  2 +-
 libavfilter/f_metadata.c     |  4 ++--
 libavfilter/f_select.c       |  2 +-
 libavfilter/f_streamselect.c |  2 +-
 libavfilter/vf_bbox.c        |  2 +-
 libavfilter/vf_blackdetect.c |  2 +-
 libavfilter/vf_blend.c       |  2 +-
 libavfilter/vf_crop.c        |  2 +-
 libavfilter/vf_decimate.c    |  2 +-
 libavfilter/vf_detelecine.c  |  2 +-
 libavfilter/vf_drawtext.c    |  4 ++--
 libavfilter/vf_eq.c          |  2 +-
 libavfilter/vf_fade.c        |  8 ++++----
 libavfilter/vf_fieldhint.c   | 14 +++++++-------
 libavfilter/vf_fieldmatch.c  |  6 +++---
 libavfilter/vf_framestep.c   |  2 +-
 libavfilter/vf_geq.c         |  2 +-
 libavfilter/vf_hue.c         |  2 +-
 libavfilter/vf_overlay.c     |  2 +-
 libavfilter/vf_paletteuse.c  |  2 +-
 libavfilter/vf_perspective.c |  4 ++--
 libavfilter/vf_rotate.c      |  2 +-
 libavfilter/vf_showinfo.c    |  2 +-
 libavfilter/vf_swaprect.c    |  2 +-
 libavfilter/vf_telecine.c    |  2 +-
 libavfilter/vf_tinterlace.c  |  4 ++--
 libavfilter/vf_vignette.c    |  2 +-
 libavfilter/vf_zoompan.c     |  6 +++---
 libavfilter/vsrc_mptestsrc.c |  2 +-
 35 files changed, 55 insertions(+), 54 deletions(-)

diff --git a/libavfilter/af_ashowinfo.c b/libavfilter/af_ashowinfo.c
index ca33add..ba600cb 100644
--- a/libavfilter/af_ashowinfo.c
+++ b/libavfilter/af_ashowinfo.c
@@ -206,7 +206,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *buf)
            "n:%"PRId64" pts:%s pts_time:%s pos:%"PRId64" "
            "fmt:%s channels:%d chlayout:%s rate:%d nb_samples:%d "
            "checksum:%08"PRIX32" ",
-           inlink->frame_count,
+           inlink->frame_count_out,
            av_ts2str(buf->pts), av_ts2timestr(buf->pts, &inlink->time_base),
            av_frame_get_pkt_pos(buf),
            av_get_sample_fmt_name(buf->format), av_frame_get_channels(buf), chlayout_str,
diff --git a/libavfilter/af_volume.c b/libavfilter/af_volume.c
index 4d6b916..6813403 100644
--- a/libavfilter/af_volume.c
+++ b/libavfilter/af_volume.c
@@ -393,7 +393,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *buf)
     }
     vol->var_values[VAR_PTS] = TS2D(buf->pts);
     vol->var_values[VAR_T  ] = TS2T(buf->pts, inlink->time_base);
-    vol->var_values[VAR_N  ] = inlink->frame_count;
+    vol->var_values[VAR_N  ] = inlink->frame_count_out;
 
     pos = av_frame_get_pkt_pos(buf);
     vol->var_values[VAR_POS] = pos == -1 ? NAN : pos;
diff --git a/libavfilter/asrc_sine.c b/libavfilter/asrc_sine.c
index 2a2f3c3..ff77526 100644
--- a/libavfilter/asrc_sine.c
+++ b/libavfilter/asrc_sine.c
@@ -219,7 +219,7 @@ static int request_frame(AVFilterLink *outlink)
     SineContext *sine = outlink->src->priv;
     AVFrame *frame;
     double values[VAR_VARS_NB] = {
-        [VAR_N]   = outlink->frame_count,
+        [VAR_N]   = outlink->frame_count_in,
         [VAR_PTS] = sine->pts,
         [VAR_T]   = sine->pts * av_q2d(outlink->time_base),
         [VAR_TB]  = av_q2d(outlink->time_base),
diff --git a/libavfilter/avf_showfreqs.c b/libavfilter/avf_showfreqs.c
index e2a923b..21735ed 100644
--- a/libavfilter/avf_showfreqs.c
+++ b/libavfilter/avf_showfreqs.c
@@ -326,12 +326,12 @@ static inline void plot_freq(ShowFreqsContext *s, int ch,
 
     switch (s->avg) {
     case 0:
-        y = s->avg_data[ch][f] = !outlink->frame_count ? y : FFMIN(avg, y);
+        y = s->avg_data[ch][f] = !outlink->frame_count_in ? y : FFMIN(avg, y);
         break;
     case 1:
         break;
     default:
-        s->avg_data[ch][f] = avg + y * (y - avg) / (FFMIN(outlink->frame_count + 1, s->avg) * y);
+        s->avg_data[ch][f] = avg + y * (y - avg) / (FFMIN(outlink->frame_count_in + 1, s->avg) * y);
         y = s->avg_data[ch][f];
         break;
     }
diff --git a/libavfilter/avfilter.c b/libavfilter/avfilter.c
index b236535..ccbe4d9 100644
--- a/libavfilter/avfilter.c
+++ b/libavfilter/avfilter.c
@@ -1120,7 +1120,7 @@ static int ff_filter_frame_framed(AVFilterLink *link, AVFrame *frame)
     pts = out->pts;
     if (dstctx->enable_str) {
         int64_t pos = av_frame_get_pkt_pos(out);
-        dstctx->var_values[VAR_N] = link->frame_count;
+        dstctx->var_values[VAR_N] = link->frame_count_out;
         dstctx->var_values[VAR_T] = pts == AV_NOPTS_VALUE ? NAN : pts * av_q2d(link->time_base);
         dstctx->var_values[VAR_W] = link->w;
         dstctx->var_values[VAR_H] = link->h;
@@ -1132,7 +1132,7 @@ static int ff_filter_frame_framed(AVFilterLink *link, AVFrame *frame)
             filter_frame = default_filter_frame;
     }
     ret = filter_frame(link, out);
-    link->frame_count++;
+    link->frame_count_out++;
     ff_update_link_current_pts(link, pts);
     return ret;
 
@@ -1221,6 +1221,7 @@ int ff_filter_frame(AVFilterLink *link, AVFrame *frame)
     }
 
     link->frame_wanted_out = 0;
+    link->frame_count_in++;
     /* Go directly to actual filtering if possible */
     if (link->type == AVMEDIA_TYPE_AUDIO &&
         link->min_samples &&
diff --git a/libavfilter/avfilter.h b/libavfilter/avfilter.h
index 15d00f7..d21b144 100644
--- a/libavfilter/avfilter.h
+++ b/libavfilter/avfilter.h
@@ -533,7 +533,7 @@ struct AVFilterLink {
     /**
      * Number of past frames sent through the link.
      */
-    int64_t frame_count;
+    int64_t frame_count_in, frame_count_out;
 
     /**
      * A pointer to a FFVideoFramePool struct.
diff --git a/libavfilter/f_loop.c b/libavfilter/f_loop.c
index 00e0215..69bfb10 100644
--- a/libavfilter/f_loop.c
+++ b/libavfilter/f_loop.c
@@ -298,7 +298,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
     LoopContext *s = ctx->priv;
     int ret = 0;
 
-    if (inlink->frame_count >= s->start && s->size > 0 && s->loop != 0) {
+    if (inlink->frame_count_out >= s->start && s->size > 0 && s->loop != 0) {
         if (s->nb_frames < s->size) {
             if (!s->nb_frames)
                 s->start_pts = frame->pts;
diff --git a/libavfilter/f_metadata.c b/libavfilter/f_metadata.c
index 188f0b6..ef9f077 100644
--- a/libavfilter/f_metadata.c
+++ b/libavfilter/f_metadata.c
@@ -315,14 +315,14 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
     case METADATA_PRINT:
         if (!s->key && e) {
             s->print(ctx, "frame:%-4"PRId64" pts:%-7s pts_time:%-7s\n",
-                     inlink->frame_count, av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base));
+                     inlink->frame_count_out, av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base));
             s->print(ctx, "%s=%s\n", e->key, e->value);
             while ((e = av_dict_get(metadata, "", e, AV_DICT_IGNORE_SUFFIX)) != NULL) {
                 s->print(ctx, "%s=%s\n", e->key, e->value);
             }
         } else if (e && e->value && (!s->value || (e->value && s->compare(s, e->value, s->value)))) {
             s->print(ctx, "frame:%-4"PRId64" pts:%-7s pts_time:%-7s\n",
-                     inlink->frame_count, av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base));
+                     inlink->frame_count_out, av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base));
             s->print(ctx, "%s=%s\n", s->key, e->value);
         }
         return ff_filter_frame(outlink, frame);
diff --git a/libavfilter/f_select.c b/libavfilter/f_select.c
index 52f474e..03c1c0f 100644
--- a/libavfilter/f_select.c
+++ b/libavfilter/f_select.c
@@ -318,7 +318,7 @@ static void select_frame(AVFilterContext *ctx, AVFrame *frame)
     if (isnan(select->var_values[VAR_START_T]))
         select->var_values[VAR_START_T] = TS2D(frame->pts) * av_q2d(inlink->time_base);
 
-    select->var_values[VAR_N  ] = inlink->frame_count;
+    select->var_values[VAR_N  ] = inlink->frame_count_out;
     select->var_values[VAR_PTS] = TS2D(frame->pts);
     select->var_values[VAR_T  ] = TS2D(frame->pts) * av_q2d(inlink->time_base);
     select->var_values[VAR_POS] = av_frame_get_pkt_pos(frame) == -1 ? NAN : av_frame_get_pkt_pos(frame);
diff --git a/libavfilter/f_streamselect.c b/libavfilter/f_streamselect.c
index 03cedba..1a517bf 100644
--- a/libavfilter/f_streamselect.c
+++ b/libavfilter/f_streamselect.c
@@ -72,7 +72,7 @@ static int process_frame(FFFrameSync *fs)
                 AVFrame *out;
 
                 if (s->is_audio && s->last_pts[j] == in[j]->pts &&
-                    ctx->outputs[i]->frame_count > 0)
+                    ctx->outputs[i]->frame_count_in > 0)
                     continue;
                 out = av_frame_clone(in[j]);
                 if (!out)
diff --git a/libavfilter/vf_bbox.c b/libavfilter/vf_bbox.c
index e92c3b4..86054b2 100644
--- a/libavfilter/vf_bbox.c
+++ b/libavfilter/vf_bbox.c
@@ -80,7 +80,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
     h = box.y2 - box.y1 + 1;
 
     av_log(ctx, AV_LOG_INFO,
-           "n:%"PRId64" pts:%s pts_time:%s", inlink->frame_count,
+           "n:%"PRId64" pts:%s pts_time:%s", inlink->frame_count_out,
            av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base));
 
     if (has_bbox) {
diff --git a/libavfilter/vf_blackdetect.c b/libavfilter/vf_blackdetect.c
index fbe3d10..0f6adf4 100644
--- a/libavfilter/vf_blackdetect.c
+++ b/libavfilter/vf_blackdetect.c
@@ -155,7 +155,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *picref)
 
     av_log(ctx, AV_LOG_DEBUG,
            "frame:%"PRId64" picture_black_ratio:%f pts:%s t:%s type:%c\n",
-           inlink->frame_count, picture_black_ratio,
+           inlink->frame_count_out, picture_black_ratio,
            av_ts2str(picref->pts), av_ts2timestr(picref->pts, &inlink->time_base),
            av_get_picture_type_char(picref->pict_type));
 
diff --git a/libavfilter/vf_blend.c b/libavfilter/vf_blend.c
index 2731ec8..a3235e6 100644
--- a/libavfilter/vf_blend.c
+++ b/libavfilter/vf_blend.c
@@ -352,7 +352,7 @@ static int filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs)
     uint8_t *dst    = td->dst->data[td->plane];
     double values[VAR_VARS_NB];
 
-    values[VAR_N]  = td->inlink->frame_count;
+    values[VAR_N]  = td->inlink->frame_count_out;
     values[VAR_T]  = td->dst->pts == AV_NOPTS_VALUE ? NAN : td->dst->pts * av_q2d(td->inlink->time_base);
     values[VAR_W]  = td->w;
     values[VAR_H]  = td->h;
diff --git a/libavfilter/vf_crop.c b/libavfilter/vf_crop.c
index bcdbb8c..85ea892 100644
--- a/libavfilter/vf_crop.c
+++ b/libavfilter/vf_crop.c
@@ -255,7 +255,7 @@ static int filter_frame(AVFilterLink *link, AVFrame *frame)
     frame->width  = s->w;
     frame->height = s->h;
 
-    s->var_values[VAR_N] = link->frame_count;
+    s->var_values[VAR_N] = link->frame_count_out;
     s->var_values[VAR_T] = frame->pts == AV_NOPTS_VALUE ?
         NAN : frame->pts * av_q2d(link->time_base);
     s->var_values[VAR_POS] = av_frame_get_pkt_pos(frame) == -1 ?
diff --git a/libavfilter/vf_decimate.c b/libavfilter/vf_decimate.c
index 39c3331..1fb242a 100644
--- a/libavfilter/vf_decimate.c
+++ b/libavfilter/vf_decimate.c
@@ -223,7 +223,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
                 av_frame_free(&frame);
                 frame = dm->clean_src[i];
             }
-            frame->pts = av_rescale_q(outlink->frame_count, dm->ts_unit, (AVRational){1,1}) +
+            frame->pts = av_rescale_q(outlink->frame_count_in, dm->ts_unit, (AVRational){1,1}) +
                          (dm->start_pts == AV_NOPTS_VALUE ? 0 : dm->start_pts);
             ret = ff_filter_frame(outlink, frame);
             if (ret < 0)
diff --git a/libavfilter/vf_detelecine.c b/libavfilter/vf_detelecine.c
index 9a7b462..0d5f88d 100644
--- a/libavfilter/vf_detelecine.c
+++ b/libavfilter/vf_detelecine.c
@@ -335,7 +335,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *inpicref)
 
         av_frame_copy_props(frame, inpicref);
         frame->pts = ((s->start_time == AV_NOPTS_VALUE) ? 0 : s->start_time) +
-                     av_rescale(outlink->frame_count, s->ts_unit.num,
+                     av_rescale(outlink->frame_count_in, s->ts_unit.num,
                                 s->ts_unit.den);
         ret = ff_filter_frame(outlink, frame);
     }
diff --git a/libavfilter/vf_drawtext.c b/libavfilter/vf_drawtext.c
index 214aef0..a0e77ad 100644
--- a/libavfilter/vf_drawtext.c
+++ b/libavfilter/vf_drawtext.c
@@ -1184,7 +1184,7 @@ static int draw_text(AVFilterContext *ctx, AVFrame *frame,
 
     if (s->tc_opt_string) {
         char tcbuf[AV_TIMECODE_STR_SIZE];
-        av_timecode_make_string(&s->tc, tcbuf, inlink->frame_count);
+        av_timecode_make_string(&s->tc, tcbuf, inlink->frame_count_out);
         av_bprint_clear(bp);
         av_bprintf(bp, "%s%s", s->text, tcbuf);
     }
@@ -1345,7 +1345,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
 #endif
     }
 
-    s->var_values[VAR_N] = inlink->frame_count+s->start_number;
+    s->var_values[VAR_N] = inlink->frame_count_out + s->start_number;
     s->var_values[VAR_T] = frame->pts == AV_NOPTS_VALUE ?
         NAN : frame->pts * av_q2d(inlink->time_base);
 
diff --git a/libavfilter/vf_eq.c b/libavfilter/vf_eq.c
index 5ecdb31..c450d5e 100644
--- a/libavfilter/vf_eq.c
+++ b/libavfilter/vf_eq.c
@@ -265,7 +265,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     av_frame_copy_props(out, in);
     desc = av_pix_fmt_desc_get(inlink->format);
 
-    eq->var_values[VAR_N]   = inlink->frame_count;
+    eq->var_values[VAR_N]   = inlink->frame_count_out;
     eq->var_values[VAR_POS] = pos == -1 ? NAN : pos;
     eq->var_values[VAR_T]   = TS2T(in->pts, inlink->time_base);
 
diff --git a/libavfilter/vf_fade.c b/libavfilter/vf_fade.c
index 0496645..c30c41d 100644
--- a/libavfilter/vf_fade.c
+++ b/libavfilter/vf_fade.c
@@ -300,7 +300,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
     if (s->fade_state == VF_FADE_WAITING) {
         s->factor=0;
         if (frame_timestamp >= s->start_time/(double)AV_TIME_BASE
-            && inlink->frame_count >= s->start_frame) {
+            && inlink->frame_count_out >= s->start_frame) {
             // Time to start fading
             s->fade_state = VF_FADE_FADING;
 
@@ -311,15 +311,15 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
 
             // Save start frame in case we are starting based on time and fading based on frames
             if (s->start_time != 0 && s->start_frame == 0) {
-                s->start_frame = inlink->frame_count;
+                s->start_frame = inlink->frame_count_out;
             }
         }
     }
     if (s->fade_state == VF_FADE_FADING) {
         if (s->duration == 0) {
             // Fading based on frame count
-            s->factor = (inlink->frame_count - s->start_frame) * s->fade_per_frame;
-            if (inlink->frame_count > s->start_frame + s->nb_frames) {
+            s->factor = (inlink->frame_count_out - s->start_frame) * s->fade_per_frame;
+            if (inlink->frame_count_out > s->start_frame + s->nb_frames) {
                 s->fade_state = VF_FADE_DONE;
             }
 
diff --git a/libavfilter/vf_fieldhint.c b/libavfilter/vf_fieldhint.c
index 2b845e7..26551ce 100644
--- a/libavfilter/vf_fieldhint.c
+++ b/libavfilter/vf_fieldhint.c
@@ -147,22 +147,22 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
             }
             switch (s->mode) {
             case 0:
-                if (tf > outlink->frame_count + 1 || tf < FFMAX(0, outlink->frame_count - 1) ||
-                    bf > outlink->frame_count + 1 || bf < FFMAX(0, outlink->frame_count - 1)) {
-                    av_log(ctx, AV_LOG_ERROR, "Out of range frames %"PRId64" and/or %"PRId64" on line %"PRId64" for %"PRId64". input frame.\n", tf, bf, s->line, inlink->frame_count);
+                if (tf > outlink->frame_count_in + 1 || tf < FFMAX(0, outlink->frame_count_in - 1) ||
+                    bf > outlink->frame_count_in + 1 || bf < FFMAX(0, outlink->frame_count_in - 1)) {
+                    av_log(ctx, AV_LOG_ERROR, "Out of range frames %"PRId64" and/or %"PRId64" on line %"PRId64" for %"PRId64". input frame.\n", tf, bf, s->line, inlink->frame_count_out);
                     return AVERROR_INVALIDDATA;
                 }
                 break;
             case 1:
                 if (tf > 1 || tf < -1 ||
                     bf > 1 || bf < -1) {
-                    av_log(ctx, AV_LOG_ERROR, "Out of range %"PRId64" and/or %"PRId64" on line %"PRId64" for %"PRId64". input frame.\n", tf, bf, s->line, inlink->frame_count);
+                    av_log(ctx, AV_LOG_ERROR, "Out of range %"PRId64" and/or %"PRId64" on line %"PRId64" for %"PRId64". input frame.\n", tf, bf, s->line, inlink->frame_count_out);
                     return AVERROR_INVALIDDATA;
                 }
             };
             break;
         } else {
-            av_log(ctx, AV_LOG_ERROR, "Missing entry for %"PRId64". input frame.\n", inlink->frame_count);
+            av_log(ctx, AV_LOG_ERROR, "Missing entry for %"PRId64". input frame.\n", inlink->frame_count_out);
             return AVERROR_INVALIDDATA;
         }
     }
@@ -174,8 +174,8 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 
     switch (s->mode) {
     case 0:
-        top    = s->frame[tf - outlink->frame_count + 1];
-        bottom = s->frame[bf - outlink->frame_count + 1];
+        top    = s->frame[tf - outlink->frame_count_in + 1];
+        bottom = s->frame[bf - outlink->frame_count_in + 1];
         break;
     case 1:
         top    = s->frame[1 + tf];
diff --git a/libavfilter/vf_fieldmatch.c b/libavfilter/vf_fieldmatch.c
index e155712..54a2c7a 100644
--- a/libavfilter/vf_fieldmatch.c
+++ b/libavfilter/vf_fieldmatch.c
@@ -740,7 +740,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 
     /* scene change check */
     if (fm->combmatch == COMBMATCH_SC) {
-        if (fm->lastn == outlink->frame_count - 1) {
+        if (fm->lastn == outlink->frame_count_in - 1) {
             if (fm->lastscdiff > fm->scthresh)
                 sc = 1;
         } else if (luma_abs_diff(fm->prv, fm->src) > fm->scthresh) {
@@ -748,7 +748,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
         }
 
         if (!sc) {
-            fm->lastn = outlink->frame_count;
+            fm->lastn = outlink->frame_count_in;
             fm->lastscdiff = luma_abs_diff(fm->src, fm->nxt);
             sc = fm->lastscdiff > fm->scthresh;
         }
@@ -807,7 +807,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     dst->interlaced_frame = combs[match] >= fm->combpel;
     if (dst->interlaced_frame) {
         av_log(ctx, AV_LOG_WARNING, "Frame #%"PRId64" at %s is still interlaced\n",
-               outlink->frame_count, av_ts2timestr(in->pts, &inlink->time_base));
+               outlink->frame_count_in, av_ts2timestr(in->pts, &inlink->time_base));
         dst->top_field_first = field;
     }
 
diff --git a/libavfilter/vf_framestep.c b/libavfilter/vf_framestep.c
index 6f198b8..8102e7c 100644
--- a/libavfilter/vf_framestep.c
+++ b/libavfilter/vf_framestep.c
@@ -63,7 +63,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *ref)
 {
     FrameStepContext *framestep = inlink->dst->priv;
 
-    if (!(inlink->frame_count % framestep->frame_step)) {
+    if (!(inlink->frame_count_out % framestep->frame_step)) {
         return ff_filter_frame(inlink->dst->outputs[0], ref);
     } else {
         av_frame_free(&ref);
diff --git a/libavfilter/vf_geq.c b/libavfilter/vf_geq.c
index 88d3b75..9d26f54 100644
--- a/libavfilter/vf_geq.c
+++ b/libavfilter/vf_geq.c
@@ -208,7 +208,7 @@ static int geq_filter_frame(AVFilterLink *inlink, AVFrame *in)
     AVFilterLink *outlink = inlink->dst->outputs[0];
     AVFrame *out;
     double values[VAR_VARS_NB] = {
-        [VAR_N] = inlink->frame_count,
+        [VAR_N] = inlink->frame_count_out,
         [VAR_T] = in->pts == AV_NOPTS_VALUE ? NAN : in->pts * av_q2d(inlink->time_base),
     };
 
diff --git a/libavfilter/vf_hue.c b/libavfilter/vf_hue.c
index b5d7213..0d2862f 100644
--- a/libavfilter/vf_hue.c
+++ b/libavfilter/vf_hue.c
@@ -318,7 +318,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *inpic)
         av_frame_copy_props(outpic, inpic);
     }
 
-    hue->var_values[VAR_N]   = inlink->frame_count;
+    hue->var_values[VAR_N]   = inlink->frame_count_out;
     hue->var_values[VAR_T]   = TS2T(inpic->pts, inlink->time_base);
     hue->var_values[VAR_PTS] = TS2D(inpic->pts);
 
diff --git a/libavfilter/vf_overlay.c b/libavfilter/vf_overlay.c
index c33b35d..36659ba 100644
--- a/libavfilter/vf_overlay.c
+++ b/libavfilter/vf_overlay.c
@@ -594,7 +594,7 @@ static AVFrame *do_blend(AVFilterContext *ctx, AVFrame *mainpic,
     if (s->eval_mode == EVAL_MODE_FRAME) {
         int64_t pos = av_frame_get_pkt_pos(mainpic);
 
-        s->var_values[VAR_N] = inlink->frame_count;
+        s->var_values[VAR_N] = inlink->frame_count_out;
         s->var_values[VAR_T] = mainpic->pts == AV_NOPTS_VALUE ?
             NAN : mainpic->pts * av_q2d(inlink->time_base);
         s->var_values[VAR_POS] = pos == -1 ? NAN : pos;
diff --git a/libavfilter/vf_paletteuse.c b/libavfilter/vf_paletteuse.c
index dece05a..602e694 100644
--- a/libavfilter/vf_paletteuse.c
+++ b/libavfilter/vf_paletteuse.c
@@ -887,7 +887,7 @@ static AVFrame *apply_palette(AVFilterLink *inlink, AVFrame *in)
     }
     memcpy(out->data[1], s->palette, AVPALETTE_SIZE);
     if (s->calc_mean_err)
-        debug_mean_error(s, in, out, inlink->frame_count);
+        debug_mean_error(s, in, out, inlink->frame_count_out);
     av_frame_free(&in);
     return out;
 }
diff --git a/libavfilter/vf_perspective.c b/libavfilter/vf_perspective.c
index 287db68..d590cfa 100644
--- a/libavfilter/vf_perspective.c
+++ b/libavfilter/vf_perspective.c
@@ -135,8 +135,8 @@ static int calc_persp_luts(AVFilterContext *ctx, AVFilterLink *inlink)
     double (*ref)[2]      = s->ref;
 
     double values[VAR_VARS_NB] = { [VAR_W] = inlink->w, [VAR_H] = inlink->h,
-                                   [VAR_IN] = inlink->frame_count  + 1,
-                                   [VAR_ON] = outlink->frame_count + 1 };
+                                   [VAR_IN] = inlink->frame_count_out + 1,
+                                   [VAR_ON] = outlink->frame_count_in + 1 };
     const int h = values[VAR_H];
     const int w = values[VAR_W];
     double x0, x1, x2, x3, x4, x5, x6, x7, x8, q;
diff --git a/libavfilter/vf_rotate.c b/libavfilter/vf_rotate.c
index 42e725a..371ff7f 100644
--- a/libavfilter/vf_rotate.c
+++ b/libavfilter/vf_rotate.c
@@ -522,7 +522,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     }
     av_frame_copy_props(out, in);
 
-    rot->var_values[VAR_N] = inlink->frame_count;
+    rot->var_values[VAR_N] = inlink->frame_count_out;
     rot->var_values[VAR_T] = TS2T(in->pts, inlink->time_base);
     rot->angle = res = av_expr_eval(rot->angle_expr, rot->var_values, rot);
 
diff --git a/libavfilter/vf_showinfo.c b/libavfilter/vf_showinfo.c
index 5146995..83d941c 100644
--- a/libavfilter/vf_showinfo.c
+++ b/libavfilter/vf_showinfo.c
@@ -107,7 +107,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
            "n:%4"PRId64" pts:%7s pts_time:%-7s pos:%9"PRId64" "
            "fmt:%s sar:%d/%d s:%dx%d i:%c iskey:%d type:%c "
            "checksum:%08"PRIX32" plane_checksum:[%08"PRIX32,
-           inlink->frame_count,
+           inlink->frame_count_out,
            av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base), av_frame_get_pkt_pos(frame),
            desc->name,
            frame->sample_aspect_ratio.num, frame->sample_aspect_ratio.den,
diff --git a/libavfilter/vf_swaprect.c b/libavfilter/vf_swaprect.c
index a467627..a0aa59d 100644
--- a/libavfilter/vf_swaprect.c
+++ b/libavfilter/vf_swaprect.c
@@ -97,7 +97,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     var_values[VAR_A]   = (float) inlink->w / inlink->h;
     var_values[VAR_SAR] = inlink->sample_aspect_ratio.num ? av_q2d(inlink->sample_aspect_ratio) : 1;
     var_values[VAR_DAR] = var_values[VAR_A] * var_values[VAR_SAR];
-    var_values[VAR_N]   = inlink->frame_count;
+    var_values[VAR_N]   = inlink->frame_count_out;
     var_values[VAR_T]   = in->pts == AV_NOPTS_VALUE ? NAN : in->pts * av_q2d(inlink->time_base);
     var_values[VAR_POS] = av_frame_get_pkt_pos(in) == -1 ? NAN : av_frame_get_pkt_pos(in);
 
diff --git a/libavfilter/vf_telecine.c b/libavfilter/vf_telecine.c
index 58babca..35f382e 100644
--- a/libavfilter/vf_telecine.c
+++ b/libavfilter/vf_telecine.c
@@ -244,7 +244,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *inpicref)
 
         av_frame_copy_props(frame, inpicref);
         frame->pts = ((s->start_time == AV_NOPTS_VALUE) ? 0 : s->start_time) +
-                     av_rescale(outlink->frame_count, s->ts_unit.num,
+                     av_rescale(outlink->frame_count_in, s->ts_unit.num,
                                 s->ts_unit.den);
         ret = ff_filter_frame(outlink, frame);
     }
diff --git a/libavfilter/vf_tinterlace.c b/libavfilter/vf_tinterlace.c
index 8a796ce..80146a9 100644
--- a/libavfilter/vf_tinterlace.c
+++ b/libavfilter/vf_tinterlace.c
@@ -280,12 +280,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *picref)
         copy_picture_field(tinterlace, out->data, out->linesize,
                            (const uint8_t **)cur->data, cur->linesize,
                            inlink->format, inlink->w, inlink->h,
-                           FIELD_UPPER_AND_LOWER, 1, tinterlace->mode == MODE_MERGEX2 ? inlink->frame_count & 1 ? FIELD_LOWER : FIELD_UPPER : FIELD_UPPER, tinterlace->flags);
+                           FIELD_UPPER_AND_LOWER, 1, tinterlace->mode == MODE_MERGEX2 ? inlink->frame_count_out & 1 ? FIELD_LOWER : FIELD_UPPER : FIELD_UPPER, tinterlace->flags);
         /* write even frame lines into the lower field of the new frame */
         copy_picture_field(tinterlace, out->data, out->linesize,
                            (const uint8_t **)next->data, next->linesize,
                            inlink->format, inlink->w, inlink->h,
-                           FIELD_UPPER_AND_LOWER, 1, tinterlace->mode == MODE_MERGEX2 ? inlink->frame_count & 1 ? FIELD_UPPER : FIELD_LOWER : FIELD_LOWER, tinterlace->flags);
+                           FIELD_UPPER_AND_LOWER, 1, tinterlace->mode == MODE_MERGEX2 ? inlink->frame_count_out & 1 ? FIELD_UPPER : FIELD_LOWER : FIELD_LOWER, tinterlace->flags);
         if (tinterlace->mode != MODE_MERGEX2)
             av_frame_free(&tinterlace->next);
         break;
diff --git a/libavfilter/vf_vignette.c b/libavfilter/vf_vignette.c
index 1d66c50..94b6c6f 100644
--- a/libavfilter/vf_vignette.c
+++ b/libavfilter/vf_vignette.c
@@ -165,7 +165,7 @@ static void update_context(VignetteContext *s, AVFilterLink *inlink, AVFrame *fr
     int dst_linesize = s->fmap_linesize;
 
     if (frame) {
-        s->var_values[VAR_N]   = inlink->frame_count;
+        s->var_values[VAR_N]   = inlink->frame_count_out;
         s->var_values[VAR_T]   = TS2T(frame->pts, inlink->time_base);
         s->var_values[VAR_PTS] = TS2D(frame->pts);
     } else {
diff --git a/libavfilter/vf_zoompan.c b/libavfilter/vf_zoompan.c
index 7a71503..136d6c8 100644
--- a/libavfilter/vf_zoompan.c
+++ b/libavfilter/vf_zoompan.c
@@ -149,7 +149,7 @@ static int output_single_frame(AVFilterContext *ctx, AVFrame *in, double *var_va
     var_values[VAR_PDURATION] = s->prev_nb_frames;
     var_values[VAR_TIME] = pts * av_q2d(outlink->time_base);
     var_values[VAR_FRAME] = i;
-    var_values[VAR_ON] = outlink->frame_count + 1;
+    var_values[VAR_ON] = outlink->frame_count_in + 1;
     if ((ret = av_expr_parse_and_eval(zoom, s->zoom_expr_str,
                                       var_names, var_values,
                                       NULL, NULL, NULL, NULL, NULL, 0, ctx)) < 0)
@@ -235,8 +235,8 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
     s->var_values[VAR_IN_H]  = s->var_values[VAR_IH] = in->height;
     s->var_values[VAR_OUT_W] = s->var_values[VAR_OW] = s->w;
     s->var_values[VAR_OUT_H] = s->var_values[VAR_OH] = s->h;
-    s->var_values[VAR_IN]    = inlink->frame_count + 1;
-    s->var_values[VAR_ON]    = outlink->frame_count + 1;
+    s->var_values[VAR_IN]    = inlink->frame_count_out + 1;
+    s->var_values[VAR_ON]    = outlink->frame_count_in + 1;
     s->var_values[VAR_PX]    = s->x;
     s->var_values[VAR_PY]    = s->y;
     s->var_values[VAR_X]     = 0;
diff --git a/libavfilter/vsrc_mptestsrc.c b/libavfilter/vsrc_mptestsrc.c
index 3c75821..c5fdea7 100644
--- a/libavfilter/vsrc_mptestsrc.c
+++ b/libavfilter/vsrc_mptestsrc.c
@@ -303,7 +303,7 @@ static int request_frame(AVFilterLink *outlink)
     AVFrame *picref;
     int w = WIDTH, h = HEIGHT,
         cw = AV_CEIL_RSHIFT(w, test->hsub), ch = AV_CEIL_RSHIFT(h, test->vsub);
-    unsigned int frame = outlink->frame_count;
+    unsigned int frame = outlink->frame_count_in;
     enum test_type tt = test->test;
     int i;
 
-- 
2.9.3

From cf5e6606b1e31d79d39e47bb5fa3bba9709689d9 Mon Sep 17 00:00:00 2001
From: Nicolas George <geo...@nsup.org>
Date: Thu, 3 Dec 2015 20:05:14 +0100
Subject: [PATCH 3/4] lavfi: add FFFrameQueue API.

Signed-off-by: Nicolas George <geo...@nsup.org>
---
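Usage note (not part of the commit): a rough sketch of how the new API is
driven, assuming the usual libavfilter headers. next_frame() and consume()
are hypothetical helpers, and the second argument of ff_framequeue_init()
is still unused in this patch (it gets wired up later in the series).

    FFFrameQueue fq = { 0 };   /* the struct is expected to start zeroed, like links */
    AVFrame *frame;
    int ret;

    ff_framequeue_init(&fq, NULL);
    while ((frame = next_frame())) {             /* hypothetical producer */
        ret = ff_framequeue_add(&fq, frame);     /* queue owns the frame on success */
        if (ret < 0) {
            av_frame_free(&frame);
            break;
        }
    }
    while (fq.queued)                            /* drain in FIFO order */
        consume(ff_framequeue_take(&fq));        /* hypothetical consumer, frees it */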
 libavfilter/framequeue.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
 libavfilter/framequeue.h | 57 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 131 insertions(+)
 create mode 100644 libavfilter/framequeue.c
 create mode 100644 libavfilter/framequeue.h

diff --git a/libavfilter/framequeue.c b/libavfilter/framequeue.c
new file mode 100644
index 0000000..ac226de
--- /dev/null
+++ b/libavfilter/framequeue.c
@@ -0,0 +1,74 @@
+/*
+ * Generic frame queue
+ * Copyright (c) 2015 Nicolas George
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public License
+ * as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "framequeue.h"
+
+static inline FFFrameBucket *bucket(FFFrameQueue *fq, size_t idx)
+{
+    return &fq->queue[(fq->tail + idx) & (fq->allocated - 1)];
+}
+
+void ff_framequeue_init(FFFrameQueue *fq, FFGlobalFrameQueue *gfq)
+{
+    fq->queue = &fq->first_bucket;
+    fq->allocated = 1;
+}
+
+int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame)
+{
+    FFFrameBucket *b;
+
+    if (fq->queued == fq->allocated) {
+        if (fq->allocated == 1) {
+            size_t na = 8;
+            FFFrameBucket *nq = av_realloc_array(NULL, na, sizeof(*nq));
+            if (!nq)
+                return AVERROR(ENOMEM);
+            nq[0] = fq->queue[0];
+            fq->queue = nq;
+            fq->allocated = na;
+        } else {
+            size_t na = fq->allocated << 1;
+            FFFrameBucket *nq = av_realloc_array(fq->queue, na, sizeof(*nq));
+            if (!nq)
+                return AVERROR(ENOMEM);
+            if (fq->tail + fq->queued > fq->allocated)
+                memmove(nq + fq->allocated, nq,
+                        (fq->tail + fq->queued - fq->allocated) * sizeof(*nq));
+            fq->queue = nq;
+            fq->allocated = na;
+        }
+    }
+    b = bucket(fq, fq->queued);
+    b->frame = frame;
+    fq->queued++;
+    return 0;
+}
+
+AVFrame *ff_framequeue_take(FFFrameQueue *fq)
+{
+    FFFrameBucket *b = bucket(fq, 0);
+
+    av_assert1(fq->queued);
+    fq->queued--;
+    fq->tail = (fq->tail + 1) & (fq->allocated - 1); /* power-of-two ring */
+    return b->frame;
+}
diff --git a/libavfilter/framequeue.h b/libavfilter/framequeue.h
new file mode 100644
index 0000000..68da1b7
--- /dev/null
+++ b/libavfilter/framequeue.h
@@ -0,0 +1,57 @@
+/*
+ * Generic frame queue
+ * Copyright (c) 2015 Nicolas George
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public License
+ * as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVFILTER_FRAMEQUEUE_H
+#define AVFILTER_FRAMEQUEUE_H
+
+/**
+ * FFFrameQueue: simple AVFrame queue API
+ *
+ * Note: this API is not thread-safe. Concurrent access to the same queue
+ * must be protected by a mutex or some other synchronization mechanism.
+ */
+
+#include "avfilter.h"
+#include "libavutil/avassert.h"
+
+typedef struct FFFrameBucket {
+    AVFrame *frame;
+} FFFrameBucket;
+
+typedef struct FFGlobalFrameQueue {
+    char dummy; /* ISO C does not allow empty structs */
+} FFGlobalFrameQueue;
+
+typedef struct FFFrameQueue {
+    FFFrameBucket *queue;
+    size_t allocated;
+    size_t tail;
+    size_t queued;
+    FFFrameBucket first_bucket;
+} FFFrameQueue;
+
+void ff_framequeue_init(FFFrameQueue *fq, FFGlobalFrameQueue *gfq);
+
+int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame);
+
+AVFrame *ff_framequeue_take(FFFrameQueue *fq);
+
+#endif /* AVFILTER_FRAMEQUEUE_H */
-- 
2.9.3

From 7e936674b5d8c7e6f3606f094720fe4137ab5b0e Mon Sep 17 00:00:00 2001
From: Nicolas George <geo...@nsup.org>
Date: Sun, 3 Jan 2016 15:44:42 +0100
Subject: [PATCH 4/4] WIP: non-recursive filter_frame

Signed-off-by: Nicolas George <geo...@nsup.org>
---
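Note (not part of the commit): the core idea is that filters no longer call
each other recursively. ff_filter_frame() and the status setters only queue
work and raise the peer filter's "ready" level, and a scheduler repeatedly
activates the readiest filter. Purely as an illustration (the real driving
code is in the avfiltergraph.c and buffersink.c changes), a top-level round
could look like the sketch below; run_one_round() is a hypothetical helper:

    /* Illustration only: activate the filter with the highest ready level. */
    static int run_one_round(AVFilterGraph *graph)
    {
        AVFilterContext *best = NULL;
        unsigned i;

        for (i = 0; i < graph->nb_filters; i++) {
            AVFilterContext *f = graph->filters[i];
            if (f->ready && (!best || f->ready > best->ready))
                best = f;
        }
        if (!best)
            return AVERROR(EAGAIN);  /* nothing ready: feed input or drain output */
        return ff_filter_activate(best);  /* resets ready and runs one step */
    }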
 ffmpeg.c                       |   1 +
 libavfilter/Makefile           |   1 +
 libavfilter/avfilter.c         | 467 ++++++++++++++++++++++++++++++++++-------
 libavfilter/avfilter.h         |  14 +-
 libavfilter/avfiltergraph.c    |  50 ++---
 libavfilter/buffersink.c       |  16 +-
 libavfilter/buffersrc.c        |   1 +
 libavfilter/f_interleave.c     |   5 +-
 libavfilter/framequeue.c       |  49 ++++-
 libavfilter/framequeue.h       |  40 +++-
 libavfilter/internal.h         |  10 +
 libavfilter/private_fields.h   |  43 ++++
 libavfilter/split.c            |   3 +-
 libavfilter/vf_extractplanes.c |   3 +-
 tests/ref/fate/source          |   1 +
 15 files changed, 571 insertions(+), 133 deletions(-)
 create mode 100644 libavfilter/private_fields.h

diff --git a/ffmpeg.c b/ffmpeg.c
index 3229823..ef07b4a 100644
--- a/ffmpeg.c
+++ b/ffmpeg.c
@@ -4087,6 +4087,7 @@ static int transcode_step(void)
     ost = choose_output();
     if (!ost) {
         if (got_eagain()) {
+            av_log(0, 16, "no OST and EAGAIN, will sleep\n");
             reset_eagain();
             av_usleep(10000);
             return 0;
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 81b40ac..568fdef 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -18,6 +18,7 @@ OBJS = allfilters.o                                                     \
        fifo.o                                                           \
        formats.o                                                        \
        framepool.o                                                      \
+       framequeue.o                                                     \
        graphdump.o                                                      \
        graphparser.o                                                    \
        opencl_allkernels.o                                              \
diff --git a/libavfilter/avfilter.c b/libavfilter/avfilter.c
index ccbe4d9..9679994 100644
--- a/libavfilter/avfilter.c
+++ b/libavfilter/avfilter.c
@@ -34,6 +34,7 @@
 #include "libavutil/rational.h"
 #include "libavutil/samplefmt.h"
 
+#include "private_fields.h"
 #include "audio.h"
 #include "avfilter.h"
 #include "formats.h"
@@ -135,6 +136,10 @@ int avfilter_link(AVFilterContext *src, unsigned srcpad,
 {
     AVFilterLink *link;
 
+    av_assert0(src->graph);
+    av_assert0(dst->graph);
+    av_assert0(src->graph == dst->graph);
+
     if (src->nb_outputs <= srcpad || dst->nb_inputs <= dstpad ||
         src->outputs[srcpad]      || dst->inputs[dstpad])
         return AVERROR(EINVAL);
@@ -147,6 +152,9 @@ int avfilter_link(AVFilterContext *src, unsigned srcpad,
         return AVERROR(EINVAL);
     }
 
+#ifndef AVFILTER_LINK_INTERNAL_FIELDS
+# error AVFilterLink internal fields not defined
+#endif
     link = av_mallocz(sizeof(*link));
     if (!link)
         return AVERROR(ENOMEM);
@@ -160,6 +168,7 @@ int avfilter_link(AVFilterContext *src, unsigned srcpad,
     link->type    = src->output_pads[srcpad].type;
     av_assert0(AV_PIX_FMT_NONE == -1 && AV_SAMPLE_FMT_NONE == -1);
     link->format  = -1;
+    ff_framequeue_init(&link->fifo, &src->graph->internal->frame_queues);
 
     return 0;
 }
@@ -182,14 +191,25 @@ int avfilter_link_get_channels(AVFilterLink *link)
 
 void ff_avfilter_link_set_in_status(AVFilterLink *link, int status, int64_t pts)
 {
-    ff_avfilter_link_set_out_status(link, status, pts);
+    if (link->status_in == status)
+        return;
+    av_assert0(!link->status_in);
+    link->status_in = status;
+    link->status_in_pts = pts;
+    link->frame_wanted_out = 0;
+    link->frame_blocked_in = 0; // XXX CIG check
+    // XXX CIG must clear frame_blocked_in on outputs using the old API
+    ff_filter_schedule(link->dst, "in_status");
 }
 
 void ff_avfilter_link_set_out_status(AVFilterLink *link, int status, int64_t pts)
 {
-    link->status = status;
-    link->frame_wanted_in = link->frame_wanted_out = 0;
-    ff_update_link_current_pts(link, pts);
+    av_assert0(!link->frame_wanted_out);
+    av_assert0(!link->status_out);
+    link->status_out = status;
+    if (pts != AV_NOPTS_VALUE)
+        ff_update_link_current_pts(link, pts);
+    ff_filter_schedule(link->src, "out_status");
 }
 
 void avfilter_link_set_closed(AVFilterLink *link, int closed)
@@ -370,10 +390,22 @@ int ff_request_frame(AVFilterLink *link)
 {
     FF_TPRINTF_START(NULL, request_frame); ff_tlog_link(NULL, link, 1);
 
-    if (link->status)
-        return link->status;
-    link->frame_wanted_in = 1;
+    if (link->status_out)
+        return link->status_out;
+    /* XXX only for old API */
+    if (ff_framequeue_queued_frames(&link->fifo) > 0 &&
+        ff_framequeue_queued_samples(&link->fifo) >= link->min_samples) {
+        ff_filter_schedule(link->dst, "request_frame_available");
+        av_assert0(link->dst->ready);
+        return 0;
+    }
+    if (link->status_in) {
+        //av_assert0(!"request_frame must close"); // XXX CIG
+        link->status_out = link->status_in;
+        return link->status_out;
+    }
     link->frame_wanted_out = 1;
+    ff_filter_schedule(link->src, "request_frame");
     return 0;
 }
 
@@ -382,22 +414,16 @@ int ff_request_frame_to_filter(AVFilterLink *link)
     int ret = -1;
 
     FF_TPRINTF_START(NULL, request_frame_to_filter); ff_tlog_link(NULL, link, 1);
-    link->frame_wanted_in = 0;
+    link->frame_blocked_in = 1;
     if (link->srcpad->request_frame)
         ret = link->srcpad->request_frame(link);
     else if (link->src->inputs[0])
         ret = ff_request_frame(link->src->inputs[0]);
-    if (ret == AVERROR_EOF && link->partial_buf) {
-        AVFrame *pbuf = link->partial_buf;
-        link->partial_buf = NULL;
-        ret = ff_filter_frame_framed(link, pbuf);
-        ff_avfilter_link_set_in_status(link, AVERROR_EOF, AV_NOPTS_VALUE);
-        link->frame_wanted_out = 0;
-        return ret;
-    }
     if (ret < 0) {
-        if (ret != AVERROR(EAGAIN) && ret != link->status)
+        if (ret != AVERROR(EAGAIN) && ret != link->status_in)
             ff_avfilter_link_set_in_status(link, ret, AV_NOPTS_VALUE);
+        if (ret == AVERROR_EOF)
+            ret = 0;
     }
     return ret;
 }
@@ -1056,10 +1082,12 @@ static int ff_filter_frame_framed(AVFilterLink *link, AVFrame *frame)
     AVFilterCommand *cmd= link->dst->command_queue;
     int64_t pts;
 
-    if (link->status) {
+#if 0
+    if (link->status_in) {
         av_frame_free(&frame);
-        return link->status;
+        return link->status_in;
     }
+#endif
 
     if (!(filter_frame = dst->filter_frame))
         filter_frame = default_filter_frame;
@@ -1142,52 +1170,9 @@ fail:
     return ret;
 }
 
-static int ff_filter_frame_needs_framing(AVFilterLink *link, AVFrame *frame)
-{
-    int insamples = frame->nb_samples, inpos = 0, nb_samples;
-    AVFrame *pbuf = link->partial_buf;
-    int nb_channels = av_frame_get_channels(frame);
-    int ret = 0;
-
-    /* Handle framing (min_samples, max_samples) */
-    while (insamples) {
-        if (!pbuf) {
-            AVRational samples_tb = { 1, link->sample_rate };
-            pbuf = ff_get_audio_buffer(link, link->partial_buf_size);
-            if (!pbuf) {
-                av_log(link->dst, AV_LOG_WARNING,
-                       "Samples dropped due to memory allocation failure.\n");
-                return 0;
-            }
-            av_frame_copy_props(pbuf, frame);
-            pbuf->pts = frame->pts;
-            if (pbuf->pts != AV_NOPTS_VALUE)
-                pbuf->pts += av_rescale_q(inpos, samples_tb, link->time_base);
-            pbuf->nb_samples = 0;
-        }
-        nb_samples = FFMIN(insamples,
-                           link->partial_buf_size - pbuf->nb_samples);
-        av_samples_copy(pbuf->extended_data, frame->extended_data,
-                        pbuf->nb_samples, inpos,
-                        nb_samples, nb_channels, link->format);
-        inpos                   += nb_samples;
-        insamples               -= nb_samples;
-        pbuf->nb_samples += nb_samples;
-        if (pbuf->nb_samples >= link->min_samples) {
-            ret = ff_filter_frame_framed(link, pbuf);
-            pbuf = NULL;
-        } else {
-            if (link->frame_wanted_out)
-                link->frame_wanted_in = 1;
-        }
-    }
-    av_frame_free(&frame);
-    link->partial_buf = pbuf;
-    return ret;
-}
-
 int ff_filter_frame(AVFilterLink *link, AVFrame *frame)
 {
+    int ret;
     FF_TPRINTF_START(NULL, filter_frame); ff_tlog_link(NULL, link, 1); ff_tlog(NULL, " "); ff_tlog_ref(NULL, frame, 1);
 
     /* Consistency checks */
@@ -1220,23 +1205,361 @@ int ff_filter_frame(AVFilterLink *link, AVFrame *frame)
         }
     }
 
+    link->frame_blocked_in = 0;
     link->frame_wanted_out = 0;
     link->frame_count_in++;
-    /* Go directly to actual filtering if possible */
-    if (link->type == AVMEDIA_TYPE_AUDIO &&
-        link->min_samples &&
-        (link->partial_buf ||
-         frame->nb_samples < link->min_samples ||
-         frame->nb_samples > link->max_samples)) {
-        return ff_filter_frame_needs_framing(link, frame);
-    } else {
-        return ff_filter_frame_framed(link, frame);
+    ret = ff_framequeue_add(&link->fifo, frame);
+    if (ret < 0) {
+        av_frame_free(&frame);
+        return ret;
     }
+    ff_filter_schedule(link->dst, "filter_frame");
+    return 0;
+
 error:
     av_frame_free(&frame);
     return AVERROR_PATCHWELCOME;
 }
 
+static int take_samples(AVFilterLink *link, unsigned min, unsigned max,
+                        AVFrame **rframe)
+{
+    AVFrame *frame0, *frame, *buf;
+    unsigned nb_samples, nb_frames, i, p;
+    int ret;
+
+    /* Note: this function relies on no format changes. */
+    if (ff_framequeue_queued_samples(&link->fifo) < min)
+        return 0;
+    frame0 = frame = ff_framequeue_peek(&link->fifo, 0);
+    if (frame->nb_samples >= min && frame->nb_samples < max) {
+        *rframe = ff_framequeue_take(&link->fifo);
+        return 1;
+    }
+    nb_frames = 0;
+    nb_samples = 0;
+    while (1) {
+        if (nb_samples + frame->nb_samples > max) {
+            if (nb_samples < min)
+                nb_samples = max;
+            break;
+        }
+        nb_samples += frame->nb_samples;
+        nb_frames++;
+        if (nb_frames == ff_framequeue_queued_frames(&link->fifo))
+            break;
+        frame = ff_framequeue_peek(&link->fifo, nb_frames);
+    }
+
+    buf = ff_get_audio_buffer(link, nb_samples);
+    if (!buf)
+        return AVERROR(ENOMEM);
+    ret = av_frame_copy_props(buf, frame0);
+    if (ret < 0) {
+        av_frame_free(&buf);
+        return ret;
+    }
+    buf->pts = frame0->pts;
+
+    p = 0;
+    for (i = 0; i < nb_frames; i++) {
+        frame = ff_framequeue_take(&link->fifo);
+        av_samples_copy(buf->extended_data, frame->extended_data, p, 0,
+                        frame->nb_samples, link->channels, link->format);
+        p += frame->nb_samples;
+    }
+    if (p < nb_samples) {
+        unsigned n = nb_samples - p;
+        frame = ff_framequeue_peek(&link->fifo, 0);
+        av_samples_copy(buf->extended_data, frame->extended_data, p, 0, n,
+                        link->channels, link->format);
+        frame->nb_samples -= n;
+        av_samples_copy(frame->extended_data, frame->extended_data, 0, n,
+                        frame->nb_samples, link->channels, link->format);
+        if (frame->pts != AV_NOPTS_VALUE)
+            frame->pts += av_rescale_q(n, av_make_q(1, link->sample_rate), link->time_base);
+        ff_framequeue_update_peeked(&link->fifo, 0);
+        ff_framequeue_skip_samples(&link->fifo, n);
+    }
+
+    *rframe = buf;
+    return 1;
+}
+
+int ff_filter_frame_to_filter(AVFilterLink *link)
+{
+    AVFrame *frame;
+    AVFilterContext *dst = link->dst;
+    unsigned i;
+    int ret;
+
+    av_assert1(ff_framequeue_queued_frames(&link->fifo));
+    if (link->min_samples) {
+        int min = link->min_samples;
+        if (link->status_in)
+            min = FFMIN(min, ff_framequeue_queued_samples(&link->fifo));
+        ret = take_samples(link, min, link->max_samples, &frame);
+        if (!ret) {
+            ret = ff_request_frame(link);
+            av_assert1(!ret);
+        }
+        if (ret <= 0)
+            return ret;
+    } else {
+        frame = ff_framequeue_take(&link->fifo);
+    }
+    /* The filter will soon have received a new frame, which may allow it to
+       produce one or more frames: unblock its outputs. */
+    for (i = 0; i < dst->nb_outputs; i++)
+        dst->outputs[i]->frame_blocked_in = 0;
+    ret = ff_filter_frame_framed(link, frame);
+    if (ret < 0 && ret != link->status_out) {
+        ff_avfilter_link_set_out_status(link, ret, AV_NOPTS_VALUE);
+    } else {
+        ff_filter_schedule(dst, "filter_frame_to_filter");
+    }
+    return ret;
+}
+
+void ff_dump_graph_scheduling(AVFilterGraph *graph, const char *tag)
+{
+    unsigned i, j;
+    static unsigned round = 0;
+
+    return;
+    av_log(0, 16, "Graph status (round_%d) for %s:\n", round++, tag);
+    for (i = 0; i < graph->nb_filters; i++) {
+        AVFilterContext *f = graph->filters[i];
+        av_log(0, 16, "  [R:%d] %s\n", f->ready, f->name);
+        for (j = 0; j < f->nb_inputs; j++) {
+            AVFilterLink *l = f->inputs[j];
+            av_log(0, 16, "    %s %s [%zd %ld/%d] %s  <- %s\n",
+                   l->frame_wanted_out ? "W" : "-",
+                   l->status_in ? av_err2str(l->status_in) : ".",
+                   ff_framequeue_queued_frames(&l->fifo),
+                   ff_framequeue_queued_samples(&l->fifo), l->min_samples,
+                   l->status_out ? av_err2str(l->status_out) : ".",
+                   l->src->name);
+        }
+        for (j = 0; j < f->nb_outputs; j++) {
+            AVFilterLink *l = f->outputs[j];
+            av_log(0, 16, "    %s -> %s\n",
+                   l->frame_blocked_in ? "B" : "-",
+                   l->dst->name);
+        }
+    }
+}
+
+void ff_filter_schedule(AVFilterContext *filter, const char *tag)
+{
+    unsigned ready = 0, i;
+
+    ff_dump_graph_scheduling(filter->graph, "filter_schedule");
+    for (i = 0; !ready && i < filter->nb_inputs; i++)
+        if (!filter->inputs[i]->frame_wanted_out &&
+            ff_framequeue_queued_frames(&filter->inputs[i]->fifo))
+            ready = 300;
+    for (i = 0; !ready && i < filter->nb_inputs; i++)
+        if (filter->inputs[i]->status_in != filter->inputs[i]->status_out)
+            ready = 200;
+    for (i = 0; !ready && i < filter->nb_outputs; i++)
+        if (filter->outputs[i]->frame_wanted_out &&
+            !filter->outputs[i]->frame_blocked_in)
+            ready = 100;
+    filter->ready = ready;
+}
+
+static int forward_status_change(AVFilterContext *filter, AVFilterLink *in)
+{
+    unsigned out = 0, progress = 0;
+    int ret;
+
+    av_assert0(!in->status_out);
+    if (!filter->nb_outputs) {
+        /* not necessary with the current API and sinks */
+        return 0;
+    }
+    while (!in->status_out) {
+        if (!filter->outputs[out]->status_in) {
+            progress++;
+            ret = ff_request_frame_to_filter(filter->outputs[out]);
+            filter->outputs[out]->frame_blocked_in = 0; // XXX CIG understand
+            if (ret < 0)
+                return ret;
+        }
+        if (++out == filter->nb_outputs) {
+            av_assert0(progress);
+            progress = 0;
+            out = 0;
+        }
+    }
+    ff_filter_schedule(filter, "forward_status_change");
+    return 0;
+}
+
+#define FFERROR_NOT_READY FFERRTAG('N','R','D','Y')
+
+static int ff_filter_activate_default(AVFilterContext *filter)
+{
+    unsigned i;
+
+    for (i = 0; i < filter->nb_inputs; i++) {
+        if (!filter->inputs[i]->frame_wanted_out &&
+            ff_framequeue_queued_frames(&filter->inputs[i]->fifo)) {
+            return ff_filter_frame_to_filter(filter->inputs[i]);
+        }
+    }
+    for (i = 0; i < filter->nb_outputs; i++) {
+        if (filter->outputs[i]->frame_wanted_out &&
+            !filter->outputs[i]->frame_blocked_in) {
+            return ff_request_frame_to_filter(filter->outputs[i]);
+        }
+    }
+    for (i = 0; i < filter->nb_inputs; i++) {
+        if (filter->inputs[i]->status_in && !filter->inputs[i]->status_out) {
+            if (ff_framequeue_queued_frames(&filter->inputs[i]->fifo)) {
+                // XXX CIG probably impossible: frame_wanted_out should be
+                // 0, and therefore caught by the first case
+                av_assert0(!"TODO");
+            } else {
+                return forward_status_change(filter, filter->inputs[i]);
+            }
+        }
+    }
+    return FFERROR_NOT_READY;
+}
+
+/*
+   Filter scheduling and activation
+
+   When a filter is activated, it must:
+   - if possible, output a frame;
+   - else, check outputs for wanted frames and forward the requests.
+
+   The following AVFilterLink fields are used for activation:
+
+   - frame_wanted_out:
+
+     This field indicates if a frame is needed on this input of the
+     destination filter. A positive value indicates that a frame is needed
+     to process queued frames or internal data or to satisfy the
+     application; a zero value indicates that a frame is not especially
+     needed but could be processed anyway; a negative value indicates that a
+     frame would just be queued.
+
+     It is set by filters using ff_request_frame() or ff_request_no_frame(),
+     when requested by the application through a specific API or when it is
+     set on one of the outputs.
+
+     It is cleared when a frame is sent from the source using
+     ff_filter_frame().
+
+     It is also cleared when a status change is sent from the source using
+     ff_avfilter_link_set_in_status(). XXX to implement
+
+   - frame_blocked_in:
+
+     This field means that the source filter can not generate a frame as is.
+
+     It is set by the framework on all outputs of a filter before activating it.
+
+     It is automatically cleared by ff_filter_frame().
+
+   - fifo:
+
+     Contains the frames queued on a filter input. If it contains frames and
+     frame_wanted_out is not set, then the filter can be activated. If that
+     results in the filter not being able to use these frames, the filter
+     must set frame_wanted_out to ask for more frames.
+
+   - status_in and status_in_pts:
+
+     Status (EOF or error code) of the link and timestamp of the status
+     change (in link time base, same as frames) as seen from the input of
+     the link. The status change is considered to happen after the frames
+     queued in the fifo.
+
+     It is set by the source filter using ff_avfilter_link_set_in_status().
+
+     If 
+     XXX
+
+   - status_out:
+
+     Status of the link as seen from the output of the link. The status
+     change is considered to have already happened.
+
+     It is set by the destination filter using
+     ff_avfilter_link_set_out_status().
+
+   A filter is ready if any of the following conditions is true on any input
+   or output, listed in descending order of priority:
+
+   - in->fifo contains frames and in->frame_wanted_out is not set:
+     the filter must process frames or set in->frame_wanted_out.
+     XXX several inputs
+
+   - in->status_in is set but not in->status_out:
+     the filter XXX
+
+   - out->frame_blocked_in is cleared and out->frame_wanted_out is set
+     (the filter can produce a frame with its internal data).
+
+   Examples of scenarios to consider:
+
+   - buffersrc: activate if frame_wanted_out is set, to notify the application;
+     activate when the application adds a frame, to push it immediately.
+
+   - testsrc: activate only if frame_wanted_out is set, to produce and push a frame.
+
+   - concat (not at stitch points): can process a frame on any output.
+     Activate if frame_wanted_out is set on an output to forward the request
+     to the corresponding input. Activate when a frame is present on an input
+     to process it immediately.
+
+   - framesync: needs at least one frame on each input; extra frames on the
+     wrong input will accumulate. When a frame is first added on one input,
+     set frame_wanted_out<0 on it to avoid getting more (which would trigger
+     testsrc) and frame_wanted_out>0 on the other to allow processing it.
+
+   Activation of old filters:
+
+   In order to activate a filter implementing the legacy filter_frame() and
+   request_frame() methods, perform the first of the following actions that
+   is possible:
+
+   - If an input has frames in fifo and frame_wanted_out == 0, dequeue a
+     frame and call filter_frame().
+
+     Exception: 
+     XXX
+
+     Rationale: filter frames as soon as possible instead of leaving them
+     queued; frame_wanted_out < 0 is not possible since the old API does not
+     set it nor provide any similar feedback.
+
+   - If an output has frame_wanted_out > 0 and not frame_blocked_in, call
+     request_frame().
+
+     Rationale: checking frame_blocked_in is necessary to avoid requesting
+     repeatedly on a blocked input if another is not blocked (example:
+     [buffersrc1][testsrc1][buffersrc2][testsrc2]concat=v=2).
+
+     XXX needs_fifo
+
+ */
+
+int ff_filter_activate(AVFilterContext *filter)
+{
+    int ret;
+
+    filter->ready = 0;
+    ret = ff_filter_activate_default(filter);
+    if (ret == FFERROR_NOT_READY)
+        ret = 0;
+    return ret;
+}
+
 const AVClass *avfilter_get_class(void)
 {
     return &avfilter_class;
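
(Aside: the priority order documented above can be summarised independently of
the lavfi internals. Here is a minimal, self-contained sketch of the three
checks that ff_filter_activate_default() performs, using toy types — ToyLink
and ToyFilter are hypothetical stand-ins, not the real AVFilterLink and
AVFilterContext.)

/* Toy model of the readiness checks, in decreasing priority:
 * 1. an input has queued frames and frame_wanted_out == 0  -> filter them;
 * 2. an output wants a frame and its source is not blocked -> request one;
 * 3. an input has a pending status change                  -> forward it. */
#include <stddef.h>

typedef struct ToyLink {
    size_t queued;          /* frames in the fifo */
    int frame_wanted_out;   /* <0 / 0 / >0, as documented above */
    int frame_blocked_in;   /* the source cannot produce a frame right now */
    int status_in, status_out;
} ToyLink;

typedef struct ToyFilter {
    ToyLink **inputs;  unsigned nb_inputs;
    ToyLink **outputs; unsigned nb_outputs;
} ToyFilter;

enum { TOY_FILTER_FRAMES, TOY_REQUEST_FRAME, TOY_FORWARD_STATUS, TOY_NOT_READY };

int toy_activate_default(const ToyFilter *f)
{
    unsigned i;

    for (i = 0; i < f->nb_inputs; i++)
        if (!f->inputs[i]->frame_wanted_out && f->inputs[i]->queued)
            return TOY_FILTER_FRAMES;
    for (i = 0; i < f->nb_outputs; i++)
        if (f->outputs[i]->frame_wanted_out && !f->outputs[i]->frame_blocked_in)
            return TOY_REQUEST_FRAME;
    for (i = 0; i < f->nb_inputs; i++)
        if (f->inputs[i]->status_in && !f->inputs[i]->status_out)
            return TOY_FORWARD_STATUS;
    return TOY_NOT_READY;
}
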
diff --git a/libavfilter/avfilter.h b/libavfilter/avfilter.h
index d21b144..928b08d 100644
--- a/libavfilter/avfilter.h
+++ b/libavfilter/avfilter.h
@@ -368,6 +368,8 @@ struct AVFilterContext {
      * Overrides global number of threads set per filter graph.
      */
     int nb_threads;
+
+    unsigned ready;
 };
 
 /**
@@ -541,13 +543,6 @@ struct AVFilterLink {
     void *video_frame_pool;
 
     /**
-     * True if a frame is currently wanted on the input of this filter.
-     * Set when ff_request_frame() is called by the output,
-     * cleared when the request is handled or forwarded.
-     */
-    int frame_wanted_in;
-
-    /**
      * True if a frame is currently wanted on the output of this filter.
      * Set when ff_request_frame() is called by the output,
      * cleared when a frame is filtered.
@@ -559,6 +554,11 @@ struct AVFilterLink {
      * AVHWFramesContext describing the frames.
      */
     AVBufferRef *hw_frames_ctx;
+
+#ifdef AVFILTER_LINK_INTERNAL_FIELDS
+    AVFILTER_LINK_INTERNAL_FIELDS
+#endif
+
 };
 
 /**
diff --git a/libavfilter/avfiltergraph.c b/libavfilter/avfiltergraph.c
index 3af698d..e87ce01 100644
--- a/libavfilter/avfiltergraph.c
+++ b/libavfilter/avfiltergraph.c
@@ -32,6 +32,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 
+#include "private_fields.h"
 #include "avfilter.h"
 #include "formats.h"
 #include "internal.h"
@@ -87,6 +88,7 @@ AVFilterGraph *avfilter_graph_alloc(void)
 
     ret->av_class = &filtergraph_class;
     av_opt_set_defaults(ret);
+    ff_framequeue_global_init(&ret->internal->frame_queues);
 
     return ret;
 }
@@ -1377,7 +1379,6 @@ void ff_avfilter_graph_update_heap(AVFilterGraph *graph, AVFilterLink *link)
     heap_bubble_down(graph, link, link->age_index);
 }
 
-
 int avfilter_graph_request_oldest(AVFilterGraph *graph)
 {
     AVFilterLink *oldest = graph->sink_links[0];
@@ -1400,7 +1401,7 @@ int avfilter_graph_request_oldest(AVFilterGraph *graph)
     if (!graph->sink_links_count)
         return AVERROR_EOF;
     av_assert1(oldest->age_index >= 0);
-    while (oldest->frame_wanted_out) {
+    while (oldest->frame_wanted_out || oldest->dst->ready) {
         r = ff_filter_graph_run_once(graph);
         if (r < 0)
             return r;
@@ -1408,41 +1409,18 @@ int avfilter_graph_request_oldest(AVFilterGraph *graph)
     return 0;
 }
 
-static AVFilterLink *graph_run_once_find_filter(AVFilterGraph *graph)
-{
-    unsigned i, j;
-    AVFilterContext *f;
-
-    /* TODO: replace scanning the graph with a priority list */
-    for (i = 0; i < graph->nb_filters; i++) {
-        f = graph->filters[i];
-        for (j = 0; j < f->nb_outputs; j++)
-            if (f->outputs[j]->frame_wanted_in)
-                return f->outputs[j];
-    }
-    for (i = 0; i < graph->nb_filters; i++) {
-        f = graph->filters[i];
-        for (j = 0; j < f->nb_outputs; j++)
-            if (f->outputs[j]->frame_wanted_out)
-                return f->outputs[j];
-    }
-    return NULL;
-}
-
 int ff_filter_graph_run_once(AVFilterGraph *graph)
 {
-    AVFilterLink *link;
-    int ret;
-
-    link = graph_run_once_find_filter(graph);
-    if (!link) {
-        av_log(NULL, AV_LOG_WARNING, "Useless run of a filter graph\n");
+    AVFilterContext *filter;
+    unsigned i;
+
+    ff_dump_graph_scheduling(graph, "run_once");
+    av_assert0(graph->nb_filters);
+    filter = graph->filters[0];
+    for (i = 1; i < graph->nb_filters; i++)
+        if (graph->filters[i]->ready > filter->ready)
+            filter = graph->filters[i];
+    if (!filter->ready)
         return AVERROR(EAGAIN);
-    }
-    ret = ff_request_frame_to_filter(link);
-    if (ret == AVERROR_EOF)
-        /* local EOF will be forwarded through request_frame() /
-           set_status() until it reaches the sink */
-        ret = 0;
-    return ret < 0 ? ret : 1;
+    return ff_filter_activate(filter);
 }
diff --git a/libavfilter/buffersink.c b/libavfilter/buffersink.c
index 2feb56d..8dfb061 100644
--- a/libavfilter/buffersink.c
+++ b/libavfilter/buffersink.c
@@ -31,6 +31,7 @@
 #include "libavutil/mathematics.h"
 #include "libavutil/opt.h"
 
+#include "private_fields.h"
 #include "audio.h"
 #include "avfilter.h"
 #include "buffersink.h"
@@ -134,13 +135,20 @@ int attribute_align_arg av_buffersink_get_frame_flags(AVFilterContext *ctx, AVFr
 
     /* no picref available, fetch it from the filterchain */
     while (!av_fifo_size(buf->fifo)) {
-        if (inlink->status)
-            return inlink->status;
-        if (flags & AV_BUFFERSINK_FLAG_NO_REQUEST)
+        if (inlink->status_out)
+            return inlink->status_out;
+        if (flags & AV_BUFFERSINK_FLAG_NO_REQUEST) {
+            if (ff_framequeue_queued_frames(&inlink->fifo)) {
+                av_log(NULL, AV_LOG_ERROR, "NO_REQUEST with queued frames %llu, %llu/%d\n", (unsigned long long)ff_framequeue_queued_frames(&inlink->fifo), (unsigned long long)ff_framequeue_queued_samples(&inlink->fifo), inlink->min_samples);
+            }
+            if (inlink->status_in) {
+                av_log(NULL, AV_LOG_ERROR, "NO_REQUEST with status\n");
+            }
             return AVERROR(EAGAIN);
+        }
         if ((ret = ff_request_frame(inlink)) < 0)
             return ret;
-        while (inlink->frame_wanted_out) {
+        while (inlink->frame_wanted_out || ctx->ready) {
             ret = ff_filter_graph_run_once(ctx->graph);
             if (ret < 0)
                 return ret;
diff --git a/libavfilter/buffersrc.c b/libavfilter/buffersrc.c
index 9294811..7162336 100644
--- a/libavfilter/buffersrc.c
+++ b/libavfilter/buffersrc.c
@@ -184,6 +184,7 @@ static int av_buffersrc_add_frame_internal(AVFilterContext *ctx,
 
     if (!frame) {
         s->eof = 1;
+        ff_avfilter_link_set_in_status(ctx->outputs[0], AVERROR_EOF, AV_NOPTS_VALUE);
         return 0;
     } else if (s->eof)
         return AVERROR(EINVAL);
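
(Note that from the application's point of view the buffersrc/buffersink API
is unchanged: frames are still pushed with av_buffersrc_add_frame() — NULL to
signal EOF, which now also sets the link input status right away — and pulled
with av_buffersink_get_frame(). A minimal push/drain sketch, assuming src_ctx
and sink_ctx are already-configured buffersrc and buffersink contexts:)

#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Push one input frame (or NULL at EOF) and drain everything the graph
 * can currently produce. Error handling is reduced to returning the code. */
int push_and_drain(AVFilterContext *src_ctx, AVFilterContext *sink_ctx,
                   AVFrame *in, AVFrame *out)
{
    int ret = av_buffersrc_add_frame(src_ctx, in);   /* in == NULL => EOF */

    if (ret < 0)
        return ret;
    while ((ret = av_buffersink_get_frame(sink_ctx, out)) >= 0) {
        /* use the filtered frame in out, then recycle it */
        av_frame_unref(out);
    }
    /* AVERROR(EAGAIN): the graph needs more input; AVERROR_EOF: fully flushed */
    return ret == AVERROR(EAGAIN) ? 0 : ret;
}
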
diff --git a/libavfilter/f_interleave.c b/libavfilter/f_interleave.c
index 422f2bf..44ee11b 100644
--- a/libavfilter/f_interleave.c
+++ b/libavfilter/f_interleave.c
@@ -26,6 +26,7 @@
 #include "libavutil/avassert.h"
 #include "libavutil/avstring.h"
 #include "libavutil/opt.h"
+#include "private_fields.h"
 #include "avfilter.h"
 #include "bufferqueue.h"
 #include "formats.h"
@@ -59,7 +60,7 @@ inline static int push_frame(AVFilterContext *ctx)
     for (i = 0; i < ctx->nb_inputs; i++) {
         struct FFBufQueue *q = &s->queues[i];
 
-        if (!q->available && !ctx->inputs[i]->status)
+        if (!q->available && !ctx->inputs[i]->status_out)
             return 0;
         if (q->available) {
             frame = ff_bufqueue_peek(q, 0);
@@ -190,7 +191,7 @@ static int request_frame(AVFilterLink *outlink)
     int i, ret;
 
     for (i = 0; i < ctx->nb_inputs; i++) {
-        if (!s->queues[i].available && !ctx->inputs[i]->status) {
+        if (!s->queues[i].available && !ctx->inputs[i]->status_out) {
             ret = ff_request_frame(ctx->inputs[i]);
             if (ret != AVERROR_EOF)
                 return ret;
diff --git a/libavfilter/framequeue.c b/libavfilter/framequeue.c
index ac226de..b028a21 100644
--- a/libavfilter/framequeue.c
+++ b/libavfilter/framequeue.c
@@ -19,6 +19,7 @@
  * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
+#include "libavutil/avassert.h"
 #include "framequeue.h"
 
 static inline FFFrameBucket *bucket(FFFrameQueue *fq, size_t idx)
@@ -26,7 +27,24 @@ static inline FFFrameBucket *bucket(FFFrameQueue *fq, size_t idx)
     return &fq->queue[(fq->tail + idx) & (fq->allocated - 1)];
 }
 
-void ff_framequeue_init(FFFrameQueue *fq, FFGlobalFrameQueue *gfq)
+void ff_framequeue_global_init(FFFrameQueueGlobal *fqg)
+{
+}
+
+static void check_consistency(FFFrameQueue *fq)
+{
+#if ASSERT_LEVEL >= 2
+    uint64_t nb_samples = 0;
+    size_t i;
+
+    av_assert0(fq->queued == fq->total_frames_head - fq->total_frames_tail);
+    for (i = 0; i < fq->queued; i++)
+        nb_samples += bucket(fq, i)->frame->nb_samples;
+    av_assert0(nb_samples == fq->total_samples_head - fq->total_samples_tail);
+#endif
+}
+
+void ff_framequeue_init(FFFrameQueue *fq, FFFrameQueueGlobal *fqg)
 {
     fq->queue = &fq->first_bucket;
     fq->allocated = 1;
@@ -36,6 +54,7 @@ int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame)
 {
     FFFrameBucket *b;
 
+    check_consistency(fq);
     if (fq->queued == fq->allocated) {
         if (fq->allocated == 1) {
             size_t na = 8;
@@ -52,7 +71,7 @@ int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame)
                 return AVERROR(ENOMEM);
             if (fq->tail + fq->queued > fq->allocated)
                 memmove(nq + fq->allocated, nq,
-                        (fq->fail + fq->queued - fq->allocated) * sizeof(*nq));
+                        (fq->tail + fq->queued - fq->allocated) * sizeof(*nq));
             fq->queue = nq;
             fq->allocated = na;
         }
@@ -60,11 +79,35 @@ int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame)
     b = bucket(fq, fq->queued);
     b->frame = frame;
     fq->queued++;
+    fq->total_frames_head++;
+    fq->total_samples_head += frame->nb_samples;
+    check_consistency(fq);
     return 0;
 }
 
 AVFrame *ff_framequeue_take(FFFrameQueue *fq)
 {
+    FFFrameBucket *b;
+
+    check_consistency(fq);
+    av_assert1(fq->queued);
+    b = bucket(fq, 0);
+    fq->queued--;
+    fq->tail++;
+    fq->tail &= fq->allocated - 1;
+    fq->total_frames_tail++;
+    fq->total_samples_tail += b->frame->nb_samples;
+    check_consistency(fq);
+    return b->frame;
 }
 
-#endif /* AVFILTER_FRAMEQUEUE_H */
+AVFrame *ff_framequeue_peek(FFFrameQueue *fq, size_t idx)
+{
+    FFFrameBucket *b;
+
+    check_consistency(fq);
+    av_assert1(idx < fq->queued);
+    b = bucket(fq, idx);
+    check_consistency(fq);
+    return b->frame;
+}
diff --git a/libavfilter/framequeue.h b/libavfilter/framequeue.h
index 68da1b7..4040504 100644
--- a/libavfilter/framequeue.h
+++ b/libavfilter/framequeue.h
@@ -29,28 +29,54 @@
  * must be protected by a mutex or any synchronization mechanism.
  */
 
-#include "avfilter.h"
-#include "libavutil/avassert.h"
+#include "libavutil/frame.h"
 
 typedef struct FFFrameBucket {
     AVFrame *frame;
 } FFFrameBucket;
 
-typedef FFGlobalFrameQueue {
-} FFGlobalFrameQueue;
+typedef struct FFFrameQueueGlobal {
+} FFFrameQueueGlobal;
 
-struct FFFrameQueue {
+typedef struct FFFrameQueue {
     FFFrameBucket *queue;
     size_t allocated;
     size_t tail;
     size_t queued;
     FFFrameBucket first_bucket;
-};
+    uint64_t total_frames_head;
+    uint64_t total_frames_tail;
+    uint64_t total_samples_head;
+    uint64_t total_samples_tail;
+} FFFrameQueue;
 
-void ff_framequeue_init(FFFrameQueue *fq, FFGlobalFrameQueue *gfq);
+void ff_framequeue_global_init(FFFrameQueueGlobal *fqg);
+
+void ff_framequeue_init(FFFrameQueue *fq, FFFrameQueueGlobal *fqg);
 
 int ff_framequeue_add(FFFrameQueue *fq, AVFrame *frame);
 
 AVFrame *ff_framequeue_take(FFFrameQueue *fq);
 
+AVFrame *ff_framequeue_peek(FFFrameQueue *fq, size_t idx);
+
+static inline size_t ff_framequeue_queued_frames(const FFFrameQueue *fq)
+{
+    return fq->queued;
+}
+
+static inline uint64_t ff_framequeue_queued_samples(const FFFrameQueue *fq)
+{
+    return fq->total_samples_head - fq->total_samples_tail;
+}
+
+static inline void ff_framequeue_update_peeked(FFFrameQueue *fq, size_t idx)
+{
+}
+
+static inline void ff_framequeue_skip_samples(FFFrameQueue *fq, size_t n)
+{
+    fq->total_samples_tail += n;
+}
+
 #endif /* AVFILTER_FRAMEQUEUE_H */
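
(For reference, a sketch of how the FFFrameQueue API declared above is meant
to be used from inside libavfilter — linked with framequeue.c, error handling
omitted:)

#include "libavutil/avassert.h"
#include "libavutil/frame.h"
#include "framequeue.h"

void framequeue_example(FFFrameQueueGlobal *fqg)
{
    FFFrameQueue fq;
    AVFrame *in, *out;

    ff_framequeue_init(&fq, fqg);     /* starts with the embedded single bucket */

    in = av_frame_alloc();
    in->nb_samples = 1024;            /* pretend it is an audio frame */
    ff_framequeue_add(&fq, in);       /* the ring buffer grows on demand */

    if (ff_framequeue_queued_frames(&fq)) {
        out = ff_framequeue_peek(&fq, 0);   /* inspect without dequeuing */
        av_assert0(out == in);
        out = ff_framequeue_take(&fq);      /* dequeue the oldest frame */
        av_frame_free(&out);
    }
    /* the samples counters tell filters how much audio is still queued */
    av_assert0(ff_framequeue_queued_samples(&fq) == 0);
}
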
diff --git a/libavfilter/internal.h b/libavfilter/internal.h
index 3856012..d7ad99b 100644
--- a/libavfilter/internal.h
+++ b/libavfilter/internal.h
@@ -29,6 +29,7 @@
 #include "avfiltergraph.h"
 #include "formats.h"
 #include "framepool.h"
+#include "framequeue.h"
 #include "thread.h"
 #include "version.h"
 #include "video.h"
@@ -147,6 +148,7 @@ struct AVFilterPad {
 struct AVFilterGraphInternal {
     void *thread;
     avfilter_execute_func *thread_execute;
+    FFFrameQueueGlobal frame_queues;
 };
 
 struct AVFilterInternal {
@@ -336,6 +338,8 @@ int ff_request_frame(AVFilterLink *link);
 
 int ff_request_frame_to_filter(AVFilterLink *link);
 
+int ff_filter_frame_to_filter(AVFilterLink *link);
+
 #define AVFILTER_DEFINE_CLASS(fname)            \
     static const AVClass fname##_class = {      \
         .class_name = #fname,                   \
@@ -376,6 +380,10 @@ int ff_filter_frame(AVFilterLink *link, AVFrame *frame);
  */
 AVFilterContext *ff_filter_alloc(const AVFilter *filter, const char *inst_name);
 
+void ff_filter_schedule(AVFilterContext *filter, const char *tag);
+
+int ff_filter_activate(AVFilterContext *filter);
+
 /**
  * Remove a filter from a graph;
  */
@@ -408,4 +416,6 @@ static inline int ff_norm_qscale(int qscale, int type)
  */
 int ff_filter_get_nb_threads(AVFilterContext *ctx);
 
+void ff_dump_graph_scheduling(AVFilterGraph *graph, const char *tag);
+
 #endif /* AVFILTER_INTERNAL_H */
diff --git a/libavfilter/private_fields.h b/libavfilter/private_fields.h
new file mode 100644
index 0000000..a5df3da
--- /dev/null
+++ b/libavfilter/private_fields.h
@@ -0,0 +1,43 @@
+/*
+ * AVFilterLink private fields
+ * Copyright (c) 2015 Nicolas George
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public License
+ * as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "framequeue.h"
+
+#define AVFILTER_LINK_INTERNAL_FIELDS \
+\
+    FFFrameQueue fifo; \
+\
+    int frame_blocked_in; \
+\
+    /** \
+     * Link input status. \
+     */ \
+    int status_in; \
+    int64_t status_in_pts; \
+\
+    /** \
+     * Link output status. \
+     * If not zero, all attempts at request_frame will fail with the \
+     * corresponding code. \
+     */ \
+    int status_out; \
+ \
+/* define AVFILTER_LINK_INTERNAL_FIELDS */
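
(The trick in private_fields.h is plain macro injection: files inside
libavfilter include it before avfilter.h, so AVFILTER_LINK_INTERNAL_FIELDS is
defined and the extra members are appended to AVFilterLink, while user code
that includes only avfilter.h sees the unchanged public struct. A stripped-down
illustration of the pattern, with hypothetical toy names:)

#include <stdio.h>

/* playing the role of private_fields.h */
#define TOY_LINK_INTERNAL_FIELDS \
    int status_in;               \
    int status_out;

/* playing the role of the public avfilter.h */
typedef struct ToyPublicLink {
    int frame_wanted_out;
#ifdef TOY_LINK_INTERNAL_FIELDS
    TOY_LINK_INTERNAL_FIELDS
#endif
} ToyPublicLink;

int main(void)
{
    ToyPublicLink l = { .frame_wanted_out = 1, .status_in = 0, .status_out = 0 };

    /* this translation unit defined the macro, so the private fields exist */
    printf("internal view: sizeof = %zu, status_in = %d\n", sizeof l, l.status_in);
    return 0;
}

(A translation unit that never defines the macro gets the shorter struct;
presumably this is only safe because links are always allocated inside
libavfilter and the injected fields sit at the end of the structure.)
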
diff --git a/libavfilter/split.c b/libavfilter/split.c
index 6cf4ab1..89e3604 100644
--- a/libavfilter/split.c
+++ b/libavfilter/split.c
@@ -30,6 +30,7 @@
 #include "libavutil/mem.h"
 #include "libavutil/opt.h"
 
+#include "private_fields.h"
 #include "avfilter.h"
 #include "audio.h"
 #include "formats.h"
@@ -78,7 +79,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
     for (i = 0; i < ctx->nb_outputs; i++) {
         AVFrame *buf_out;
 
-        if (ctx->outputs[i]->status)
+        if (ctx->outputs[i]->status_in)
             continue;
         buf_out = av_frame_clone(frame);
         if (!buf_out) {
diff --git a/libavfilter/vf_extractplanes.c b/libavfilter/vf_extractplanes.c
index a23f838..c4b01e0 100644
--- a/libavfilter/vf_extractplanes.c
+++ b/libavfilter/vf_extractplanes.c
@@ -22,6 +22,7 @@
 #include "libavutil/imgutils.h"
 #include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
+#include "private_fields.h"
 #include "avfilter.h"
 #include "drawutils.h"
 #include "internal.h"
@@ -245,7 +246,7 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
         const int idx = s->map[i];
         AVFrame *out;
 
-        if (outlink->status)
+        if (outlink->status_in)
             continue;
 
         out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
diff --git a/tests/ref/fate/source b/tests/ref/fate/source
index 63ddd3f..7b35b03 100644
--- a/tests/ref/fate/source
+++ b/tests/ref/fate/source
@@ -27,3 +27,4 @@ compat/avisynth/windowsPorts/windows2linux.h
 compat/float/float.h
 compat/float/limits.h
 compat/nvenc/nvEncodeAPI.h
+libavfilter/private_fields.h
-- 
2.9.3
