On 08/25/2017 05:35 PM, wm4 wrote:
+static V4L2Buffer* context_ops_dequeue_v4l2buf(V4L2Context *ctx, unsigned int timeout)
+{
+    struct v4l2_plane planes[VIDEO_MAX_PLANES];
+    struct v4l2_buffer buf = { 0 };
+    V4L2Buffer* avbuf = NULL;
+    struct pollfd pfd = {
+        .events = POLLIN | POLLRDNORM | POLLPRI, /* default capture context */
+        .fd = ctx->fd,
+    };
+    int ret;
+
+    if (ctx->num_queued < ctx->min_queued_buffers)
+        return NULL;
+
+    if (V4L2_TYPE_IS_OUTPUT(ctx->type))
+        pfd.events = POLLOUT | POLLWRNORM;
+
+    for (;;) {
+        ret = poll(&pfd, 1, timeout);
Why does this have a timeout? This makes no sense: either a frame will
be decoded/packet can be encoded, or you need to give it more input
frames or packets, or an error happened.


Yes, it makes no sense when we consider only the capture buffers in a decoding scenario (those don't time out, for the reasons you pointed out); however, we should also look at how the output buffers are handled in the same decoding case (see context_ops_getfree_v4l2buf).

Every time a new output buffer is required to feed data to the driver, we poll (with a timeout) until we have reclaimed _all_ the buffers that are no longer needed by the driver.
I don't think this is an issue.


_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel