On Sun, 11 Oct 2015 21:16:27 +0200 Michael Niedermayer <michae...@gmx.at> wrote:
> From: Michael Niedermayer <mich...@niedermayer.cc>
>
> Signed-off-by: Michael Niedermayer <mich...@niedermayer.cc>
> ---
>  libavcodec/h264_slice.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c
> index cce97d9..daa3737 100644
> --- a/libavcodec/h264_slice.c
> +++ b/libavcodec/h264_slice.c
> @@ -985,6 +985,10 @@ static enum AVPixelFormat get_pixel_format(H264Context *h, int force_callback)
>      for (i=0; choices[i] != AV_PIX_FMT_NONE; i++)
>          if (non_j_pixfmt(choices[i]) == non_j_pixfmt(h->avctx->pix_fmt) && !force_callback)
>              return choices[i];
> +
> +    if (!force_callback)
> +        return AV_PIX_FMT_NONE;
> +
>      return ff_thread_get_format(h->avctx, choices);
>  }

So if I see this right, the whole purpose of this is to check whether the h264 parameters map to a lavc pixfmt, and to bail out or reinitialize if they don't. Why can't this be delayed until later? What actually needs it sooner than the first "real" get_format? For dealing with parameter changes, why can't it check the raw parameters instead? Doing it this way seems a bit convoluted. (I understand it now that I've thought about it, but normally I'd find it very weird that decoding can somehow go on without using the user-decided pixfmt, or that the user-decided pixfmt sometimes doesn't seem to matter.)

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel