On Mon, May 30, 2011 at 05:07:38PM -0700, Alex Converse wrote:
> On Mon, May 30, 2011 at 2:07 PM, Diego Biurrun <[email protected]> wrote:
> >
> > --- a/libavcodec/mpegaudiodec.c
> > +++ b/libavcodec/mpegaudiodec.c
> > @@ -431,8 +431,6 @@ static av_cold int decode_init(AVCodecContext * avctx)
> >              is_table_lsf[j][k ^ 1][i] = FIXR(f);
> >              is_table_lsf[j][k][i]     = FIXR(1.0);
> > -            av_dlog(avctx, "is_table_lsf %d %d: %x %x\n",
> > -                    i, j, is_table_lsf[j][0][i], is_table_lsf[j][1][i]);
>
> This chunk seems highly suspicious
Yes, I guess it needs an explanation.  The declaration for is_table_lsf
looks like this:

  static INTFLOAT is_table_lsf[2][2][16];

So depending on whether or not CONFIG_FLOAT happens to be #defined, it is
an int or a float array.  In the int case everything is fine, but in the
float case the av_dlog will print nonsense, because 'x' is not a suitable
conversion specifier for float.

Two solutions occurred to me.  I could add a suitable #define that sets
the correct conversion specifier depending on the mode the file is
compiled as, but since this is the only place that manifests this
particular issue, it seemed like overkill.  Alternatively, I could just
drop the debug statement.  As this av_dlog seemed particularly simple
and trivial to recreate should the need arise, I chose to skip the
hassle and just delete it.

Diego
_______________________________________________
libav-devel mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-devel
