On 12.01.2016 03:26, Ronald S. Bultje wrote:
> On Mon, Jan 11, 2016 at 12:06 AM, Mats Peterson <
>> On 01/10/2016 11:56 AM, Andreas Cadhalpun wrote:
>>> --- a/libavformat/qtpalette.c
>>> +++ b/libavformat/qtpalette.c
>>> @@ -48,7 +48,7 @@ int ff_get_qtpalette(int codec_id, AVIOContext *pb,
>>> uint32_t *palette)
>>>
>>>      /* If the depth is 1, 2, 4, or 8 bpp, file is palettized. */
>>>      if ((bit_depth == 1 || bit_depth == 2 || bit_depth == 4 ||
>>>           bit_depth == 8)) {
>>> -        int color_count, color_start, color_end;
>>> +        uint32_t color_count, color_start, color_end;
>>>          uint32_t a, r, g, b;
>>>
>>>          /* Ignore the greyscale bit for 1-bit video and sample
>>>
>>
>> ping
>
> Why are we using stdint types for non-vector data here? Our custom has
> always been to use sized (stdint-style) data only for vector data
> (arrays etc.), and use native-sized types (e.g. unsigned, int,
> whatever) for scalar values. Why are we making exceptions here?
I can't find this convention in our coding rules [1]. The main reason why
I used uint32_t instead of unsigned here was consistency with the line
below. But as Ganesh explained, it makes perfect sense to use uint32_t at
least for color_start (see the sketch below).

Best regards,
Andreas

1: https://ffmpeg.org/developer.html#Coding-Rules-1
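P.S. To make the rationale explicit for the archives, here is a minimal
sketch of the failure mode as I understand Ganesh's point (simplified
from ff_get_qtpalette; the function name and surrounding checks here are
illustrative, not the actual code):

#include <stdint.h>
#include "libavformat/avio.h" /* avio_rb32()/avio_rb16() return unsigned int */

static void read_palette_range(AVIOContext *pb, uint32_t *palette)
{
    uint32_t color_start, color_count, color_end, i;

    /* With a plain int, the implicit unsigned -> signed conversion of
     * avio_rb32()'s return value is implementation-defined; on common
     * platforms a file with the high bit set yields a negative
     * color_start, "color_start <= 255" still passes, and the loop
     * below starts from a bogus index. With uint32_t the comparison is
     * well-defined and the range check rejects such files. */
    color_start = avio_rb32(pb); /* offset of the first palette entry */
    color_count = avio_rb16(pb); /* flags / number of entries */
    color_end   = avio_rb16(pb); /* index of the last palette entry */

    if (color_start <= 255 && color_end <= 255) {
        for (i = color_start; i <= color_end; i++) {
            /* read a, r, g, b via avio_rb16() and fill palette[i] */
        }
    }
}

So beyond consistency, the unsigned type keeps the bounds check sound
for untrusted input.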