On duodi 22 Frimaire, year CCXIX, Thomas Worth wrote:
> Maybe I'm not understanding the self-imposed limitation.
This is not a limitation.

A codec transforms an image (AVFrame) into a sequence of bytes (AVPacket). A muxer writes a sequence of bytes to a structured file. These are two separate roles.

The uncompressed video codec is trivial: you can (and did) implement it with a few memcpy calls, instead of the DCTs and entropy coding found in most video codecs. But it is still a codec, with a role completely distinct from the muxer. When the muxer gets the sequence of bytes, it does not know whether they were produced by a few memcpy calls or by thousands of lines of complex code: it just gets the bytes.

As it happens, there is one muxer (yuv4mpeg) that can only output one particular type of uncompressed video. For that particular format, someone thought it was a good idea to implement the muxer together with the trivial codec, in order to gain a few microseconds by merging redundant memcpy calls. This was, IMHO, a bad idea. The length of the discussion needed to explain this to you, and the need for a special case in the main program, are proof enough of that.

Every other muxer follows the standard API: you give it a sequence of bytes with a size, no matter whether the video is compressed or not. Doing things any other way would lead to duplicated code all over the place for dubious gain.

> This "uncompressed" video IS raw

And the QuickTime muxer does not care: it wants a sequence of bytes. Concatenating the planes from the picture is your responsibility. The only point where the rawness of the video is relevant is that you can build the packet with very simple code, or even ensure that it is built directly in place, rather than relying on a complex library.

Hope this concludes the discussion.

Regards,

-- 
  Nicolas George
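[Editor's note: the sketch below is not part of the original mail. It is a minimal illustration of the split described above: concatenate the planes of a raw frame into an AVPacket yourself, then hand the packet to the muxer, which only sees bytes and a size. It assumes YUV420P with even dimensions, a stream that is already set up, and timestamps already expressed in the stream's time base; the helper name write_raw_frame is made up, and packet helpers have shifted between libav/FFmpeg versions, so treat this as a sketch rather than a drop-in implementation.]

/* Pack one raw YUV420P frame into an AVPacket and send it to the muxer. */
#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static int write_raw_frame(AVFormatContext *oc, AVStream *st,
                           const AVFrame *frame, int w, int h, int64_t pts)
{
    /* YUV420P: one full-size luma plane, two quarter-size chroma planes. */
    int size = w * h + 2 * ((w / 2) * (h / 2));
    AVPacket pkt;
    uint8_t *dst;
    int ret, plane;

    ret = av_new_packet(&pkt, size);
    if (ret < 0)
        return ret;
    dst = pkt.data;

    /* Concatenate the planes row by row: linesize may include padding,
     * so a single memcpy per plane is not correct in the general case. */
    for (plane = 0; plane < 3; plane++) {
        int pw = plane ? w / 2 : w;
        int ph = plane ? h / 2 : h;
        const uint8_t *src = frame->data[plane];
        int y;
        for (y = 0; y < ph; y++) {
            memcpy(dst, src, pw);
            dst += pw;
            src += frame->linesize[plane];
        }
    }

    /* Every raw frame is a key frame as far as the muxer is concerned. */
    pkt.flags        |= AV_PKT_FLAG_KEY;
    pkt.stream_index  = st->index;
    pkt.pts = pkt.dts = pts;   /* in the stream's time base */

    /* The muxer just gets a sequence of bytes with a size. */
    return av_interleaved_write_frame(oc, &pkt);
}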
