Decoded data in libav* is stored in an AVFrame structure; in the case of video, that's one complete frame. You create an AVFrame, fill in its data, and encode it with avcodec_encode_video(). For the purposes of encoding an image, you're really just creating a "video" with one frame. (Similarly, for decoding, you treat the JPEG file as a "video" with one frame.)
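
Roughly, the flow looks like this. This is only an untested sketch against the old-style API from around that time (avcodec_encode_video, avcodec_alloc_context, CODEC_ID_MJPEG); the encode_jpeg wrapper name and the output buffer arguments are just placeholders:

    #include <libavcodec/avcodec.h>

    /* Encode one already-filled YUVJ420P AVFrame to JPEG bytes in out_buf.
     * Returns the number of bytes written, or < 0 on error. */
    int encode_jpeg(AVFrame *frame, int width, int height,
                    uint8_t *out_buf, int out_buf_size)
    {
        AVCodec *codec;
        AVCodecContext *ctx;
        int size;

        avcodec_register_all();

        codec = avcodec_find_encoder(CODEC_ID_MJPEG);
        ctx = avcodec_alloc_context();
        ctx->width     = width;
        ctx->height    = height;
        ctx->pix_fmt   = PIX_FMT_YUVJ420P;      /* what the jpeg encoder expects */
        ctx->time_base = (AVRational){1, 25};   /* required even for a single frame */

        if (avcodec_open(ctx, codec) < 0)
            return -1;

        /* a "video" with one frame: a single call to the encoder */
        size = avcodec_encode_video(ctx, out_buf, out_buf_size, frame);

        avcodec_close(ctx);
        av_free(ctx);
        return size;
    }
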

The output-example.c sample shows how to create the AVFrame and encode it; there is another sample app in libavcodec/api-example.c in the source distribution to look at too. If you need to convert between pixel formats (e.g. RGB to YUV), see the sws_scale() function. (The frame given to the codec to encode has to have a pixel format compatible with the codec. I use PIX_FMT_YUVJ420P for jpeg. Those constants are in libavutil/pixfmt.h.)
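
For the conversion step, something along these lines should work. Again only a sketch, assuming packed RGB24 input with a stride of 3*width; the rgb_to_yuvj420p helper is made up for illustration:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Wrap an in-memory RGB24 buffer into a YUVJ420P AVFrame suitable for
     * the jpeg/mjpeg encoder.  The caller frees the frame's data[0] buffer
     * and the frame itself when done. */
    AVFrame *rgb_to_yuvj420p(uint8_t *rgb, int width, int height)
    {
        AVFrame *frame = avcodec_alloc_frame();
        int size = avpicture_get_size(PIX_FMT_YUVJ420P, width, height);
        uint8_t *buf = av_malloc(size);

        avpicture_fill((AVPicture *)frame, buf, PIX_FMT_YUVJ420P, width, height);

        struct SwsContext *sws = sws_getContext(width, height, PIX_FMT_RGB24,
                                                width, height, PIX_FMT_YUVJ420P,
                                                SWS_BICUBIC, NULL, NULL, NULL);

        uint8_t *src[4]     = { rgb, NULL, NULL, NULL };
        int src_stride[4]   = { 3 * width, 0, 0, 0 };

        sws_scale(sws, src, src_stride, 0, height,
                  frame->data, frame->linesize);

        sws_freeContext(sws);
        return frame;
    }
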

Study the sample code; I would also generate doxygen docs from the libav* source to help you navigate around and familiarize yourself with the APIs and how they work.

Andy

On 7/22/2010 11:15 AM, Denis Gottardello wrote:
On Thursday, 22 July 2010, Michael Chisholm wrote:
You ought to be able to decode and encode JPEGs with libav's built-in jpeg
codec. I've done both before with the "mjpeg" codec.  I didn't need
libjpeg. It's not C++ though.  You'd have to either use C from within a C++
app, or find some other library.  From the looks of things, libjpeg isn't
C++ either...



Ignoring the language, what is the right way to use the ffmpeg libraries to
translate an in-memory RGB (or YUV) image into a JPEG image?



