On 15/04/12 20:00, ashika umanga wrote:
> Greetings all,
> 
> By referring to "muxing.c" and "api-example.c" I managed to encode audio+video
> streams and create OGG files.
> Now I want to encode a video stream captured from a webcam and audio coming
> from the microphone.
> Assuming I manage to get these two streams using some API calls, is there
> a special way to synchronize the two while encoding?
> In the "muxing.c" example, the streams are just encoded without any
> synchronization.
> I assume the PTS/DTS values have something to do with this.

Yes it does =)

the rule of thumb is: pick a time reference (and time base) precise
enough to represent both audio samples and video frames, and use it to
timestamp the frames and samples at the moment you capture them. During
encoding there may be some reordering (video) or aggregation (audio);
in the end, the packets you write to the container format should carry
the timing information you get back from the encoders.

You can refer to avcodec_encode_audio2() and avcodec_encode_video2();
avconv might not be the simplest example, but it should help you.

lu

-- 

Luca Barbato
Gentoo/linux
http://dev.gentoo.org/~lu_zero

_______________________________________________
libav-api mailing list
[email protected]
https://lists.libav.org/mailman/listinfo/libav-api
