On Mar 28, 2013, at 12:03 , Brad O'Hearne wrote:

> On Mar 27, 2013, at 9:25 PM, Clément Bœsch <ubi...@gmail.com> wrote:
> 
>> You realize FFmpeg 0.6 is 3 years old, right? 
> 
> I know it very well -- but with FFmpeg documentation / examples, new doesn't 
> necessarily mean more helpful. I've spent more hours than I'd like to count 
> scouring the Internet for anything that could shed light on various aspects 
> of getting my use case built. That 3 year old example is one of the only 
> video / audio encoding examples I've found that even addresses pts / dts. 
> Take a look at the current decoding_encoding.c in the samples you refer to. 
> It doesn't address this at all. In fact, the example itself is very limited 
> in usefulness, as it isn't really addressing what would likely be a 
> real-world use-case. 
> 

Although the example you are using might be three years old, that does not 
mean that you cannot use a recent ffmpeg. You might have to change a few 
function calls, but other than that it should work.

I don't claim to be an expert, but here are some points I remember from when I 
built an encoder using the ffmpeg libraries:

First of all, you should rule out that your player itself introduces whatever 
is causing the audio and video to be out of sync. (I assume you did, but you 
are not mentioning it.)

When you encode audio and video, you have to supply the pts and dts values for 
each packet yourself. The encoding function for video and the encoding function 
for audio do not know about each other; they do not communicate. You set these 
values and pass them in when you write the already encoded packet. As far as I 
remember, the write function does not set these values for you, it only checks 
that what you pass in is plausible.
If you mix already encoded audio and video, you have to remux it, which is 
basically the same writing of packets as after encoding, and in that process 
you set the correct dts and pts values yourself.
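
To make that concrete, here is a rough sketch of the video side (audio works 
the same way). The names enc_ctx, out_stream, fmt_ctx and frame_index are 
placeholders for your own encoder context, output stream, output format 
context and frame counter, and I'm using the avcodec_encode_video2() API from 
a recent ffmpeg, so take it as an illustration, not copy/paste code:

    /* Sketch only: enc_ctx, out_stream, fmt_ctx and frame_index are
       placeholders for your own contexts and counters. */
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;   /* let the encoder allocate the data */
    pkt.size = 0;

    /* You decide what the pts is, in the codec time base. */
    frame->pts = frame_index++;

    int got_packet = 0;
    if (avcodec_encode_video2(enc_ctx, &pkt, frame, &got_packet) == 0 && got_packet) {
        /* The muxer expects pts/dts in the stream time base, so rescale
           them from the codec time base before writing the packet. */
        if (pkt.pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pkt.pts, enc_ctx->time_base, out_stream->time_base);
        if (pkt.dts != AV_NOPTS_VALUE)
            pkt.dts = av_rescale_q(pkt.dts, enc_ctx->time_base, out_stream->time_base);
        pkt.stream_index = out_stream->index;

        /* av_interleaved_write_frame() takes ownership of the packet data. */
        av_interleaved_write_frame(fmt_ctx, &pkt);
    }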

Once you understand that you are responsible for setting these values, and that 
there is no magic communication between audio and video involved, it is quite 
simple.

If you base your dts and pts values on the time when you received the data, 
after it went through various buffers, you have to take the delay caused by 
those buffers into consideration.
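
For example (again just a sketch; capture_time_us, stream_start_us and 
buffer_delay_us are made-up names for values you would have to measure or 
track yourself, in microseconds):

    /* Derive the pts from the wall-clock time at which the data arrived,
       compensating for the known delay of the buffers it went through. */
    int64_t elapsed_us = (capture_time_us - stream_start_us) - buffer_delay_us;

    /* Convert microseconds into the codec time base before encoding. */
    frame->pts = av_rescale_q(elapsed_us, AV_TIME_BASE_Q, enc_ctx->time_base);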

Hth,  regards,
Kalileo


