On 2009-08-03 at 22:21:27 [+0200], Art Clarke <[email protected]> wrote:
> On Mon, Aug 3, 2009 at 3:11 PM, Stephan Assmus <[email protected]> wrote:
> 
> >
> > Sorry for replying to myself, but for future reference, I want to 
> > share the solution to my problem, in case someone else stumbles over 
> > this. I have to say that there is disappointingly little help from 
> > actual FFmpeg developers on this list. I would at least expect 
> > follow-up questions if a problem cannot be understood at first.
> >
> 
> I would have expected more patience -- your original e-mail was sent 
> less than 24 hours ago, and on a Sunday no less.  Your response does 
> little to encourage others to help.  That said...

That is true for my last e-mail, and I certainly wasn't patient enough, but 
I have asked a couple of questions here before, and help was very sparse. 
And - aren't Sundays ideal for asking questions on an Open Source project 
list? :-D That said, I want to apologize and thank you for your help.


> > Is this behavior intentional? If not, maybe someone could point me at 
> > some code that could be improved, so that opening the codec contexts is 
> > not required anymore when just using the muxer functionality.
> >
> 
> The problem is that your code that wraps avformat needs to be responsible 
> for setting the timestamps in the right time-base, regardless of those 
> other encoders.  For example, you could have a video encoder doing H263 
> encoding that is outputting with a time-base of 1/15, but when inserting 
> into FLV with libavformat, the packets need a time-base of 1/1000.  I 
> know this isn't code, but perhaps some background on time-bases would 
> help:
> 
> http://wiki.xuggle.com/Concepts#Time_Bases
> 
> That article is written referring to the Xuggler API objects, but the 
> concepts are the same for libav (Xuggler's API is a wrapper around libav 
> similar to what you're doing, but in Java).  AVCodecContext objects have 
> their own units they can output timestamps in, and because the 
> AVCodecContext doesn't know what AVFormatContext it will eventually end 
> up in, AVStreams also have their own timebase.  You must convert as 
> appropriate using the av_rescale family of functions.
> 
> See ffmpeg.c and ffplay.c for lots of examples of doing this.  When 
> decoding, they convert timestamps on packets from stream time bases into 
> a time base of 1/1000000, and when encoding, they convert as appropriate 
> to the right time base.

Thanks for the pointers. But in my original email, I wrote that I did 
exactly what you describe. I computed the PTS myself, based on what the 
AVStream time base was adjusted to after calling av_write_header(). 
Following output_example.c, I am no longer generating PTS for audio packets 
at all, and I also don't set the time_base of either the AVStream or the 
AVCodecContext of the audio stream. For the video stream, I only set the 
time_base on the AVStream->codec, again following the API example. In 
ffmpeg.c, I did see some av_rescale() calls happening, but it is quite 
confusing. I think I understand it better now with the information you have 
given me.
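
If I put that into code, I believe the conversion boils down to something 
like this (just a sketch of my understanding; write_video_packet() and the 
variable names are made up for illustration, not taken from my actual 
code):

#include <libavformat/avformat.h>

/* Sketch: rescale an encoded packet's PTS from the codec's time
 * base into the stream's time base before handing it to the muxer. */
static int write_video_packet(AVFormatContext *oc, AVStream *video_st,
                              AVPacket *pkt)
{
    /* The encoder stamps pkt->pts in AVCodecContext.time_base;
     * the muxer expects it in AVStream.time_base. */
    pkt->pts = av_rescale_q(pkt->pts,
                            video_st->codec->time_base,
                            video_st->time_base);
    pkt->stream_index = video_st->index;
    return av_interleaved_write_frame(oc, pkt);
}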

I do see one problem with your explanation, though. In my code, just 
opening and closing the AVCodecContext for each stream makes all the 
difference (I tested that with a #define to either include this code or 
not). These instances are not used for anything when I call 
av_write_frame(). So I strongly suspect that something is still fishy. How 
can it be explained that just opening and closing the AVCodecContext 
instances, without using them for anything, makes all the difference here?
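
For clarity, the #define-guarded code is roughly equivalent to this (a 
hypothetical reconstruction for illustration, not a verbatim paste from my 
sources; OPEN_CODEC_CONTEXTS is just an illustrative name):

#ifdef OPEN_CODEC_CONTEXTS
/* Open and immediately close each stream's codec context before
 * av_write_header(). Nothing is ever encoded through them, yet
 * with this block compiled in, the muxed timestamps come out right. */
unsigned int i;
for (i = 0; i < oc->nb_streams; i++) {
    AVCodecContext *cc = oc->streams[i]->codec;
    AVCodec *codec = avcodec_find_encoder(cc->codec_id);
    if (codec && avcodec_open(cc, codec) == 0)
        avcodec_close(cc);
}
#endif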

Best regards,
-Stephan