Because it's assumed that if there is a pts/dts value, then that value is considered more "right" than the reordered opaque one.
On Fri, Jun 26, 2009 at 9:42 AM, Malcolm Bechard <[email protected]> wrote:
> On Wed, Jun 24, 2009 at 4:50 PM, Erik Van Grunderbeeck <[email protected]> wrote:
>
> > >Thanks again. Just to make sure I'm clear, can we run through an example?
> > >
> > >So I have a file with audio and video.
> > >The audio stream has a start time of 56994
> > >The video stream has a start time of 64194
> > >Both have a time base of 1/90000
> > >
> > >The first audio packet I get has a pts of 56994
> > >The first video packet I get has a pts of 64194
> > >The next video packets have a pts of: (coming out in this order)
> > >56994
> > >60594
> > >74994 <- need packet re-order here, if all these are video?
> > >67794 <- need packet re-order here, if all these are video?
> > >71394
> > >
> > >I get a frame on the 2nd call to avcodec_decode_video2() (i.e. on the
> > >packet with a pts of 56994).
> > >
> > >So given this, should I be playing the first set of decoded audio at the
> > >same time that I show the first decoded frame? Or is there some delay I
> > >need to add to one of the output streams? The fact that my lowest pts for
> > >video is 56994, but the start_time is 64194, is a little confusing to me;
> > >I'm just trying to understand the offsets here.
> >
> > Your video decoder pulls data from the video queue. You have to correlate
> > the timestamp from the video packet to the audio. In your case, video with
> > pts 64194 needs to be played when the audio around that is played. I don't
> > know what the timestamp is after audio 56994, but your video probably needs
> > to be played between audio pts 56994 and the next one (say audio pts 8000).
> > Now, audio usually comes in an AVPacket that when decompressed is one
> > second or so long.
> > So, you need to interpolate your timestamp across the audio buffer
> > (use channel count, frequency, etc.); ffplay does it with
> > is->audio_clock += (double)data_size / (double)(n * dec->sample_rate);
> > in audio_decode_frame().
> >
> > For video, we need to build a "wait/sleep" until we are around pts 64194
> > (the pts of the video). Code that does that can be found in ffplay, in the
> > function compute_frame_delay(). I'm referring to that since you might be
> > able to step through it and see what it does.
> >
> > Seeing your video queue, I see timestamps that are non-linear?
> >
> > In a nutshell:
> >
> > Add audio and video to separate queues.
> >
> > Poll both queues, each in a separate thread (with enough locking to
> > protect the queues, of course).
> >
> > Start playing the audio (by feeding the packets you stored in the audio
> > queue to the decoder, and from there to your WAVE/PCM player).
> >
> > When you feed the audio to the PCM player, keep its timestamp for the
> > packet; call that the clock. Remember that one AVPacket can contain
> > several audio buffers when decoded, so you may need to interpolate your
> > timestamps over the size of the buffer.
> >
> > Sleep/delay your video thread until the timestamp of the audio packet
> > that is playing matches your video (bigger than).
> >
> > Display your video frame.
> >
> > Keep feeding packets to both queues (and threads) until stop.
> >
> > _______________________________________________
> > libav-user mailing list
> > [email protected]
> > https://lists.mplayerhq.hu/mailman/listinfo/libav-user
>
> Great, thanks again. So what I'm getting from this is that the pts/dts is
> in absolute time, and requires no offsetting (just time_base conversion if
> they are in different time bases). Looking at the ffplay.c code more: I
> didn't understand what reordered_opaque was for; now I do, thanks.
> This brings up another question though.
> The pts is almost never used; it uses the dts of the decoded packet as the
> pts (since decoder_reorder_pts is 0 by default, and I've never seen a
> format where the packets have a dts == AV_NOPTS_VALUE, although I'm sure
> it exists).
>
> (code from ffplay.c)
>
>     is->video_st->codec->reordered_opaque = pkt->pts;
>     len1 = avcodec_decode_video2(is->video_st->codec,
>                                  frame, &got_picture,
>                                  pkt);
>
>     if ((decoder_reorder_pts || pkt->dts == AV_NOPTS_VALUE) &&
>         frame->reordered_opaque != AV_NOPTS_VALUE)
>         pts = frame->reordered_opaque;
>     else if (pkt->dts != AV_NOPTS_VALUE)
>         pts = pkt->dts;
>     else
>         pts = 0;
>
> I guess I should just take this at face value and copy what it's doing,
> but I'd love to understand why not just use the reordered_opaque in all
> cases?
--
Visit my blog at http://oddco.ca/zeroth/zblog
