>On Mon, Jan 2, 2023 at 4:51 PM wolverin via Libav-user < [email protected] 
>> wrote:
>>>Did I understand you correctly that I need to multiply the time_base by the 
>>>system time between adjacent frames and add the result to the previous PTS?
>>> 
>>>Nope.
>>>PTS is just timebase * seconds, represented as an int64 number.
>>> 
>>>There are no additions involving next/prev pts values.
>>> 
>> 
>>Then pts is the time of one frame?
> 
>Yes, when used together with the associated time_base AVRational.
> 
Thank you very much, you helped me a lot; this significantly improved the 
smoothness of playback, but I still see one or two inexplicable pauses at the 
beginning when watching the live video in VLC.
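
Just to confirm I have the pts = time_base * seconds rule right, here is how I 
read it (only a small sketch; the 90 kHz time_base and the 2.5 s instant are 
purely illustrative):

#include <libavutil/mathematics.h>   /* av_rescale_q() */
#include <libavutil/rational.h>

/* pts is the frame time expressed in units of time_base, stored as int64_t.
 * With time_base = 1/90000, a frame at t = 2.5 s gets pts = 2.5 * 90000 = 225000. */
AVRational time_base = {1, 90000};
int64_t pts = av_rescale_q(2500,                    /* 2.5 s expressed in ms */
                           (AVRational){1, 1000},   /* millisecond timebase  */
                           time_base);              /* -> 225000             */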
 
Currently only the output AVStream->time_base is used to calculate pts/dts. 
But then what is the output AVCodecContext->time_base needed for, and what 
values should it hold? Do I need to initialize AVStream->avg_frame_rate or 
AVStream->r_frame_rate? Changing them has almost no effect on the video, but 
for some reason it increases the traffic.
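
For context, this is roughly how I understand those fields relate on the 
output side (a sketch only; pStrmOut and the 30 fps value are placeholder 
names, not my actual code):

/* The encoder ticks in AVCodecContext->time_base, the muxer in
 * AVStream->time_base; avformat_write_header() may replace the latter. */
AVRational fps = {30, 1};                        /* assumed camera frame rate */
pCdcCtxOut->time_base    = av_inv_q(fps);        /* e.g. 1/30                 */
pCdcCtxOut->framerate    = fps;
pStrmOut->time_base      = pCdcCtxOut->time_base;
pStrmOut->avg_frame_rate = fps;                  /* informational for players */

/* ...after encoding, convert packet timestamps from the encoder timebase
 * into whatever timebase the muxer actually chose before writing: */
av_packet_rescale_ts(pPktOut, pCdcCtxOut->time_base, pStrmOut->time_base);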
 
>>avcodec_send_packet(pCdcCtxInp, pPktInp);
>>avcodec_receive_frame(pCdcCtxInp, pFrm);
>>pFrm->pts = av_rescale_q(pPktInp->pts, pFmtCtxInp->streams[0]->time_base, 
>>pCdcCtxInp->time_base);
>> 
>>avcodec_send_frame(pCdcCtxOut, pFrm);
>>avcodec_receive_packet(pCdcCtxOut, pPktOut);
>>pPktOut->pts = av_rescale_q(pFrm->pts, pCdcCtxInp->time_base, 
>>pCdcCtxOut->time_base);
>>pPktOut->dts = pPktOut->pts;
> 
>Inspect the values of the input/output pts and the timebases used (with 
>printfs or other means...) and you can figure out where the PTS becomes 
>nonsense.
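
Something like this is what I will try for logging (names follow my snippet 
above; needs <inttypes.h> for PRId64):

av_log(NULL, AV_LOG_INFO,
       "in pkt pts=%" PRId64 " tb=%d/%d | frame pts=%" PRId64
       " | out pkt pts=%" PRId64 " tb=%d/%d\n",
       pPktInp->pts,
       pFmtCtxInp->streams[0]->time_base.num,
       pFmtCtxInp->streams[0]->time_base.den,
       pFrm->pts,
       pPktOut->pts,
       pCdcCtxOut->time_base.num, pCdcCtxOut->time_base.den);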
 
As I understand it, the input stream from the USB camera has a constant frame 
rate, while after transcoding I get a variable frame rate; in addition, I may 
start reading frames from the camera earlier in order to also write them to 
disk, so I can't simply rescale the timestamps directly.
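
One idea I am considering for the live case is to derive pts from the wall 
clock instead of from the camera's counter (again only a sketch; start_time is 
my own variable, taken once before the first frame):

#include <libavutil/time.h>          /* av_gettime() */

int64_t start_time = av_gettime();   /* microseconds */

/* ...inside the capture loop, after avcodec_receive_frame(): */
int64_t elapsed_us = av_gettime() - start_time;
pFrm->pts = av_rescale_q(elapsed_us,
                         (AVRational){1, 1000000},   /* microsecond timebase */
                         pCdcCtxOut->time_base);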

 