On 07/20/2010 01:43 PM, Dominik Tomczak wrote:
Hello

My application uses the libav libraries to encode frames with H264 and mux them
over RTP. The sequence is

int encodedBytes = avcodec_encode_video(...);
if (encodedBytes > 0)
{
    AVPacket packet;
    ...

    av_interleaved_write_frame(&packet);
}

I get the video frames in RGB from my web camera and convert them to YUV, but I
don't know how to set the DTS and PTS values for the packet.

When you get the frame from the camera, you associate a presentation timestamp
(PTS) with the frame. Generally, the API used to grab the frame provides you
with a valid PTS; if it does not, you can use something like gettimeofday()
and use the frame grabbing time as the PTS.
Then, before calling avcodec_encode_video(), you set the pts field in the AVFrame
(using the PTS associated with the frame you are encoding). WARNING: a pts is
expressed in "time base" units (see AVCodecContext:time_base), so you will
probably have to convert the pts value when filling the AVFrame.

After calling avcodec_encode_video(), AVCodecContext:coded_frame will
contain the pts and dts values generated by the codec.
You can now use AVCodecContext:coded_frame:{pts,dts} to set the pts and
dts fields in the AVPacket before calling av_interleaved_write_frame().
WARNING: again, remember that codec timestamps and format timestamps are
expressed in "time base" units, and the codec time base can be different
from the format time base... Hence, you need to convert them (you can do
this with av_rescale_q(); look at ffmpeg.c, at the calls to av_rescale_q()
immediately before write_frame()).


                                Luca
_______________________________________________
libav-user mailing list
[email protected]
https://lists.mplayerhq.hu/mailman/listinfo/libav-user
