[Libav-user] HLS: Exactly constant segment duration

2013-08-15 Thread Andrey Mochenov
Hi, We are using FFmpeg libraries git-ee94362 libavformat v55.2.100. Our purpose is to mux two streams (video and audio) into an M3U8 playlist using HLS. In addition, we want the duration of every TS segment file to be exactly 3.0 sec (the frame rate is 25 fps). To achieve this, we are trying to set several
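
The excerpt cuts off here. A minimal sketch of the usual approach, assuming the current libavformat API rather than the poster's 2013 tree: the "hls" muxer's hls_time option requests the target segment duration, but the muxer can only cut on keyframes, so the encoder must also emit a keyframe every 75 frames (3.0 s at 25 fps) for the segments to come out exactly constant. The function name and the reduced error handling below are illustrative.

/* Sketch: open the "hls" muxer and request ~3 s segments.  Segments are
 * cut on keyframes, so the video encoder must also be configured with a
 * fixed keyframe interval of 75 frames (3 s at 25 fps), or the segment
 * durations will drift. */
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

static int open_hls_muxer(AVFormatContext **out_ctx, const char *playlist)
{
    AVFormatContext *oc = NULL;
    AVDictionary *opts = NULL;
    int ret;

    ret = avformat_alloc_output_context2(&oc, NULL, "hls", playlist);
    if (ret < 0)
        return ret;

    /* target segment duration in seconds */
    av_dict_set(&opts, "hls_time", "3", 0);

    /* ... create the video and audio streams and open the encoders here;
     * for exactly constant segments set the video encoder's gop_size to 75
     * and, if the codec supports it, disable scene-cut keyframes ... */

    ret = avformat_write_header(oc, &opts);
    av_dict_free(&opts);
    if (ret < 0) {
        avformat_free_context(oc);
        return ret;
    }

    *out_ctx = oc;
    return 0;
}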

[Libav-user] Watermarks implementation

2013-08-15 Thread Andrey Mochenov
Hi, We are using FFmpeg libraries git-ee94362 libavformat v55.2.100. Our purpose is to add a watermark (a semi-transparent PNG image) to the video. The corresponding option in the ffmpeg application is -vf. Our question: how can we implement the feature using the FFmpeg libraries? Andrey Mochenov.
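
A hedged sketch of how an -vf chain such as "movie=watermark.png[wm];[in][wm]overlay=10:10[out]" maps onto the libavfilter API, assuming a reasonably current release (on 2013-era libraries avfilter_register_all() had to be called first). The PNG path, the 10:10 position, and the helper name are placeholders.

/* Sketch only: build through libavfilter the same chain that
 *   ffmpeg -i in.mp4 -vf "movie=watermark.png[wm];[in][wm]overlay=10:10[out]" ...
 * would use.  buffer_args carries the source frame description, e.g.
 * "video_size=WxH:pix_fmt=N:time_base=1/25:pixel_aspect=1/1". */
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/mem.h>

static int init_watermark_graph(AVFilterGraph **graph_out,
                                AVFilterContext **src_out,
                                AVFilterContext **sink_out,
                                const char *buffer_args)
{
    const char *desc = "movie=watermark.png[wm];[in][wm]overlay=10:10[out]";
    AVFilterGraph *graph   = avfilter_graph_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterContext *src = NULL, *sink = NULL;
    int ret = AVERROR(ENOMEM);

    if (!graph || !inputs || !outputs)
        goto fail;

    /* "buffer" feeds decoded frames in, "buffersink" pulls filtered frames out */
    ret = avfilter_graph_create_filter(&src, avfilter_get_by_name("buffer"),
                                       "in", buffer_args, NULL, graph);
    if (ret < 0)
        goto fail;
    ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"),
                                       "out", NULL, NULL, graph);
    if (ret < 0)
        goto fail;

    /* wire the [in]/[out] labels of the description to our endpoints */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = src;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;
    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = sink;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    ret = avfilter_graph_parse_ptr(graph, desc, &inputs, &outputs, NULL);
    if (ret < 0)
        goto fail;
    ret = avfilter_graph_config(graph, NULL);
    if (ret < 0)
        goto fail;

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    *graph_out = graph;
    *src_out   = src;
    *sink_out  = sink;
    return 0;

fail:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    avfilter_graph_free(&graph);
    return ret;
}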

Re: [Libav-user] Watermarks implementation

2013-08-15 Thread Paul B Mahol
On 8/15/13, Andrey Mochenov andrey2...@gmail.com wrote: Hi, We are using FFmpeg libraries git-ee94362 libavformat v55.2.100. Our purpose is to add a watermark (a semi-transparent PNG image) to the video. The corresponding option in the ffmpeg application is -vf. Our question: How to implement the
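
The reply is truncated in the archive. The usual follow-up for a graph like the one above (a sketch, not the actual answer from the thread) is a per-frame push/pull loop, with consume() standing in for whatever encodes and muxes the result:

/* Sketch of the per-frame loop: push each decoded frame into the "buffer"
 * source and drain watermarked frames from the "buffersink".  One input
 * frame may yield zero or more output frames. */
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>

static int filter_one_frame(AVFilterContext *src, AVFilterContext *sink,
                            AVFrame *decoded, AVFrame *filtered,
                            int (*consume)(AVFrame *filtered, void *opaque),
                            void *opaque)
{
    int ret = av_buffersrc_add_frame_flags(src, decoded,
                                           AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    for (;;) {
        ret = av_buffersink_get_frame(sink, filtered);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;                      /* need more input / graph drained */
        if (ret < 0)
            return ret;
        ret = consume(filtered, opaque);   /* e.g. encode + mux */
        av_frame_unref(filtered);
        if (ret < 0)
            return ret;
    }
}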

[Libav-user] How to feed data to libav from ethernet streams with video/audio threads

2013-08-15 Thread Екатерина
Hello, I'm working in Visual Studio 2010. I have two ethernet streams, one with an H.264 payload and one with a G.711 payload. I want to write the payloads to a file with an avi extension using libav. I can put data from the payload into the file **0.h264 and then, using the libav function (avformat_open_input,
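
The excerpt ends here. Assuming the goal is to remux the received elementary streams into AVI without re-encoding, a sketch of the output side could look like the following. It uses the current AVStream->codecpar API (the 2013 libraries used AVStream->codec instead); G.711 mu-law maps to AV_CODEC_ID_PCM_MULAW (A-law would be AV_CODEC_ID_PCM_ALAW), and the width, height, and time bases are placeholders for whatever the network payload actually carries.

/* Sketch: create an .avi output with one H.264 video stream and one
 * G.711 (mu-law) audio stream, to be fed with packets built from the
 * received payloads. */
#include <libavformat/avformat.h>

static int open_avi_output(AVFormatContext **out_ctx,
                           AVStream **video_st, AVStream **audio_st,
                           const char *filename)
{
    AVFormatContext *oc = NULL;
    int ret = avformat_alloc_output_context2(&oc, NULL, "avi", filename);
    if (ret < 0)
        return ret;

    AVStream *v = avformat_new_stream(oc, NULL);
    AVStream *a = avformat_new_stream(oc, NULL);
    if (!v || !a) {
        avformat_free_context(oc);
        return AVERROR(ENOMEM);
    }

    v->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    v->codecpar->codec_id   = AV_CODEC_ID_H264;
    v->codecpar->width      = 1280;                  /* placeholder */
    v->codecpar->height     = 720;                   /* placeholder */
    v->time_base            = (AVRational){1, 90000}; /* e.g. the RTP clock */

    a->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    a->codecpar->codec_id    = AV_CODEC_ID_PCM_MULAW; /* G.711 u-law */
    a->codecpar->sample_rate = 8000;
    /* set the mono channel count / layout here, per your FFmpeg version */
    a->time_base             = (AVRational){1, 8000};

    if (!(oc->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            avformat_free_context(oc);
            return ret;
        }
    }
    ret = avformat_write_header(oc, NULL);
    if (ret < 0)
        return ret;

    *out_ctx  = oc;
    *video_st = v;
    *audio_st = a;
    return 0;
}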

Re: [Libav-user] How to feed data to libav from ethernet streams with video/audio threads

2013-08-15 Thread Andrey Utkin
2013/8/15 Екатерина malu...@yandex.ru: I can guess how to collect a video frame but don't know how to specify the correct context (AVFormatContext) and write data to the AVPacket structure (possibly using av_packet_from_data). Please suggest how to feed data to libav from ethernet streams to get a file
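
The thread is truncated here. A sketch of the packet side of the answer, assuming the current AVPacket API: wrap each reassembled payload with av_packet_from_data() (the buffer must come from av_malloc() with AV_INPUT_BUFFER_PADDING_SIZE bytes of zeroed padding, and the call takes ownership of it), set stream_index, timestamps, and the keyframe flag, then hand it to av_interleaved_write_frame(). The write_payload() helper name is illustrative.

/* Sketch: wrap one received payload (e.g. a reassembled H.264 access unit
 * or a block of G.711 samples) in an AVPacket and hand it to the muxer. */
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

static int write_payload(AVFormatContext *oc, AVStream *st,
                         const uint8_t *payload, int size,
                         int64_t pts /* in st->time_base units */,
                         int is_keyframe)
{
    uint8_t *buf = av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
    if (!buf)
        return AVERROR(ENOMEM);
    memcpy(buf, payload, size);
    memset(buf + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    AVPacket *pkt = av_packet_alloc();
    if (!pkt) {
        av_free(buf);
        return AVERROR(ENOMEM);
    }
    int ret = av_packet_from_data(pkt, buf, size);  /* takes ownership of buf */
    if (ret < 0) {
        av_free(buf);
        av_packet_free(&pkt);
        return ret;
    }

    pkt->stream_index = st->index;
    pkt->pts = pkt->dts = pts;
    if (is_keyframe)
        pkt->flags |= AV_PKT_FLAG_KEY;

    /* the muxer interleaves video and audio by timestamp, so the two
     * receiving threads can share one output context as long as calls into
     * it are serialized (e.g. with a mutex) */
    ret = av_interleaved_write_frame(oc, pkt);
    av_packet_free(&pkt);
    return ret;
}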

[Libav-user] Help or Higher Level Interface Libraries

2013-08-15 Thread Glen
Hi All, I have not worked with the FFmpeg libraries before and would appreciate some initial guidance. I am researching a new project that involves random access to decoded video frames. I believe there is a need to initially map the “I” frame offsets in the input stream to gain the
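
The excerpt cuts off here. A common way to build such an I-frame map, sketched under the assumption that one demuxing pass over the file is acceptable: read packets with av_read_frame() and record the pts and byte position of every packet flagged AV_PKT_FLAG_KEY. struct KeyframeEntry and build_keyframe_index() are illustrative names.

/* Sketch: one demux pass recording where the keyframes sit, so that later
 * random access can seek to the nearest preceding keyframe and decode
 * forward to the wanted frame. */
#include <libavformat/avformat.h>
#include <stdlib.h>

typedef struct KeyframeEntry {
    int64_t pts;   /* presentation timestamp in the stream's time_base */
    int64_t pos;   /* byte offset in the input, -1 if unknown */
} KeyframeEntry;

static int build_keyframe_index(const char *url, int *video_stream,
                                KeyframeEntry **entries, int *count)
{
    AVFormatContext *ic = NULL;
    AVPacket *pkt = av_packet_alloc();
    KeyframeEntry *idx = NULL;
    int n = 0, cap = 0, ret;

    if (!pkt)
        return AVERROR(ENOMEM);
    if ((ret = avformat_open_input(&ic, url, NULL, NULL)) < 0)
        goto done;
    if ((ret = avformat_find_stream_info(ic, NULL)) < 0)
        goto done;
    *video_stream = av_find_best_stream(ic, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (*video_stream < 0) {
        ret = *video_stream;
        goto done;
    }

    while (av_read_frame(ic, pkt) >= 0) {
        if (pkt->stream_index == *video_stream &&
            (pkt->flags & AV_PKT_FLAG_KEY)) {
            if (n == cap) {
                cap = cap ? cap * 2 : 64;
                KeyframeEntry *tmp = realloc(idx, cap * sizeof(*idx));
                if (!tmp) {
                    ret = AVERROR(ENOMEM);
                    av_packet_unref(pkt);
                    goto done;
                }
                idx = tmp;
            }
            idx[n].pts = pkt->pts;
            idx[n].pos = pkt->pos;
            n++;
        }
        av_packet_unref(pkt);
    }
    ret = 0;

done:
    av_packet_free(&pkt);
    avformat_close_input(&ic);
    *entries = idx;
    *count = n;
    return ret;
}

With such an index (or even without it), random access is then typically av_seek_frame() with AVSEEK_FLAG_BACKWARD to land on the keyframe at or before the target timestamp, followed by decoding forward and discarding frames until the requested one.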