Hi all!
I am writing a live transcoder (https://gitorious.org/live-transcoder) that
can accept input streams from webcams (and other live streams), transcode
them into given formats, and distribute them via different transports
(currently HTTP only).
At work we plan to use this transcoder for transcod
Hi List,
I am writing a live transcoder for our IP cameras; in some cases the audio
stream also needs to be transcoded.
But I have a problem with a non-equal frame_size in the input codec context
and the output codec context:
1. I read an audio packet
2. I decode it with avcodec_decode_audio4()
It provides AVFrames with nb_
Nicolas, thanks for the answer. I am currently implementing this solution,
but I get a warning message:
Queue input is backward in time
but I think I will resolve it :-)
2012/11/15 Nicolas George
> On quintidi 25 Brumaire, an CCXXI, hatred wrote:
> > I think that I must split one AVFrame in
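Splitting decoded frames into the encoder's fixed frame_size is usually done with a sample FIFO; FFmpeg itself ships one as AVAudioFifo (libavutil/audio_fifo.h, with av_audio_fifo_write() and av_audio_fifo_read()). Below is a minimal stdlib-only sketch of the same buffering idea, with hypothetical names (SampleFifo, fifo_write, fifo_read) and interleaved 16-bit mono samples assumed:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sample buffer: accepts decoded frames of arbitrary sample
 * count and hands samples back out in fixed encoder-sized frames.
 * FFmpeg provides the real thing as AVAudioFifo (libavutil/audio_fifo.h). */
#define FIFO_CAP 8192

typedef struct {
    short data[FIFO_CAP];
    int   count;            /* samples currently buffered */
} SampleFifo;

/* Append nb samples from a decoded frame (any size, e.g. MP3's 1152). */
static void fifo_write(SampleFifo *f, const short *samples, int nb)
{
    memcpy(f->data + f->count, samples, nb * sizeof(*samples));
    f->count += nb;
}

/* Pop exactly frame_size samples for the encoder (e.g. AAC's 1024);
 * returns 0 if not enough samples have accumulated yet. */
static int fifo_read(SampleFifo *f, short *out, int frame_size)
{
    if (f->count < frame_size)
        return 0;
    memcpy(out, f->data, frame_size * sizeof(*out));
    f->count -= frame_size;
    memmove(f->data, f->data + frame_size, f->count * sizeof(*out));
    return frame_size;
}
```

The decode loop writes every decoded frame into the FIFO, then drains it in frame_size chunks; leftover samples simply wait for the next decoded frame.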
Hi List!
I have FFmpeg 1.0 installed on my Arch Linux box. I am trying to write a
simple audio+video transcoder example using the existing
doc/examples/muxing.c and doc/examples/demuxing.c, and I have a problem:
I can't hear sound in the resulting file.
Source example with the command line to compile it: http://pas
Hi, Hannes!
My comments are below, along with working code.
2012/11/21 Hannes Wuerfel
> On 21.11.2012 07:42, hatred wrote:
>
> In my case, I'd like to manipulate video frames of the file with some
> image processing kernels etc. and write the file with the same codecs they
> have
Hi Svetlana,
Try these params:
AVCodecContext::rc_min_rate - minimum bitrate
AVCodecContext::rc_max_rate - maximum bitrate
AVCodecContext::bit_rate - the average bitrate
Also see the preset files for H.264 encoding.
2012/11/23 Svetlana Olonetsky
> Hi Everyone,
>
> I am trying to encode to h264 and ge
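A sketch of how those three fields fit together, using a hypothetical mock of the AVCodecContext rate-control fields (the real struct lives in libavcodec/avcodec.h): pinning min == avg == max is the usual way to ask for (near-)constant bitrate, and real CBR with x264 normally also needs rc_buffer_size set.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mock of the three rate-control fields named in the post;
 * the real ones are members of AVCodecContext. */
typedef struct {
    int64_t bit_rate;     /* average bitrate, bits/s */
    int64_t rc_min_rate;  /* minimum bitrate */
    int64_t rc_max_rate;  /* maximum bitrate */
} RateControl;

/* For (near-)constant bitrate, pin min and max to the average.
 * For capped VBR, leave rc_min_rate at 0 and set only the ceiling. */
static void set_cbr(RateControl *rc, int64_t bps)
{
    rc->bit_rate    = bps;
    rc->rc_min_rate = bps;
    rc->rc_max_rate = bps;
}
```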
2012/11/23 Hannes Wuerfel
> Good job, man. This works perfectly for me.
>
Thanks :-)
> But I discovered the same issue when transcoding from one codec to
> another. Now only one thing is different in my code.
> As in the demuxing.c example, I'm using something like this after the
> trancod
Hi Hannes,
This seems off-topic, but why don't you use the C++ interface of OpenCV?
like:
#include
instead of:
#include
more info: http://docs.opencv.org/modules/refman.html and
http://docs.opencv.org/modules/core/doc/intro.html#api-concepts
Libav-user
Hi Igor,
As far as I know, some formats require specific time bases (like MPEG-TS
streams), so you cannot change them. But you can manipulate the
CodecContext and set the time base you need. For example, if the FPS
equals 25, you can set the coder time base to 1/25 and simply increment
the PTS by 1 f
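The increment-by-one scheme above can be sketched with plain arithmetic; Rational here is a hypothetical stand-in for FFmpeg's AVRational:

```c
#include <assert.h>

/* Stand-in for FFmpeg's AVRational {num, den}. */
typedef struct { int num, den; } Rational;

/* With the coder time base set to 1/FPS, each encoded frame's PTS
 * advances by exactly 1, and the presentation time in seconds is
 * pts * num / den. */
static double pts_to_seconds(long pts, Rational tb)
{
    return pts * (double)tb.num / tb.den;
}
```

So at 25 FPS with time base 1/25, frame 0 gets pts 0, frame 25 gets pts 25 and plays at exactly 1.0 second.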
Hi Igor, my comments are below.
2012/11/28 Morduhaev, Igor (ig...@stats.com)
> I had FPS issues when transcoding from AVI to MP4 (x264). Eventually the
> problem was in the PTS and DTS values, so lines 12-15 were added before
> the av_interleaved_write_frame function:
>
>
>
> 1. AVFormatContext* outContai
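Fixing PTS/DTS before av_interleaved_write_frame() means converting them from the codec time base to the output stream time base, which is what av_rescale_q() does. A simplified stdlib-only sketch of the same arithmetic (the real function adds rounding-mode and overflow handling):

```c
#include <assert.h>
#include <stdint.h>

/* What av_rescale_q(a, bq, cq) computes: a * bq / cq, i.e. convert a
 * timestamp counted in time base bq_num/bq_den into time base
 * cq_num/cq_den. Simplified: no rounding control or overflow guard. */
static int64_t rescale_q(int64_t a, int bq_num, int bq_den,
                         int cq_num, int cq_den)
{
    return a * bq_num * cq_den / ((int64_t)bq_den * cq_num);
}
```

For example, frame 10 in a 1/25 codec time base lands at tick 36000 in MPEG's 1/90000 time base.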
Hi List!
AVFilterInOut is a linked list that we can pass to
avfilter_graph_parse()/avfilter_graph_parse2(). A more interesting use case
is when we pass this list to avfilter_graph_parse(): as I understand it, I
must fill the fields with valid values,
so I have a question:
Does the filter context that is stored in 'filter_
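For reference, the fields are typically filled in the style of FFmpeg's doc/examples filtering code. This is a non-runnable configuration fragment, assuming buffersrc_ctx and buffersink_ctx were already created elsewhere; the exact avfilter_graph_parse() signature has varied across FFmpeg versions, so treat it as a sketch:

```c
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs  = avfilter_inout_alloc();

/* "in"/"out" are the labels the graph description string refers to. */
outputs->name       = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;   /* previously created buffer source */
outputs->pad_idx    = 0;
outputs->next       = NULL;

inputs->name        = av_strdup("out");
inputs->filter_ctx  = buffersink_ctx;  /* previously created buffer sink */
inputs->pad_idx     = 0;
inputs->next        = NULL;

int ret = avfilter_graph_parse(filter_graph, "scale=320:240",
                               &inputs, &outputs, NULL);
```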
Hi Vistas,
> I want to be able to open a video file on one end, encode it to h264 and
> then send it over RTP (custom library) to another end for decoding.
> I am not sure how exactly I am going to open the AVFormatContext and
> AVCodecContext on the receiving end. I am able to serialize AVPackets
> cor