On 9/21/2010 1:23 AM, Tomas Härdin wrote:
On Mon, 2010-09-20 at 18:30 -0400, Michael Chisholm wrote:
I have a transcoder app which uses libav*. Video comes in via UDP, and I want
to be more robust when the encoding can't keep up with the rate at which video
comes in. So I want to decouple the video demuxing/decoding from the encoding,
using multiple threads: one thread decodes frames, another thread encodes
frames. If another frame comes in while a frame is already waiting to be
encoded, the new frame is dropped. That seems better than just letting UDP drop packets at random, which can seriously mess with the video quality.
Unfortunately, a decoded frame is coupled to the decoder codec context: the
codec context owns the frame data. That means it can't continue to decode
frames without clobbering the one I want to save. I need a frame which can
exist independently of any codec context.
So what's the best way to duplicate a frame? The only way I can see is with
swscale, but I don't want to scale it, at least not yet. Just a simple fast
memcpy would do. Is there a function I've overlooked to do this?
Andy
How about av_picture_copy()? You should probably buffer them for a short time in a small FIFO to avoid dropping frames. Also, you might want
to look up how muxing variable framerate works. I explained how it can
be done in a thread here yesterday.
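Untested sketch of what I mean (dec_ctx would be your decoder context and src the frame the decoder just returned; error handling mostly left out):

#include <libavcodec/avcodec.h>

/* Duplicate a decoded frame into buffers you own, so the decoder is free
 * to overwrite its internal picture on the next avcodec_decode_video2(). */
static AVFrame *dup_frame(AVCodecContext *dec_ctx, const AVFrame *src)
{
    AVFrame *dst = avcodec_alloc_frame();
    if (!dst)
        return NULL;
    if (avpicture_alloc((AVPicture *)dst, dec_ctx->pix_fmt,
                        dec_ctx->width, dec_ctx->height) < 0) {
        av_free(dst);
        return NULL;
    }
    av_picture_copy((AVPicture *)dst, (const AVPicture *)src,
                    dec_ctx->pix_fmt, dec_ctx->width, dec_ctx->height);
    return dst; /* later: avpicture_free((AVPicture *)dst); av_free(dst); */
}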
/Tomas
That seems to do the trick so far, thanks! I hadn't noticed it before. If I
buffered frames, I'd still have to cap the size of the buffer and drop frames when it fills anyway. My thinking is that since this app could be transcoding a live video stream over many hours, it's probably more important for the video to stay current (if a little jumpy at times) than to lag behind while including every frame.
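For what it's worth, the hand-off I have in mind between the two threads is roughly this (just a sketch with my own naming; it assumes frames were duplicated with avpicture_alloc() as above, and mutex/cond init is left out):

#include <pthread.h>
#include <libavcodec/avcodec.h>

/* One-slot "mailbox" between the decode thread and the encode thread.
 * If a frame is already waiting when a new one arrives, the new one is
 * dropped, so the backlog never grows past one frame. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    AVFrame        *frame;   /* NULL when the slot is empty */
} FrameSlot;

static void slot_put(FrameSlot *s, AVFrame *f)   /* called by the decode thread */
{
    pthread_mutex_lock(&s->lock);
    if (s->frame) {           /* encoder hasn't caught up: drop the new frame */
        avpicture_free((AVPicture *)f);
        av_free(f);
    } else {
        s->frame = f;
        pthread_cond_signal(&s->ready);
    }
    pthread_mutex_unlock(&s->lock);
}

static AVFrame *slot_get(FrameSlot *s)           /* called by the encode thread */
{
    AVFrame *f;
    pthread_mutex_lock(&s->lock);
    while (!s->frame)
        pthread_cond_wait(&s->ready, &s->lock);
    f = s->frame;
    s->frame = NULL;
    pthread_mutex_unlock(&s->lock);
    return f;
}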
I wish I were more knowledgeable about this; timestamping is one thing that
seems to be a pain in the rear. Just about every little test-video I've looked
at has packet timestamps which are not monotonically increasing; they jump
around. And some codecs don't handle that well: x264 would just abort the program (though it's been a while since I last played with it). As I recall, I could never make it work using the packet timestamps; I just had to make up my own. I'm using the h.263 flv codec now, which seems nicer: I use the timestamp of the last packet that completed the frame. That is working... but it raises the question of what sense it makes to put timestamps on packets for frames anyway,
when a frame can span multiple packets and start/stop in the middle? ffmpeg
always handled it fine, but I was never quite able to discern what black magic
it did to handle disordered packet timestamps. I don't even know why we care
when a frame is decompressed! Why isn't it enough to tag a frame with the time
it should be displayed to a user?
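The "make up my own" approach was basically this (rough sketch; it assumes the encoder's time_base was set to 1/framerate when it was opened and that frames arrive at roughly that rate):

#include <libavcodec/avcodec.h>

/* Ignore the demuxer's packet timestamps entirely and stamp each outgoing
 * frame with a counter; with time_base = 1/framerate the encoder then sees
 * strictly increasing pts values. */
static int encode_frame(AVCodecContext *enc_ctx, AVFrame *frame,
                        uint8_t *outbuf, int outbuf_size)
{
    static int64_t frame_count = 0;

    frame->pts = frame_count++;   /* one time_base tick per frame */
    return avcodec_encode_video(enc_ctx, outbuf, outbuf_size, frame);
}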
I'm not using a muxer (in my transcoder). Raw encoded video frame data is
passed on to another server, where it and content from other sources are muxed into an RTMP stream. So if variable framerate is a trick of the container format or of an lavf muxer, it won't work for me.
Andy