Hi, I'm using the libavformat library to decode an MPEG-1 system-layer stream that I receive over the network through a custom protocol. To do this I've added a protocol handler that implements a circular buffer: my network receiving layer writes into the buffer, and libavformat's av_read_frame() call reads from it (roughly the setup sketched below).
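
For reference, a minimal sketch of that hookup. I've written it against the newer avio_alloc_context() API (my actual code uses the older ByteIOContext-era equivalent, init_put_byte()); RingBuf and ring_read() are simplified stand-ins for my real handler, with locking and blocking omitted:

#include <errno.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

/* Hypothetical ring buffer filled by the network thread.
   rd/wr are absolute counters; the buffer size is a power of
   two so the modulo indexing stays correct on wraparound.
   Synchronization with the writer is omitted in this sketch. */
typedef struct RingBuf {
    uint8_t data[1 << 20];
    size_t  rd, wr;
} RingBuf;

/* Read callback handed to avio_alloc_context(). */
static int ring_read(void *opaque, uint8_t *buf, int buf_size)
{
    RingBuf *rb = opaque;
    size_t avail = rb->wr - rb->rd;
    if (avail == 0)
        return AVERROR(EAGAIN);  /* a real handler should block
                                    until more data arrives */
    int n = (size_t)buf_size < avail ? buf_size : (int)avail;
    for (int i = 0; i < n; i++)
        buf[i] = rb->data[(rb->rd + i) % sizeof(rb->data)];
    rb->rd += n;
    return n;
}

AVFormatContext *open_ring_demuxer(RingBuf *rb)
{
    /* avio refills its internal buffer in chunks of this size;
       the library default is 32 KiB, which matches the read
       sizes I'm seeing. */
    int bufsize = 4096;
    unsigned char *iobuf = av_malloc(bufsize);
    AVIOContext *pb = avio_alloc_context(iobuf, bufsize,
                                         0 /* read-only */, rb,
                                         ring_read, NULL, NULL);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = pb;
    if (avformat_open_input(&fmt, "", NULL, NULL) < 0)
        return NULL;
    return fmt;
}
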
The problem I have is that av_read_frame() seems to introduce about half a second of delay (buffering, maybe?), and low latency is very important for this application. Is there some parameter to reduce the buffering libavformat does?

Adding debug output, I've seen that while I write into the buffer in 4096-byte chunks, av_read_frame() reads in 32 KiB chunks, and it doesn't give me the first video packet for avcodec_decode_video() (which always contains a full frame) until it has accumulated 64 KiB of data in its buffers. 64 KiB of a 1 Mbit/s stream is a lot of frames, and far more latency than I can allow (about half a second).

Are there parameters in AVFormatContext or ByteIOContext that I can use to reduce this effect, something like the sketch below? Reading the headers didn't help :(
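
To make the question concrete, this is the kind of knob I'm hoping exists. The fields below do appear in the current avformat headers (AVFMT_FLAG_NOBUFFER only in newer builds, and the values are just guesses), but I don't know whether they affect the steady-state delay or only the initial probing:

#include <libavformat/avformat.h>

/* Candidate latency knobs, set after avformat_alloc_context()
   and before avformat_open_input(). */
static void shrink_demux_latency(AVFormatContext *fmt)
{
    fmt->probesize            = 8192;   /* bytes fed to format probing */
    fmt->max_analyze_duration = 100000; /* cap on stream analysis, us */
#ifdef AVFMT_FLAG_NOBUFFER
    fmt->flags |= AVFMT_FLAG_NOBUFFER;  /* don't buffer packets while
                                           probing codec parameters */
#endif
}

Shrinking the avio buffer size (the bufsize argument in my first sketch) is the only thing I've found that changes the 32 KiB read granularity, but it doesn't seem to touch the 64 KiB accumulated before the first packet comes out.

--
Bye,
Gabry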
