>Paul Davis wrote:
>
>> JACK is specifically designed not to allow latency jitter. you can't
>> properly sync audio+video with latency jitter, because the software
>> cannot know for sure when the audio it generates will actually become
>> audible.
>
>Why not ?
>
>If B = buffer size, Fs = sample frequency, T = start time, and the
>buffer is pre-filled before starting, then the k-th sample will reach
the buffer may have been prefilled, but the problem arises once you've
missed some period deadlines. you no longer have a full buffer, and if
you refill it one period at a time, you don't know where in the buffer
each period you deliver will end up. you can instead ask how much total
space is available and fill all of it, but that requires either dynamic
buffer resizing inside the software, or software buffers sized to hold
the entire hardware buffer, which is very wasteful.

using the process-in-period-sized-blocks model, if you have missed
several period deadlines, then the "last" audio you deliver as you
finish catching up has a longer latency than the audio you processed at
the beginning. this is why JACK uses ASIO's model of 2 periods per
buffer by default, and under RT operation it doesn't allow you to miss
any deadlines.

--p
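to make the jitter concrete, here is a toy model (not the JACK API; all
names and numbers are hypothetical) of a hardware buffer holding
N_PERIODS periods. after `missed` missed deadlines, the software writes
that many periods back-to-back to refill the buffer; each one must wait
behind everything already queued, so the periods in the catch-up burst
come out with different latencies:

```python
# toy latency-accounting sketch -- hypothetical numbers, not JACK code.
PERIOD = 256       # frames per period
FS = 48000         # sample rate in Hz
N_PERIODS = 4      # hardware buffer = 4 periods

def catch_up_latencies(missed):
    """Return the output latency (seconds) of each period written while
    catching up after `missed` consecutive missed deadlines."""
    # each missed deadline drains one period from the full buffer,
    # so the catch-up burst starts with this much audio still queued:
    queued = N_PERIODS - missed
    assert queued > 0, "underrun: buffer drained completely"
    # the i-th period written plays only after (queued + i) earlier
    # periods have been consumed -- its latency grows with i.
    return [(queued + i) * PERIOD / FS for i in range(missed)]

# miss 2 deadlines: the first catch-up period is audible sooner than
# the last one, i.e. the software's effective latency jitters.
print(catch_up_latencies(2))
```

with 2 periods per buffer (the ASIO/JACK default) there is no room for
this spread: missing even one deadline in the model above would drain
the buffer entirely, which is why JACK under RT simply forbids missed
deadlines instead of accounting for them.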