On Wed, Oct 26, 2016 at 01:30:19PM +0200, Marcus Müller wrote:

> Now, these microsecond timestamps will introduce a /third/ clock into
> our problems. I can see how the control loop converges in case of
> that clock being both faster than your sampling clock and relatively
> well-behaved, but: is this an assumption we can generally make?
If I understand this correctly, you say that the resolution of the
timer should be better than the sample period? This is not required.
The timer is read whenever a _block_ of samples is handled at either
side. For audio a typical block size is 256 samples, 5.333 ms at 48
kHz, or more than 5000 clock ticks. Round-off error is small compared
to timing jitter, and will be filtered by the DLL anyway. It doesn't
have any cumulative effect. The actual frequency of the clock used to
measure time doesn't matter as long as it has reasonable short-term
stability (and both sides use the same clock of course).

> Let's first just focus on the Audio part (I personally think matching
> a 100MS/s $\pm$ 2ppm stream to a whatever 31.42MS/s $\pm$ 20ppb
> stream with a clock that has microsecond resolution and more ppms is
> out of question):

No, it would be possible, there is no need to time individual samples.

> Hm, OK. So you get a $\hat t$ time estimate. Wow! Third loop of
> control!

Yes, there are three loops: a DLL on either side, and a control loop
that drives the resampler. But they are not nested, so this won't
affect stability. In theory all filtering could be done by the latter
loop, and the DLLs would not be necessary. But there are practical
reasons for having them: they provide a layer of abstraction, which

- simplifies the design of the resampling control loop,
- simplifies error detection and graceful recovery.

> Do you have any resources on that? How is that cycle start time
> prediction (which is a sampling rate estimator, inherently) realized?

In pseudo-C:

    while (true)
    {
        wait_for_start_of_next_period ();
        er = time_now () - t1;
        t0 = t1;
        t1 += dt + w1 * er;
        dt += w2 * er;
    }

where

    t0     = filtered start time of the current period (= previous t1),
    t1     = predicted start time of the next period,
    dt     = current estimate of the period time,
    w1, w2 = filter coefficients.

wait_for_start_of_next_period() is a call to the sound card driver.
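As an aside, the loop above is easy to simulate. The sketch below is my
own code, not something from this thread: the gains (w1 = 0.1, w2 = 0.01)
and the assumed +/-100 us of wakeup jitter are purely illustrative. It
shows dt converging to the true period despite the jitter:

```python
# Simulation sketch of the DLL pseudo-C above. Gains and jitter figures
# are illustrative assumptions, not values recommended in this thread.
import random

random.seed(0)

true_dt = 256 / 48000.0      # true period: 256 frames at 48 kHz
jitter = 100e-6              # assumed +/-100 us of wakeup jitter

w1, w2 = 0.1, 0.01           # filter coefficients (illustrative)

now = 0.0                    # true time of the current period boundary
t1 = 0.0                     # predicted start time of the next period
dt = 256 / 44100.0           # deliberately wrong initial estimate

for _ in range(5000):
    now += true_dt                               # hardware period boundary
    observed = now + random.uniform(-jitter, jitter)
    er = observed - t1                           # timing error
    t0 = t1                                      # filtered start of this period
    t1 += dt + w1 * er                           # predict the next start
    dt += w2 * er                                # refine the period estimate

print(dt, true_dt)
```

After the transient dies out, dt tracks the true period to a small
fraction of the per-block jitter, which is the point made above about
timer resolution: the DLL averages over many blocks.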
It returns when there is a full buffer of samples available to be read
and written. On some systems you don't have the loop and wait() but
provide a callback instead.

The code above assumes a constant number of samples per iteration. If
that's not the case things get a little more complicated - the actual
number of samples in each block needs to be taken into account - but
not fundamentally different.

> I think it'll be a little unlikely to implement this as a block that
> you drop in somewhere in your flow graph.

In theory it would be possible. The requirement then (assuming RF in
and audio out) is that everything upstream is directly or indirectly
triggered by the RF clock, and everything downstream by the audio
clock. Don't know if that's possible in GR (still have to discover the
internals).

> it has to be done directly inside the audio sink.

That would probably be the best solution. So you'd have a fixed
decimation block somewhere, producing a nominal audio sample rate, and
the sink takes care of resampling that to the actual one.

> The reason simply is that unlike audio architectures, and especially
> the low-latency Jack arch, GNU Radio doesn't depend on fixed sample
> packet sizes, and as an effect of that, you're very likely to see
> very jumpy throughput scenarios.

The only assumption for this to work is that there is no 'choking
point', i.e. all modules are fast enough to keep up with the signal.
Then what matters is over how much time the stream of sample blocks
delivered to the resampler must be observed to get a reliable estimate
of the average sample rate. The most important parameter if blocks
have variable size and irregular timing is the maximum time between
two blocks. This will determine both the amount of buffering required
and the DLL loop bandwidth.
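For the variable-block-size case mentioned above, one possible
generalization (my own sketch, not code from this thread) is to
estimate the period *per sample* and scale each prediction by the size
of the block that actually arrived:

```python
# Sketch of a DLL variant for variable block sizes: estimate the
# per-sample period ps, and scale the prediction by each block's size.
# Gains, block sizes and jitter are illustrative assumptions.
import random

random.seed(1)

true_ps = 1.0 / 48000.0      # true per-sample period (48 kHz)
jitter = 100e-6              # assumed wakeup jitter

w1, w2 = 0.1, 0.01           # filter coefficients (illustrative)

now = 0.0                    # true time of the last block boundary
t0 = 0.0                     # filtered time of the last block boundary
ps = 1.0 / 44100.0           # deliberately wrong initial estimate

for _ in range(5000):
    n = random.choice((128, 256, 384, 512))      # this block's size
    now += n * true_ps                           # true arrival time
    observed = now + random.uniform(-jitter, jitter)
    pred = t0 + n * ps                           # expected arrival time
    er = observed - pred                         # timing error
    t0 = pred + w1 * er                          # filtered block time
    ps += w2 * er / n                            # per-sample period update

print(1.0 / ps)
```

Dividing the period correction by n keeps the per-block loop gain
roughly independent of the block size, so the loop dynamics stay close
to those of the fixed-size version.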
> The problem gets even worse if the output buffer of the
> rate-correction block isn't directly coupled to the consuming (audio)
> clock – if there's nondeterministic error introduced at the $\hat W$
> estimation, the control loop Fons showed is likely to break down at
> some point.

Not if things are correctly dimensioned. The whole control system is
symmetric w.r.t. the two sides, so it can tolerate jitter from both of
them. But normally one end will be close to the audio HW. The only
consequence of having no direct coupling is that the _average_ error
resulting from this is not corrected. This only means you don't have a
defined latency.

> So in this case, the throughput-optimizing architecture of GNU Radio
> is in conflict with the wish for a good delay estimator

Not having constant-rate and constant-size blocks does not
fundamentally change anything. The variability just must be taken into
account when dimensioning the buffers and loops. You get the same
situation when one side is not some local hardware but e.g. a network
stream.

> In practice, the "best" clock in most GNU Radio flow graphs attached
> to an SDR receiver is the clock of the SDR receiver (RTL dongles
> notwithstanding); if we had a way of measuring other clocks,
> especially CPU time and audio time, using the sample rate coming out
> of these devices, that'd be rather handy for all kinds of open-loop
> resampling.

Open loop doesn't work. No matter how accurate your frequency ratio
estimation, any remaining error is integrated. You need _some_ form of
feedback to avoid that. Which will lead you back to something similar
to the presented scheme.

Ciao,

-- 
FA

A world of exhaustive, reliable metadata would be an utopia. It's also
a pipe-dream, founded on self-delusion, nerd hubris and hysterically
inflated market opportunities. (Cory Doctorow)

_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
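P.S. The "open loop doesn't work" point is easy to demonstrate
numerically. The sketch below is my own, with purely illustrative
numbers (a 2 ppm residual ratio error, a made-up feedback gain k): left
open loop, the error integrates without bound, while even a simple
proportional feedback on the buffer fill keeps it bounded:

```python
# Open-loop vs. feedback resampling, illustrative numbers only:
# a 2 ppm residual ratio error integrates forever without feedback.
true_ratio = 1.0 + 2e-6      # actual rate mismatch (assumed 2 ppm)
est_ratio = 1.0              # open-loop estimate, off by 2 ppm

n = 256                      # samples per block
k = 1e-3                     # feedback gain (illustrative)

open_drift = 0.0             # accumulated fill error, open loop (samples)
fb_drift = 0.0               # same, with proportional feedback

for _ in range(100000):      # roughly 9 minutes of 48 kHz audio
    open_drift += n * (true_ratio - est_ratio)   # error just integrates
    fb_ratio = est_ratio + k * fb_drift          # steer from fill error
    fb_drift += n * (true_ratio - fb_ratio)      # stays bounded

print(open_drift, fb_drift)
```

The open-loop drift grows linearly (dozens of samples already after a
few minutes, so any finite buffer eventually under- or overruns), while
the feedback version settles at a small constant offset - which is also
why proportional-only correction leaves the average error uncorrected,
as noted above.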