On Fri, Mar 23, 2012 at 11:49:19AM +0100, Terje Mathisen wrote:
> unruh wrote:
> >No I would not. That is not what ntpd does. It really does throw away 7
> >of the samples and never uses them. The whole question is what is the
> >best statistic to use. I do not believe that the "shortest roundtrip
> >time" is that best statistic. If you could convince me it is, I would be
> >more than happy to have ntp use it.
> 
> In _some_ scenarios, keeping only the minimum-RTT sample is indeed
> the best approach:

Yes, it depends on the network jitter and the clock stability. But
ntpd doesn't try to estimate the stability; it uses a fixed
dispersion rate and Allan intercept in the filter algorithm (15 ppm
and 1024 seconds by default). By tweaking those constants you can
change the fraction of samples that get dropped.
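
Roughly, the selection the filter makes boils down to the sketch
below. This is my simplification, not the actual ntpd code (the real
clock_filter() also folds in precision and the samples' own
dispersion), but it shows how the fixed rate PHI trades extra delay
against sample age:

#include <stddef.h>

#define NSTAGE 8     /* samples kept in the filter register */
#define PHI    15e-6 /* fixed dispersion rate, 15 ppm by default */

struct sample {
    double delay;    /* round-trip delay (s) */
    double age;      /* seconds since the sample was taken (s) */
};

/* Pick the sample with the lowest delay/age score; an old
 * low-delay sample loses only once PHI * age outgrows its
 * delay advantage. */
static size_t pick_sample(const struct sample s[NSTAGE])
{
    size_t best = 0;
    for (size_t i = 1; i < NSTAGE; i++) {
        if (s[i].delay / 2 + PHI * s[i].age <
            s[best].delay / 2 + PHI * s[best].age)
            best = i;
    }
    return best;
}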

But I think a much bigger problem with the clock filter and PLL
combination is that it can't drop more than 7 samples in a row. When
the network is saturated, it's usually better to drop many more than
that. If the increase in delay is 1 second and the clock is good to
10 ppm, the filter could safely wait for days before accepting
another sample.
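
A back-of-envelope check (my numbers, just to illustrate the scale):

#include <stdio.h>

int main(void)
{
    double extra_delay = 1.0;   /* increase in round-trip delay (s) */
    double stability   = 10e-6; /* clock good to 10 ppm */

    /* Accepting the delayed sample risks up to half the extra
     * delay as offset error; freewheeling accumulates error at
     * the stability rate.  Break-even is where the two meet. */
    double wait = (extra_delay / 2) / stability;

    printf("break-even wait: %.0f s (~%.1f hours)\n",
           wait, wait / 3600.0);   /* 50000 s, ~13.9 hours */
    return 0;
}

So waiting a day or more can beat taking the delayed sample, while
the clock filter gives up after dropping 7.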

> In order to be considered OK, we can't accept more than 50 ppb
> frequency offset.
> 
> Handling this with up to 50 ms sawtooth variation (with periods up
> to several hours) in the one-way latency means that the vendor
> requires sampling periods of up to 10+ hours, with multiple
> packets/second and then keeping a single packet at the end.

That seems excessive. Do they set the frequency directly just from
the last two samples? With a PLL or similar, increasing the time
constant accordingly might be a better approach.
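
For what it's worth, the frequency term in a PLL scales with the
inverse square of the time constant, so a longer time constant
directly damps how hard a slow sawtooth pulls on the frequency
estimate. A minimal sketch, loosely modeled on the NTP discipline
(the gains are illustrative, not ntpd's exact values):

struct pll {
    double freq; /* frequency correction (dimensionless, e.g. 50e-9) */
    double tc;   /* loop time constant (s) */
};

/* Feed one offset measurement taken 'interval' seconds after the
 * previous one; returns the phase step for the caller to apply.
 * Doubling tc quarters the frequency kick from each sample. */
static double pll_update(struct pll *p, double offset, double interval)
{
    p->freq += offset * interval / (p->tc * p->tc);
    return offset * interval / p->tc;
}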

-- 
Miroslav Lichvar