William Unruh wrote:
On 2015-02-11, Harlan Stenn <st...@ntp.org> wrote:
William Unruh writes:
On 2015-02-10, Terje Mathisen <terje.mathi...@tmsw.no> wrote:
And as far as I can see, 500 or 5000 makes little
difference to the control loop. Yes, it is harder for other systems to
follow one with a large drift, but it is even harder to follow one that
jumps, which is what we get now.

So what's the difference between following a jump and following a
system that applies changes faster than the 500ppm that NTP is designed
for?

And given that reality bites, what are the tradeoffs between re-synching
after a step and slowly dealing with a known offset error?

We're talking about choices, and the different effects of these choices.

It's one thing if a system rarely steps.  It's a bit different if those
steps happen more frequently.

Yes. And it is equally rare that a system will go over 500 PPM, but
sometimes a computer can have a large "natural" drift (even over
500 PPM), and that drastically reduces the "headroom" available to deal
with unusual situations. (I.e., if the computer's normal drift is
400 PPM, the effective cap is only 100 PPM, not 500.)
Stepping is much worse than a high PPM, since it amounts to infinite PPM.
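The headroom arithmetic above can be sketched as follows (my own illustration, not anything from the ntpd source; the 500 ppm constant is ntpd's documented slew limit):

```python
# ntpd's frequency-correction limit: the discipline will not slew
# the clock faster than this.
MAX_SLEW_PPM = 500

def headroom(natural_drift_ppm):
    """Correction margin left after compensating the clock's
    intrinsic (natural) drift."""
    return MAX_SLEW_PPM - abs(natural_drift_ppm)

print(headroom(400))  # a 400 ppm clock leaves only 100 ppm of margin
print(headroom(520))  # negative: ntpd cannot even hold this clock steady
```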

Note that if ntpd were designed for 5000 PPM, then anything else could
follow it, since the followers could also slew at up to 5000 PPM.

Yes, we are talking about choices. And all I was saying was that this
particular choice was somewhat arbitrary.

What you seem to disregard is the need for client computers to track a server when that server (or some other upstream source) is slewing at more than 500 ppm. (Alternatively, when something happens on the client itself that makes the local clock start drifting away at a high rate.)

Let's take your 5000 ppm as a starting point: independent of the polling interval, it will take multiple polls to realize that the server has started to slew, right?

At 5000 ppm we are adjusting by 5 ms every second, which corresponds to over 5 seconds across a single 1024 s polling period, i.e. far more than the 128 ms maximum (step) threshold.

In fact, even a 500 ppm frequency delta means 512 ms over that single poll, so you cannot track any fast changes in frequency if your clock has currently stabilized at 1024 s polls.

With the default minimum poll of 64 s, however, 500 ppm results in just 32 ms of offset per interval, which means you have 4 full polling periods to realize that you have to adjust the local clock, and thereby avoid the 128 ms limit.
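The offset arithmetic in the last few paragraphs is easy to check directly. A minimal sketch (my own, using the constants named in the post: the 128 ms step threshold and the two poll intervals):

```python
# ntpd's step threshold: offsets beyond this get stepped, not slewed.
STEP_THRESHOLD_S = 0.128

def offset_per_poll(freq_error_ppm, poll_interval_s):
    """Offset (in seconds) accumulated over one polling interval
    at a given frequency error."""
    return freq_error_ppm * 1e-6 * poll_interval_s

print(offset_per_poll(5000, 1024))  # 5.12 s  -> far beyond 128 ms
print(offset_per_poll(500, 1024))   # 0.512 s -> still forces a step
print(offset_per_poll(500, 64))     # 0.032 s -> 4 polls before 128 ms
```

The last line shows why 64 s, 500 ppm, and 128 ms fit together: at minimum poll, a client gets four full polling periods of warning before the step threshold is reached.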

It should by now be obvious that the numbers 64 sec, 500 ppm, half of the last 8 samples and 128 ms are all very closely related!

Yes, it is possible to slew much faster, but only if you start gradually enough that all clients realize the dispersion is increasing and reduce their polling interval, and only if you spend at least 4 polling periods for each 500 ppm of frequency adjustment you want to apply.
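That ramp constraint implies a minimum time to reach a large slew rate. A back-of-the-envelope sketch (my own arithmetic, not an ntpd algorithm; assumes 4 polls of 64 s per 500 ppm increment, as described above):

```python
def min_ramp_time_s(target_ppm, poll_s=64, step_ppm=500, polls_per_step=4):
    """Minimum time to ramp up to target_ppm if each 500 ppm of
    frequency change must be spread over at least 4 polling periods."""
    steps = -(-target_ppm // step_ppm)  # ceiling division
    return steps * polls_per_step * poll_s

print(min_ramp_time_s(5000))  # 10 increments * 4 polls * 64 s = 2560 s
```

So even a cooperative client population needs on the order of 40+ minutes before a 5000 ppm slew can be followed without steps.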

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

_______________________________________________
questions mailing list
questions@lists.ntp.org
http://lists.ntp.org/listinfo/questions