"David L. Mills" <[EMAIL PROTECTED]> writes:

>Danny,

>True; there is an old RFC or IEN that reports the results with varying 
>numbers of clock filter stages, from which the number eight came out 
>best. Keep in mind these experiments were long ago and with, as I 
>remember, ARPAnet sources. The choice might be different today, but 
>probably would not result in great improvement in the general case. Note 
>however that the popcorn spike suppressor is a very real Internet add-on.

Oh yes, popcorn suppression is important; I agree. But the filter goes well
beyond that. My reaction is that on the one hand people keep saying how
important net load is, and that one does not want to use poll intervals
much smaller than 8 or 10 (i.e., 256 to 1024 s), and on the other hand,
80-90% of the data collected is thrown away (one accepted sample in eight
means 87.5% discarded). Reminds me of the story of Saul, king of the
Israelites, whose army was besieged, and he mentioned that he was thirsty.
A few of his soldiers risked everything to get through the enemy lines and
bring him water. He was so impressed that he poured it all out on the
ground, in tribute to their courage. I have always found that story an
incredible insult to their bravery rather than a tribute.

The procedure does drastically reduce the variance of the delay, but does
little for the variance of the offset, which is of course what is
important. Just to bring up chrony again: it both discards samples whose
round-trip delay exceeds, say, 1.5 times the observed minimum, and weights
the remaining data by some power of the inverse of the delay.
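
To make that concrete, here is a rough sketch (in Python, and emphatically
not chrony's actual code) of that kind of discard-and-weight scheme; the
1.5 cutoff and the square-law weighting below are illustrative assumptions
only:

# Sketch of delay-based filtering and weighting as described above.
# Not chrony's implementation; the cutoff factor (1.5) and the weight
# exponent (2) are assumptions for illustration.

def weighted_offset(samples, cutoff=1.5, power=2.0):
    """samples: list of (offset, round_trip_delay) pairs from recent polls."""
    min_delay = min(d for _, d in samples)
    # Discard samples whose round trip exceeds cutoff * minimum delay.
    kept = [(o, d) for o, d in samples if d <= cutoff * min_delay]
    # Weight the survivors by an inverse power of their delay.
    weights = [1.0 / d ** power for _, d in kept]
    return sum(o * w for (o, _), w in zip(kept, weights)) / sum(weights)

# The 200 ms sample is discarded; the rest are combined, with the
# lowest-delay measurements counting the most.
print(weighted_offset([(0.0012, 0.020), (0.0015, 0.022),
                       (0.0030, 0.200), (0.0010, 0.025)]))

The point being that a mildly delayed sample still contributes, just with
reduced weight, instead of being discarded outright.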


>The number of stages may have unforeseen consequences. The filter can 
>(and often does) introduce additional delay in the feedback loop. The 
>loop time constant takes this into account so the impulse response is 
>only marginally affected. So, the loop is really engineered for good 
>response with one accepted sample in eight. Audio buffs will recognize 
>that any additional samples only improve the response, since they amount 
>to oversampling the signal. Audio buffs will also recognize the need for 
>zeal in avoiding undersampling, which is why the poll-adjust algorithm 
>is so squirrely.

>Dave

>Danny Mayer wrote:

>> Unruh wrote:
>> 
>>>[EMAIL PROTECTED] (Danny Mayer) writes:
>>>
>>>
>>>>Unruh wrote:
>>>>
>>>>>Brian Utterback <[EMAIL PROTECTED]> writes:
>>>>>
>>>>>
>>>>>>Unruh wrote:
>>>>>>
>>>>>>>"David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>>
>>>>>>>>You might not have noticed a couple of crucial issues in the clock 
>>>>>>>>filter code.
>>>>>>>
>>>>>>>I did notice them all. Thus my caveat. However, throwing away 80% of the
>>>>>>>precious data you have seems excessive.
>>>>>
>>>>>Note that the situation can arise where one can wait many more than 8
>>>>>samples for another one. Say sample i is a good one and remains the best
>>>>>for the next 7 tries. Sample i+7 is slightly worse than sample i and thus
>>>>>it is not picked as it comes in. But the samples after that are all worse
>>>>>than it. Thus it remains the filtered one, but is never used because it
>>>>>was not the best when it came in. This situation could keep going for a
>>>>>long time, meaning that ntp suddenly has no data to do anything with for
>>>>>many, many poll intervals. Surely using sample i+7 is far better than not
>>>>>using any data for that length of time.
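
To illustrate the scenario described above, here is a simplified sketch
(Python; not ntpd's actual clock_filter(), which also factors dispersion
and sample age into its choice) of a filter that selects the minimum-delay
sample of the last eight and uses it only if it is newer than the last
sample handed to the loop:

from collections import deque

window = deque(maxlen=8)   # the eight clock filter stages
last_used_epoch = -1

def poll(epoch, offset, delay):
    """Append a new sample; return the sample actually used, or None."""
    global last_used_epoch
    window.append((epoch, offset, delay))
    best = min(window, key=lambda s: s[2])   # lowest round-trip delay wins
    if best[0] > last_used_epoch:            # only if newer than last used
        last_used_epoch = best[0]
        return best
    return None

# Sample 0 has the lowest delay; the following polls are all a bit worse,
# so for seven polls in a row the filter keeps re-selecting the already
# used sample 0 and hands nothing new to the loop.
delays = [10, 12, 13, 12, 14, 13, 12, 11, 15, 16, 15]
for epoch, d in enumerate(delays):
    print(epoch, poll(epoch, 0.0, d))

In this run only two of the eleven polls deliver anything new to the loop
(and what arrives at epoch 8 is the sample from epoch 7, one poll late);
the rest are collected and then effectively dropped.
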
>>>
>>>>On the contrary, it's better not to use the data at all if it's suspect. 
>>>>ntpd is designed to continue to work well even in the event of losing 
>>>>all access to external sources for extended periods.
>>>
>>>>>And this could happen again. Now, since the
>>>>>delays are presumably random variables, the chances of this happening are
>>>>>not great (although under conditions of a gradually worsening network the
>>>>>chances are not that small), but since one is running ntp for millions or
>>>>>billions of samples, the chances of this happening sometime become large.
>>>>>
>>>
>>>>There are quite a few ntpd servers which are isolated and once an hour 
>>>>use ACTS to fetch good time samples. This is not rare at all.
>>>
>>>And then promptly throw them away because they do not satisfy the minimum
>>>condition? No, it is not "best" to throw away data no matter how suspect.
>>>Data is a precious commodity and should be thrown away only if you are damn
>>>sure it cannot help you. For example, let's say that the change in delay is
>>>0.1 of the variance of the clock. The maximum extra noise that delay can
>>>cause is about 0.01, yet NTP will chuck it. Now if the delay is 100 times
>>>the variance, sure, chuck it; it probably cannot help you. The delay is a
>>>random process, non-gaussian admittedly, and its effect on the time is also
>>>a random process-- usually much closer to gaussian. And why was the figure
>>>of 8 chosen (the best of the last 8 tries)? Why not 10000, or 3? I suspect
>>>it came off the top of someone's head-- let's not throw away too much
>>>stuff, since it would make ntp unusable, but let's throw away some to feel
>>>virtuous. Sorry for being sarcastic, but I would really like to know what
>>>the justification was for throwing so much data away.
>> 
>> 
>> No, 8 was chosen after a lot of experimentation to ensure the best 
>> results over a wide range of configurations. Dave has adjusted these 
>> numbers over the years and he's the person to ask.
>> 
>> Danny
