On 2012-03-23, Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
> unruh wrote:
>> No, I would not. That is not what ntpd does. It really does throw away
>> 7 of the 8 samples and never uses them. The whole question is what is
>> the best statistic to use. I do not believe that the "shortest
>> roundtrip time" is the best statistic. If you could convince me it is,
>> I would be more than happy to have ntp use it.
>
> In _some_ scenarios, keeping only the minimum-RTT sample is indeed the 
> best approach:
>
> I have been working for a couple of years on the new cell phone network 
> in Norway (we've replaced everything, including every single base station!).
>
> Even if GSM and related cell phone standards do not require the same 
> absolute timing precision as some of those used in the US, there is 
> still a requirement for a _very_ stable frequency base in order to do 
> transparent handover from one base station to the next, and the 
> relative offset determines the maximum Doppler that can be handled.
>
> In order to be considered OK, we can't accept more than 50 ppb frequency 
> offset.
>
> Handling this with up to 50 ms of sawtooth variation (with periods of 
> up to several hours) in the one-way latency means that the vendor 
> requires sampling periods of 10+ hours, with multiple packets/second, 
> and then keeping a single packet at the end.
>
> Of course, the main requirement is to start with a _very_ stable time 
> base, in this case double-oven OCXOs with daily drift rates in the 
> fractional ppb range!
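
(If I follow, that amounts to a minimum filter over a very long
window. Roughly, in Python, something like the sketch below; the
window length and packet rate are illustrative guesses on my part,
not your vendor's actual numbers.)

# Toy sketch of long-window minimum-delay filtering: collect
# (delay, offset) samples for hours, then keep only the sample
# with the smallest round-trip delay.
def min_delay_sample(samples):
    """samples: list of (roundtrip_delay, measured_offset) tuples."""
    return min(samples, key=lambda s: s[0])

# e.g. 10 hours at 2 packets/second gives 72000 candidate samples,
# of which exactly one survives (rates here are made up).
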
>>
>> IF the roundtrip times were to vary by factors of 2 from one instance
>> to the next, I might be persuaded that it was the best statistic. But
>> they do not in almost all cases where ntpd is used. They vary by a few
>> percent (with maybe an occasional blip with larger delays). I have
>> huge reams of data to support my statement.
>
> So do I, and I would have agreed with you a month ago, but I have gotten 
> actual measurement data from the base station vendor showing really 
> _huge_ packet-to-packet jitter values.

In your case, I might well be persuaded that throwing away data is the
best way of handling the errors. But it would still leave me
uncomfortable. Data is precious, and in the case of timing it is
unrepeatable. My prejudice would be that surely there is some way of
extracting more from the data than just that one least-delay packet.

For example, in ntpd, if the extra delay occurs on one path or the
other, then instead of using the mean as the best estimate of the
remote clock time, one could assume that the increase in travel time
lies entirely on one path and, by looking at the offset, get a pretty
good idea of which path it is on (see the offset-vs-delay plots in
Mills's book, where you can see these "wings" in which the change in
travel time is due to only one of the paths). That is information you
could use to improve the time estimates. It would, of course, make for
a much more complex clock filter than the simple "throw away
everything but the shortest roundtrip".

Recall that averaging 8 data points reduces the statistical error by
about a factor of 3 (sqrt(8) is about 2.8) compared with using a
single point. If the extra scatter introduced by roundtrip variation
is less than about that same factor of 3, then the advantage of the
greater number of points can outweigh the increased noise due to
roundtrip variation.
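
A quick numerical check of that factor (a toy simulation under an
assumed symmetric Gaussian noise model, not measured data):

# Toy check: spread of the mean of 8 noisy offset samples vs. the
# spread of a single sample; the ratio should be sqrt(8) ~= 2.83.
import random
import statistics

def mean_of_8(sigma=1.0):
    return statistics.mean(random.gauss(0.0, sigma) for _ in range(8))

means = [mean_of_8() for _ in range(10000)]
singles = [random.gauss(0.0, 1.0) for _ in range(10000)]
print(statistics.stdev(singles) / statistics.stdev(means))  # ~2.8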


>
> Terje
> - <Terje.Mathisen at tmsw.no>
> "almost all programming can be viewed as an exercise in caching"
