Bruce <[EMAIL PROTECTED]> writes:

>The only thing that you can conclude from the referenced paper is that 
>for very short measurement intervals, order of a second, the noise 
>process is white phase.  Obviously a simple average is optimal to 
>determine the time from a group of such closely spaced measurements.

It is not the time that is of interest in this discussion, it is the
frequency estimate. The frequency is estimated from the time measurements.
Mills claimed that there was no advantage to doing a least squares fit to
the sequence of time measurements, and that using only the end points was
always as good as a fit. I disputed that, and stated that IF the noise in
the time measurements is independent Gaussian noise, fitting is always
better. If the noise is, say, 1/f noise (i.e. the interval is longer than
the Allan minimum), then I agree there is no advantage, but then using the
endpoints is also bad: although you get a longer lever arm, you also get
more noise.
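
To make the white phase noise case concrete, here is a quick Monte Carlo
sketch of my own (the parameter values are made up and the names are just
illustrative). It compares three estimators of the frequency from n noisy
phase measurements: the two endpoints alone, a least squares fit to
uniformly spaced samples, and averages of n/2 samples clustered at each
end. Under independent Gaussian noise the least squares fit beats the bare
endpoints by about sqrt(n/6), and clustering at the ends does a further
factor of about sqrt(3) better than the uniform fit:

    # Monte Carlo comparison of frequency estimators under white phase noise.
    # All numbers here are illustrative assumptions, not anyone's real data.
    import numpy as np

    rng = np.random.default_rng(0)

    n      = 64        # number of phase measurements
    T      = 1024.0    # total measurement span, seconds
    delta  = 1e-3      # std dev of each phase measurement, seconds
    f_true = 5e-6      # true fractional frequency error (slope)
    trials = 20000

    t_uniform = np.linspace(0.0, T, n)
    err_end, err_ls, err_cluster = [], [], []

    for _ in range(trials):
        y = f_true * t_uniform + rng.normal(0.0, delta, n)

        f_end = (y[-1] - y[0]) / T              # first and last points only
        f_ls  = np.polyfit(t_uniform, y, 1)[0]  # least squares slope, all n points

        # n/2 fresh samples clustered at each end of the interval
        y0 = rng.normal(0.0, delta, n // 2)            # true phase 0 at t = 0
        y1 = f_true * T + rng.normal(0.0, delta, n // 2)
        f_cluster = (y1.mean() - y0.mean()) / T

        err_end.append(f_end - f_true)
        err_ls.append(f_ls - f_true)
        err_cluster.append(f_cluster - f_true)

    print("endpoints only :", np.std(err_end),     " ~ sqrt(2)    delta/T =", np.sqrt(2) * delta / T)
    print("uniform LS     :", np.std(err_ls),      " ~ sqrt(12/n) delta/T =", np.sqrt(12 / n) * delta / T)
    print("cluster at ends:", np.std(err_cluster), " ~ sqrt(4/n)  delta/T =", np.sqrt(4 / n) * delta / T)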


>The paper then describes an algorithm to adapt to the decidedly 
>non-stationary measurement noise and local oscillator noise processes as 
>the averaging time is increased.  Flicker and random walk of frequency 
>are mentioned, as well as white frequency.  However he attributes this 
>to the crystal oscillator, which I believe is a mistake.  Crystal 
>oscillators do not exhibit white frequency noise over longer averaging 
>times.  If they did, we probably would not need Rubidium oscillators.

They do, but the flicker and random walk dominate once the averaging time
is long enough.
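
A rough illustration (my own simulation, with arbitrary noise amplitudes;
flicker is a pain to generate so I only mix white FM with random-walk FM):
the Allan deviation falls as 1/sqrt(tau) while the white frequency noise
dominates, then turns around and grows once the random walk takes over,
even though the white component never went away.

    # Simulated fractional frequency: white FM plus random-walk FM.
    # Amplitudes are arbitrary assumptions, chosen only to show the crossover.
    import numpy as np

    rng  = np.random.default_rng(1)
    tau0 = 1.0                                        # sample spacing, seconds
    npts = 2**18
    white = 1e-11 * rng.normal(size=npts)             # white frequency noise
    rwalk = 1e-14 * np.cumsum(rng.normal(size=npts))  # random-walk frequency noise
    y = white + rwalk                                 # fractional frequency samples

    def adev(y, m):
        """Non-overlapping Allan deviation at averaging time m*tau0."""
        ybar = y[: (len(y) // m) * m].reshape(-1, m).mean(axis=1)
        d = np.diff(ybar)
        return np.sqrt(0.5 * np.mean(d * d))

    for m in (1, 4, 16, 64, 256, 1024, 4096, 16384):
        print(f"tau = {m * tau0:8.0f} s   ADEV = {adev(y, m):.3e}")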


>All of these multiple noise processes operating simultaneously and 
>completely uncorrelated with each other are what make the precision time 
>transfer and frequency control art so interesting and allow so many 
>different ideas for how to best set the clock for the particular 
>application.

What is interesting about his comments is that making a whole sequence of
very rapid time measurements followed by long periods of quiet is better
than spacing the measurements equally. However, this would almost certainly
trigger the Kiss of Death and get the servers very upset with you. (E.g.,
he would claim that making 10 measurements within 2 seconds and then
waiting a day is better than making one every couple of hours.) But if a
server running ntpd were hit with 10 rapid-fire measurements from one
machine, would it complain?
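
A back-of-envelope version of that comparison, using the closed-form
white-noise results (my numbers, not Levine's, and the two plans do not use
exactly the same number of measurements, so take it as illustrating the
lever-arm effect rather than as a fair fight):

    # Burst sampling vs. equally spaced sampling for one day of data.
    # delta and the schedules below are assumptions for illustration only.
    import numpy as np

    delta = 1e-3              # per-measurement phase noise, seconds
    T     = 86400.0           # one day between bursts

    # Plan A: 10 rapid-fire measurements now and 10 more a day later.
    # Frequency error = sqrt(2) * (delta / sqrt(n_burst)) / T
    n_burst = 10
    sigma_burst = np.sqrt(2.0 / n_burst) * delta / T

    # Plan B: one measurement every two hours, least squares fit over the day.
    # Frequency error = delta / sqrt( sum (t_i - tbar)^2 )
    t = np.arange(0.0, T + 1.0, 7200.0)
    sigma_uniform = delta / np.sqrt(np.sum((t - t.mean()) ** 2))

    print("burst plan  :", sigma_burst)
    print("uniform plan:", sigma_uniform)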



>Bruce

>Unruh wrote:
>> Bruce <[EMAIL PROTECTED]> writes:
>> 
>>> My understanding is that least squares is optimal only when the 
>>> residuals are white.  For measurements of atomic frequency standards 
>> 
>> Yes, and I clearly made that assumption. And the assumption is generally
>> true for short enough time intervals. If the network delays are really,
>> really short, or if the time over which you are determining the frequency
>> is very long, then that is a bad assumption.
>> 
>> 
>>> such as Rubidium or Cesium, the noise process is dominated by white 
>>> frequency noise, and in this case linear regression yields the optimal 
>>> estimate of frequency.  A little snooping around on the NIST web site 
>>> will provide the relevant backup info.
>> 
>>> For so many other precision timing and frequency applications, the noise 
>>> processes are decidedly un-white.  David Allan developed his famous 
>>> two-sample variance to handle these divergent, non-stationary noise 
>>> processes.  For instance, quartz oscillators are dominated by flicker 
>>> frequency noise for averaging times greater than about 100 milliseconds, 
>>> and eventually turn to random walk at longer averaging times.  Selective 
>>> Availability of GPS (when it was in effect) was a white phase noise 
>>> process that modulated the time transfer for un-keyed users.  The 
>>> statistics of network time transfer via ntp are undoubtedly divergent, 
>>> but I have not seen any data that showed it to be white frequency noise 
>>> dominant.
>> 
>> All the data suggests that it is white and the explicit assumption of that
>> Levine paper is that it is white. 
>> 
>> 
>> 
>>> So, it is not clear that linear regression is optimal for estimating the 
>>> frequency via ntp, unless someone has determined the statistics to be 
>>> white frequency.  I personally have not performed the measurements to 
>>> make that determination, but it would not surprise me if Judah Levine has.
>> 
>> And he explicitly assumes that the network delay noise is white. His whole
>> procedure is to make a large number (e.g. 25-50) of ntp-type measurements
>> at one time (within seconds), then wait a long time (1/4 of a day) and do
>> it again. He estimates the frequency by averaging the measurements at any
>> one time, and then using that average phase error to determine the
>> frequency.
>>
>> The number of measurements at one instant is determined by requiring that
>> the frequency error due to the white noise (which decreases as 1/sqrt(n))
>> equals the other errors (flicker noise, etc.), or equals the predetermined
>> error level wanted.
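
To put numbers on that sizing rule (my reading of it, with assumed values
for the per-measurement noise and the error budget; none of these figures
are from the paper):

    # Choose the burst size n so that the white-noise part of the frequency
    # error, which falls as 1/sqrt(n), drops to a predetermined target.
    import math

    delta  = 5e-3      # std dev of one ntp phase measurement, seconds (assumed)
    T      = 21600.0   # 1/4 day between bursts
    target = 5e-8      # acceptable fractional-frequency error from white noise

    # Frequency error from two burst averages a time T apart:
    #     sigma_f = sqrt(2) * (delta / sqrt(n)) / T  <=  target
    n = math.ceil(2.0 * (delta / (target * T)) ** 2)
    print("burst size n =", n)   # 43 for these assumed numbers, i.e. in the
                                 # 25-50 range mentioned above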
>> 
>> 
>> 
>> 
>>> Bruce
>> 
>>> Unruh wrote:
>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>
>>>>> Bill,
>>>>> Read it again. Judah takes multiple samples to reduce the phase noise, 
>>>>> not to improve the frequency estimation.
>>>> Dave: The frequency estimate is done by subtracting two phase
>>>> determinations, so the phase noise enters the frequency determination. By
>>>> reducing the phase noise you reduce the frequency noise as well. I think
>>>> you need to read it again, but each of us just telling the other to read
>>>> properly will not help.
>>>>
>>>> The frequency estimate is obtained in NTP and in his procedure by making
>>>> phase measurements:
>>>>     f_i = (y_i - y_{i-1})/T
>>>> If y_i = z_i + e_i, where z_i is the "true" time and e_i is a Gaussian
>>>> random variable, then delta f_i = sqrt(<e_i^2> + <e_{i-1}^2>)/T.
>>>> By reducing <e_i^2> you reduce delta f_i. And as you point out, you can
>>>> reduce <e_i^2> by making a bunch of measurements. Those measurements can
>>>> be all done at the end points or spread over the time interval T. The
>>>> latter is not quite as effective in reducing delta f_i, since many of the
>>>> measurements do not have as long a "lever arm" as they would if they were
>>>> all at the endpoints; that is why uniform sampling is about sqrt(3) worse
>>>> than clustering at the end points. But in either case, the more
>>>> measurements you make, the more you reduce the uncertainty in the
>>>> frequency estimate.
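
For reference, the white-noise bookkeeping behind those factors, written
out as the standard least squares results (my summary, same assumptions as
above: n phase measurements of standard deviation delta over a span T):

    % standard white-phase-noise results for the three sampling plans
    \[
      \sigma_f^{\text{two endpoints}} = \frac{\sqrt{2}\,\delta}{T}, \qquad
      \sigma_f^{\text{uniform LS}} \approx \sqrt{\tfrac{12}{n}}\,\frac{\delta}{T}, \qquad
      \sigma_f^{\text{$n/2$ at each end}} = \sqrt{\tfrac{4}{n}}\,\frac{\delta}{T},
    \]
    \[
      \frac{\sigma_f^{\text{uniform LS}}}{\sigma_f^{\text{$n/2$ at each end}}} = \sqrt{3},
      \qquad
      \frac{\sigma_f^{\text{two endpoints}}}{\sigma_f^{\text{uniform LS}}} \approx \sqrt{\frac{n}{6}}\,.
    \]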
>>>>
>>>> Anyway, at this point everyone else has enough information to make up their
>>>> own mind. 
>>>>
>>>>
>>>>
>>>>> Dave
>>>>> Unruh wrote:
>>>>>> You must have read a different paper than that one. I found it (through
>>>>>> our library) and it says that if you have n measurements in a time
>>>>>> period T, the best strategy is to take n/2 measurements at the beginning
>>>>>> of the time and n/2 at the end to minimize the effect of the white phase
>>>>>> noise on the frequency estimate. That is perfectly true, and gives an
>>>>>> error which goes as sqrt(4/n) delta/T rather than sqrt(12/n) delta/T for
>>>>>> equally spaced measurements (assuming large n), where T is the total
>>>>>> time interval and delta is the std dev of each phase measurement. But it
>>>>>> certainly does NOT say that if you have n measurements, you should just
>>>>>> use the first and last one to estimate the slope.
>>>>>>
>>>>>> If you have n measurements, the best estimate of the slope is to do a 
>>>>>> least
>>>>>> squares fit. If they are equally spaced, the center third do not help 
>>>>>> much
>>>>>> (nor do they hinder), but a least squares fit is always the best thing to
>>>>>> do. 
>>>>>>
>>>>>>
>>>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>
>>>>>>
>>>>>>> Bill,
>>>>>>> NIST doesn't agree with you. Only the first and last are truly 
>>>>>>> significant. Reference: Levine, J. Time synchronization over the 
>>>>>>> Internet using an adaptive frequency locked loop. IEEE Trans. UFFC, 
>>>>>>> 46(4), 888-896, 1999.
>>>>>>> Dave
>>>>>>> Unruh wrote:
>>>>>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> Bill,
>>>>>>>>> Ahem. The first point I made was that least-squares doesn't help the 
>>>>>>>>> frequency estimate. The next point you made is that least-squares 
>>>>>>>>> improves the phase estimate. The last point you made is that phase 
>>>>>>>>> noise 
>>>>>>>> No. The point I tried to make was that least squares improves the
>>>>>>>> FREQUENCY estimate by sqrt(n/6) for large n, where n is the number of
>>>>>>>> points (assumed equally spaced) at which the phase is measured. I am
>>>>>>>> sorry that the way I phrased it could have been misunderstood.
>>>>>>>>
>>>>>>>>
>>>>>>>> The phase is ALSO improved proportionally to sqrt(n). This assumes
>>>>>>>> uncorrelated phase errors dominate the error budget.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> is not important. Our points have been made and further discussion 
>>>>>>>>> would 
>>>>>>>>> be boring.
>>>>>>>> Except you misunderstood my point. It may still be boring to you. 
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> Dave
>>>>>>>>> Unruh wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Bill,
>>>>>>>>>>> If you need only the frequency, least-squares doesn't help a lot; 
>>>>>>>>>>> all 
>>>>>>>>>>> you need are the first and last points during the measurement 
>>>>>>>>>>> interval. 
>>>>>>>>>> Well, no. If you have random phase noise, a least squares fit will
>>>>>>>>>> improve the above estimate by roughly sqrt(n/4), where n is the
>>>>>>>>>> number of points. That can be significant. It is certainly true that
>>>>>>>>>> the end points have the most weight (which is why the factor of
>>>>>>>>>> 1/4). I.e., if you have 64 points, you are better by about a factor
>>>>>>>>>> of 4, which is not insignificant.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> The NIST LOCKCLOCK and ntpd FLL disciplines compute the frequency
>>>>>>>>>>> directly and exponentially average successive intervals. The NTP 
>>>>>>>>>>> discipline is in fact a hybrid PLL/FLL where the PLL dominates 
>>>>>>>>>>> below the 
>>>>>>>>>>> Allan intercept and FLL above it and also when started without a 
>>>>>>>>>>> frequency file. The trick is to separate the phase component from 
>>>>>>>>>>> the 
>>>>>>>>>>> frequency component, which requires some delicate computations. 
>>>>>>>>>>> This 
>>>>>>>>>>> allows the frequency to be accurately computed as above, yet allows 
>>>>>>>>>>> a 
>>>>>>>>>>> phase correction during the measurement interval.
>>>>>>>>>> He of course is not interested in phase corrections. 
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Dave
>>>>>>>>>>> Unruh wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> David Woolley <[EMAIL PROTECTED]> writes:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> Unruh wrote:
>>>>>>>>>>>>>> I do not understand this. You seem to be measuring the offsets, 
>>>>>>>>>>>>>> not the
>>>>>>>>>>>>>> frequencies. The offset is irrelevant. What you want to do is to 
>>>>>>>>>>>>>> measure
>>>>>>>>>>>>> Measuring phase error to control frequency is pretty much THE 
>>>>>>>>>>>>> standard 
>>>>>>>>>>>>> way of doing it in modern electronics.  It's called a phase 
>>>>>>>>>>>>> locked loop 
>>>>>>>>>>>> Sure. In the case of ntp you want to have zero phase error. ntp
>>>>>>>>>>>> reduces the phase error slowly by changing the frequency. This has
>>>>>>>>>>>> the advantage that the frequency error also gets reduced (slowly).
>>>>>>>>>>>> He wants to reduce the frequency error only. He does not give a
>>>>>>>>>>>> damn about the phase error, apparently. Thus you do NOT want to
>>>>>>>>>>>> reduce the frequency error by attacking the phase error. That is a
>>>>>>>>>>>> slow way of doing it. You want to estimate the frequency error
>>>>>>>>>>>> directly. Now in his case he is doing so by measuring the phase,
>>>>>>>>>>>> so you need at least two phase measurements to estimate the
>>>>>>>>>>>> frequency error. But you do NOT want to reduce the frequency error
>>>>>>>>>>>> by reducing the phase error -- far too slow.
>>>>>>>>>>>>
>>>>>>>>>>>> One way of reducing the frequency error is to use the ntp
>>>>>>>>>>>> procedure but applied to the frequency. But you must feed in an
>>>>>>>>>>>> estimate of the frequency error. Another way is the chrony
>>>>>>>>>>>> technique: collect phase points, do a least squares fit to find
>>>>>>>>>>>> the frequency, and then use that information to drive the
>>>>>>>>>>>> frequency error to zero. To reuse past data, also correct the
>>>>>>>>>>>> prior phase measurements by the change in frequency:
>>>>>>>>>>>>     t_{i-j} -= (t_i - t_{i-j}) * df
>>>>>>>>>>>>
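
A toy sketch of that bookkeeping (my own illustration of the idea as
described above, with made-up numbers; this is not chrony's actual code):

    # Fit a frequency to the accumulated (epoch, offset) points, trim the
    # clock by df, then re-reference the stored history so the old points
    # stay usable in the next fit:  t_{i-j} -= (t_i - t_{i-j}) * df
    import numpy as np

    times   = np.array([0.0, 64.0, 128.0, 192.0, 256.0])                # seconds
    offsets = np.array([1.00e-3, 1.32e-3, 1.61e-3, 1.93e-3, 2.24e-3])   # seconds

    df = np.polyfit(times, offsets, 1)[0]      # least squares frequency estimate
    t_now = times[-1]
    offsets = offsets - (t_now - times) * df   # correct prior phase measurements
    print("trimmed df =", df)
    print("corrected offsets:", offsets)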
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> (PLL) and it is getting difficult to find any piece of
>>>>>>>>>>>>> electronics that doesn't include one these days.  E.g. the
>>>>>>>>>>>>> typical digitally tuned radio
>>>>>>>>>>>> A PLL is a dirt simple thing to implement electronically: a few
>>>>>>>>>>>> resistors and capacitors. It is, however, a very simple Markovian
>>>>>>>>>>>> process. There is far more information in the data than that, and
>>>>>>>>>>>> digitally it is easy to implement far more complex feedback loops
>>>>>>>>>>>> than that.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> or TV has a crystal oscillator, which is divided down to the 
>>>>>>>>>>>>> channel 
>>>>>>>>>>>>> spacing or a sub-multiple, and a configurable divider on the 
>>>>>>>>>>>>> local 
>>>>>>>>>>>>> oscillator divides that down to the same frequency.  The 
>>>>>>>>>>>>> resulting two 
>>>>>>>>>>>>> signals are then phase locked, by measuring the phase error on 
>>>>>>>>>>>>> each 
>>>>>>>>>>>>> cycle, low pass filtering it, and using it to control the local 
>>>>>>>>>>>>> oscillator frequency, resulting in their matching in frequency, 
>>>>>>>>>>>>> and 
>>>>>>>>>>>>> having some constant phase error.
>>>>>>>>>>>>>> the offset twice, and ask if the difference is constant or not.
>>>>>>>>>>>>>> I.e., the offset does not correspond to being off by 5 Hz.
>>>>>>>>>>>>> ntpd only uses this method on a cold start, to get the initial 
>>>>>>>>>>>>> coarse 
>>>>>>>>>>>>> calibration.  Typical electronic implementations don't use it at 
>>>>>>>>>>>>> all, 
>>>>>>>>>>>>> but either do a frequency sweep or simply open up the low pass 
>>>>>>>>>>>>> filter, 
>>>>>>>>>>>>> to get initial lock.
>>>>>>>>>>>> And? You are claiming that that is efficient or easy? I would
>>>>>>>>>>>> claim the latter. And his requirements are NOT ntp's requirements.
>>>>>>>>>>>> He does not care about the phase errors. He is only concerned
>>>>>>>>>>>> about the frequency errors. Driving the frequency errors to zero
>>>>>>>>>>>> by driving the phase errors to zero is not a very efficient
>>>>>>>>>>>> technique -- unless of course you want the phase errors to be
>>>>>>>>>>>> zero (as ntp does, and he does not).
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
