[time-nuts] Re: GPSDO testing

2021-08-23 Thread Magnus Danielson via time-nuts

Hi Rob!

Welcome!

On 2021-08-22 14:48, Bob kb8tq wrote:

> Hi
> 
> Welcome !!!
> 
> At some point it is worth converting this stuff into relative units. They
> can be expressed in scientific form or engineering form. It saves a bit
> of time when working at this frequency today and that frequency tomorrow.
> It also lines up with the numbers you see in a typical OCXO spec sheet.
> 
> Your standard deviation number is already in this sort of nomenclature.
> It’s 1.4 x10^-9 or 1.4 ppb. At least for me, the ppb (parts per billion), ppm
> (parts per million) and ppt (parts per trillion) forms seem to work better. Both
> are equally useful and commonly used.


Also, consider scaling with frequency: when a higher or lower frequency
is generated from the same source, the relative frequency error stays the
same. Measurement limits may differ, however, which can work against you
or for you.
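
A concrete sketch of the unit conversion in Python (the offset value is
illustrative only, not taken from any measurement in this thread):

f0 = 10e6        # nominal frequency in Hz
df = 0.014       # absolute frequency offset in Hz (made-up example)
y = df / f0      # fractional frequency, dimensionless
print(y)         # 1.4e-09
print(y * 1e9)   # 1.4 ppb
print(y * 1e12)  # 1400 ppt

The same fractional number y describes any output derived from the same
oscillator, whether multiplied up or divided down.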



> GPS can come in many flavors. What you are running is single band L1. For
> enough money you can run multiple bands and even toss in other GNSS systems
> like GLONASS and Galileo. If handled properly, more satellites and more bands
> means better performance. Because of this you will see a range of numbers
> if you go looking at papers, depending on just what they were looking at.
Indeed true. When I got started in this, an L1 C/A receiver with 6 or 8
channels was all I could get my hands on. Now there seem to be
multi-band receivers scattered in the lab and I should get a more
advanced receiver up to speed soon.

> Typically GPS via the normal antennas and chip sets delivers something in
> the low ns range after correction for sawtooth at 1 second. Your GPSDO’s
> do this so it’s a reasonable number in this case. That gets you a few ppb
> of noise. Go to 100 seconds and you have the same low ns noise. Now
> it’s over a longer period so you get tens of ppt noise.
> 
> This same “it gets better at longer observation times” effect is common to a
> lot of devices, it’s hardly unique to GPSDO’s. Since it makes such a major
> difference, specifying just what time you are sampling at is a major part of
> this. One *assumes* (sometimes incorrectly) that when a time isn’t specified
> the number is 1 second. The magic term used for the sample time is tau.


There is however one problem as you extend your observation interval. 
The frequency drift starts to show up. At some point it will become 
large enough to obscure the measurement. So, you learn to handle the 
drift. One alternative to Allan deviation is the Hadamard deviation, and 
the Hadamard deviation (really the three-point Hadamard deviation, as we 
updated P1139 to say) does a first-degree compensation of linear 
frequency drift. Another approach is to estimate the drift, remove it, 
and then do the Allan deviation on the residual. One should be careful 
though, because drift removal also tends to consume some of the noise, 
since the drift estimation is sensitive to long-term noise.
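
To make the two approaches concrete, here is a minimal numpy sketch using
the standard non-overlapping estimators (the synthetic noise and drift
values are purely illustrative):

import numpy as np

def adev(y, m):
    # Non-overlapping Allan deviation of fractional-frequency data y at
    # tau = m * tau0: sqrt(0.5 * mean((y_{k+1} - y_k)^2))
    ym = y[:len(y) // m * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ym) ** 2))

def hdev(y, m):
    # Hadamard deviation: the second difference cancels linear drift.
    ym = y[:len(y) // m * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(np.mean(np.diff(ym, n=2) ** 2) / 6.0)

rng = np.random.default_rng(1)
n = 10000
t = np.arange(n)
y = 1e-12 * rng.standard_normal(n) + 1e-13 * t  # white FM plus linear drift

# The other route: estimate the drift, remove it, then take ADEV.
y_detrended = y - np.polyval(np.polyfit(t, y, 1), t)

print(adev(y, 100))            # dominated by the drift
print(hdev(y, 100))            # drift largely cancelled
print(adev(y_detrended, 100))  # also cancelled, at the cost of eating noise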




> Next up is the fact ( documented in a number of papers starting back in
> the 1960’s. Many by Barnes and Allan ) that measures like peak to peak
> and even standard deviation don’t work well with frequency. The larger your
> number of observations, the worse those measures get. The result is that
> frequency “accuracy” tends to get some sort of footnote on it. ( maybe “99%
> of the time at 100 second tau …” ).
This is true for any measure. Gaussian noise does not have defined 
end-points; you just have various confidence intervals for various 
degrees of confidence. For time and frequency, we have a host of 
noise types that are nastier than Gaussian noise, so it's... interesting.
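
A quick numerical illustration of that divergence, using synthetic
random-walk FM noise (numpy; the values are arbitrary):

import numpy as np

# For random-walk FM the sample standard deviation never settles:
# it keeps growing as the number of observations grows.
rng = np.random.default_rng(0)
y = np.cumsum(1e-12 * rng.standard_normal(1000000))  # random-walk frequency
for n in (1000, 10000, 100000, 1000000):
    print(n, y[:n].std())  # grows roughly as sqrt(n) instead of converging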


> By far the most common approach to evaluating stability of a frequency or
> time source is to use Allan deviation. ( ADEV ):
> 
> https://en.wikipedia.org/wiki/Allan_variance
> 
> 
> Yes the page is for AVAR. For computational reasons AVAR is the “heart” of
> the measure. You pretty much never see data presented in terms of AVAR.
The page describes both, and ADEV = SQRT(AVAR). It should be clear from 
the page; if not, I will have to go and fix it.
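
For reference, a minimal overlapping estimator from phase data, just to
show the relationship (numpy; for real work the allantools Python package
is the usual tool):

import numpy as np

def oadev(x, tau0, m):
    # Overlapping Allan deviation from phase data x (seconds), sampled
    # every tau0 seconds, evaluated at tau = m * tau0.
    tau = m * tau0
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second difference of phase
    avar = np.mean(d2 ** 2) / (2.0 * tau ** 2)  # Allan variance, AVAR
    return np.sqrt(avar)                        # ADEV = sqrt(AVAR)

# Example: pure white phase noise, 1 ns rms, sampled once per second.
rng = np.random.default_rng(2)
x = 1e-9 * rng.standard_normal(100000)
print(oadev(x, tau0=1.0, m=1))  # about sqrt(3) * 1e-9, i.e. ~1.7e-9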

> Just as GPS has a “floor” your counter has a floor as well. In a lot of cases
> this floor may not matter. It is unfortunately not low enough to measure a
> good OCXO at shorter tau. Typically you get the counter floor out to a point
> and then start “seeing” the OCXO.
The counter floor is dominated by the 1/tau slope, which is close to the 
single-shot resolution of your counter for tau = 1 s. I say close, because 
it depends on that resolution, but it's not that simple. While the counter 
resolution noise behaves like white phase modulation noise, it's actually 
a systematic noise. The actual noise depends on the interaction between 
the systematic phase quantization and the white phase modulation noise.
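
As a hedged rule of thumb for that floor: if each phase reading carried an
independent rms error sigma_x, the second-difference formula above would
give ADEV = sqrt(3) * sigma_x / tau, a straight 1/tau line. The quantization
interaction shifts the prefactor, but the slope survives. The 150 ps below
is an assumed, illustrative single-shot resolution, not a quoted spec:

import math

sigma_x = 150e-12  # assumed single-shot resolution, ~150 ps (illustrative)
for tau in (1.0, 10.0, 100.0):
    print(tau, math.sqrt(3) * sigma_x / tau)  # ~2.6e-10, 2.6e-11, 2.6e-12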

> So yes, this all is complicated and there are lots of rabbit holes to run down
> already. There are a whole lot more bits and pieces that get into it one way
> or the other.
Indeed rabbit holes. I've been doing this for 20+ years and I've yet to 
escape the rabbit holes.

> Getting back to your numbers …

[time-nuts] Re: GPSDO testing

2021-08-22 Thread Bob kb8tq
Hi

Welcome !!!

At some point it is worth converting this stuff into relative units. They
can be expressed in scientific form or engineering form. It saves a bit
of time when working at this frequency today and that frequency tomorrow. 
It also lines up with the numbers you see in a typical OCXO spec sheet. 

Your standard deviation number is already in this sort of nomenclature. 
It’s 1.4 x10^-9 or 1.4 ppb. At least for me, the ppb (parts per billion), ppm
(parts per million) and ppt (parts per trillion) forms seem to work better. Both
are equally useful and commonly used.

GPS can come in many flavors. What you are running is single band L1. For
enough money you can run multiple bands and even toss in other GNSS systems
like GLONASS and Galileo. If handled properly, more satellites and more bands
means better performance. Because of this you will see a range of numbers
if you go looking at papers, depending on just what they were looking at. 

Typically GPS via the normal antennas and chip sets delivers something in 
the low ns range after correction for sawtooth at 1 second. Your GPSDO’s 
do this so it’s a reasonable number in this case. That gets you a few ppb
of noise. Go to 100 seconds and you have the same low ns noise. Now
it’s over a longer period so you get tens of ppt noise. 

This same “it gets better at longer observation times” effect is common to a 
lot of devices, it’s hardly unique to GPSDO’s. Since it makes such a major
difference, specifying just what time you are sampling at is a major part of
this. One *assumes* (sometimes incorrectly) that when a time isn’t specified
the number is 1 second. The magic term used for the sample time is tau. 
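
Spelling out that tau scaling in Python (the 1 ns stands in for the "low ns"
figure above; it is an assumption for illustration, not a spec):

dt = 1e-9  # ~1 ns of GPS timing noise after sawtooth correction (assumed)
for tau in (1.0, 100.0):
    print(tau, dt / tau)  # 1e-09 at 1 s (ppb scale), 1e-11 at 100 s (tens of ppt)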

Next up is the fact ( documented in a number of papers starting back in 
the 1960’s. Many by Barnes and Allan ) that measures like peak to peak 
and even standard deviation don’t work well with frequency. The larger your
number of observations, the worse those measures get. The result is that
frequency “accuracy” tends to get some sort of footnote on it. ( maybe “99%
of the time at 100 second tau …” ).

By far the most common approach to evaluating stability of a frequency or
time source is to use Allan deviation. ( ADEV ):

https://en.wikipedia.org/wiki/Allan_variance 


Yes the page is for AVAR. For computational reasons AVAR is the “heart” of
the measure. You pretty much never see data presented in terms of AVAR.

Just as GPS has a “floor” your counter has a floor as well. In a lot of cases
this floor may not matter. It is unfortunately not low enough to measure a 
good OCXO at shorter tau. Typically you get the counter floor out to a point
and then start “seeing” the OCXO. 

So yes, this all is complicated and there are lots of rabbit holes to run down 
already. There are a whole lot more bits and pieces that get into it one way
or the other.

Getting back to your numbers. 0.001 Hz at 10 MHz is 0.1 ppb or 1x10^-10. 
Measured as ADEV you should be able to hit this with a typical GPSDO or OCXO
at 1 second and beyond. You also should be able to do the usual “99% of the 
time” accuracy claim at 1 second and beyond if everything is working properly.
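
Checking that arithmetic in Python, with the values from the text:

f0 = 10e6         # nominal frequency, Hz
goal = 0.001      # Rob's accuracy goal, Hz
print(goal / f0)  # 1e-10, i.e. 0.1 ppb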

If you want to test what you have, setting up to do ADEV (and understanding the
counter’s limitations …. the dead spot at 10 MHz ….) is the best way to
validate things …..

Fun 

Bob

> On Aug 22, 2021, at 1:56 AM, Rob Leahy  wrote:
> 
> G'Day All,
> 
> Thanks for adding to me to this group.
> 
> I had an application at my workplace where we sent our GPS simulator out for
> calibration, and it was found that its OCXO was drifting and unstable.
> 
> We had no equipment at work with the required resolution and accuracy to
> measure this drift, so to add to my personal test gear I purchased a Chinese
> GPSDO built around a Trimble unit, a generic Chinese GPSDO, and a
> Symmetricom GPSDO PCB. Below are links to some of the time toys I have
> purchased, in addition to an Agilent 53132A counter.
> 
> 
> 
> I have had the Trimble acting as a timebase for the 53132A and am measuring
> the Symmetricom PCB. This board has a new OCXO from Digi-Key fitted - a Connor
> Winfield OH200-71005SV-010.0M.
> 
> 
> 
> I am chasing 0.001 Hz accuracy, short and long term. Is this an achievable
> goal with what I have?
> 
> 
> 
> Here are the latest results over 48 hours, after having the GPSDOs operating
> for over a week.
> 
> 
> 
> 
> Average:             9.9990 MHz
> Standard deviation:  1.42841E-09
> Max:                 10.6275
> Min:                 9.2006
> 
> 
> 
> 
> Opinions would be appreciated!
> 
> 
> 
> Cheers
> 
> 
> 
> Rob
> 
> 
> 
> 
> 
> https://www.ebay.com.au/itm/264742751023?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m2749.l2649
> 
> https://www.aliexpress.com/item/1005001868503492.html?spm=a2g0s.9042311.0.0.24914c4dOxEx60
> 
> https://www.aliexpress.com/item/4000169088065.html?spm=a2g0s.9042311.0.0.24914c4dOxEx60
> 
> 
> 
>