Hi,

When decimating data, one needs to be very careful with bandwidth. Done
carelessly, it biases the values and over-states the stability. Yes, we
have seen it happen. Even big names have come clean and confessed to
doing it wrong when they decimated their data.
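The distinction can be sketched like this (a minimal Python sketch with made-up phase data and a made-up downsampling factor; it only illustrates the two operations, not a full analysis):

```python
# Sketch of the distinction, with made-up data: going from 1 s
# phase samples to 10 s spacing by decimation vs. by averaging.
import random

random.seed(1)
x = [random.gauss(0.0, 1e-9) for _ in range(1000)]  # fake 1 s phase data (s)
N = 10  # downsampling factor

# Decimation: keep every N-th phase sample.  The measurement
# bandwidth is untouched, so ADEV at the longer tau is unbiased.
x_dec = x[::N]

# Averaging N samples before downsampling low-pass filters the data;
# the reduced bandwidth can bias the values and over-state stability.
x_avg = [sum(x[i:i + N]) / N for i in range(0, len(x), N)]
```

Roughly speaking, averaging phase samples changes which deviation you are actually estimating, so the safe default for ADEV is plain decimation of the phase samples.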

Cheers,
Magnus

On 2020-02-21 17:12, Bob kb8tq wrote:
> Hi
>
> The primer talks a lot about “averaging” of the samples. If you dig deep into 
> the various papers on
> doing AVAR for frequency / time standards … you want to decimate / downsample 
> the data 
> rather than average. There are a *lot* of papers that make this distinction 
> less than totally 
> clear. 
>
> Bob
>
>> On Feb 21, 2020, at 9:58 AM, Chris Burford <cburfo...@austin.rr.com> wrote:
>>
>> Here is a good article for Allan deviation that you can file with other 
>> reference material. It is well written and somewhat high level.
>>
https://www.phidgets.com/docs/Allan_Deviation_Primer
>>
>> Chris
>>
>>
>> On 02/20/20 21:45:58, Taka Kamiya via time-nuts wrote:
I was in electronics in a big way in the '70s, then had a long break and 
came back to it in the last few years.  Back then, if I wanted 1 s 
resolution, the gate time had to be 1 s, so measuring ns and ps was pretty 
much impossible.  As I understand it, the HP 53132A (my main counter) takes 
thousands of samples (I assume t samples) to arrive at the most likely real 
frequency.  That was something I had a hard time wrapping my head around.
>>>
I understand most of what you said, but I've never taken statistics, so I 
am guessing on some parts.  I can see how ADEV goes down as tau gets 
longer: basically, averaging is taking place.  But I am still not sure why 
at some point it goes back up.  I understand noise will start to take 
effect, but the same noise has been there all along while ADEV was going 
down.  So why is there this inflection point where the sign of the slope 
suddenly changes?
>>>
Also, to reach ADEV(tau=10), it takes longer than 10 seconds.  The manual 
for TimeLab basically says more samples are taken than just 10, but does 
not elaborate further.  Say it takes 50 seconds to get there, and say 
that's the lowest point of the ADEV curve: does that mean it is best to set 
the gate time to 10 seconds or 50 seconds?  (Or even, take whatever gate 
time and repeat the measurement until the accumulated gate time equals 
tau?)
>>>
>>> ---------------------------------------
>>> (Mr.) Taka Kamiya
>>> KB4EMF / ex JF2DKG
>>>  
>>>     On Thursday, February 20, 2020, 7:54:22 PM EST, Magnus Danielson 
>>> <mag...@rubidium.se> wrote:
>>>    Hi Taka,
>>>
>>> On 2020-02-20 19:40, Taka Kamiya via time-nuts wrote:
I have a question concerning frequency standards and their Allan deviation. 
 (to measure Allan dev in frequency mode using TimeLab)
>>>>
It is commonly said that for shorter-tau measurements I'd need an OCXO, 
because its short-tau jitter is superior to just about anything else.  
Also, it is said that for longer-tau measurements I'd need something like 
Rb or Cs, which have superior stability over the longer term.
>>> Seems reasonably correct.
Here's the question part.  A frequency counter that measures a DUT 
basically puts out a reading every second during the measurement.  When 
TimeLab is well into 1000 s or so, it is still reading every second; it 
does not change the gate time to, say, 1000 s.
That being the case, why this consensus on what time source to use for 
what tau?
I recall reading that for the TICC, in time interval mode, anything that's 
reasonably good is good enough.  I'm aware TI mode and frequency mode are 
entirely different, but they are the same in that the measurement is made 
over a very short time span AT A TIME.
I'm still trying to wrap my small head around this.
>>>> I'm still trying to wrap my small head around this.
>>> OK.
>>>
>>> I can understand that this is confusing. You are not alone being
>>> confused about it, so don't worry.
>>>
As you measure frequency, you "count" a number of cycles over some time,
hence the name frequency counter. The number of periods (sometimes
called events) over the observation time (also known as the time-base or
tau) can be used to estimate frequency like this:

f = events / time

while the average period time becomes

t = time / events
>>>
In modern counters (that is, starting from the early '70s) we can
interpolate time to achieve better time resolution than the integer
count of events alone provides.
>>>
This is all nice and dandy, but now consider that the start and stop
events are instead represented by time-stamps in some clock x, such that
for the measurements we have

time = x_stop - x_start
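Putting the two relations together with hypothetical numbers (the counts and time-stamps below are made up for illustration):

```python
# Frequency estimate from a reciprocal counter, hypothetical numbers.
events = 10_000_000       # whole input cycles counted between start and stop
x_start = 0.000_000_123   # interpolated time-stamp in clock x (s)
x_stop = 1.000_000_098    # interpolated time-stamp in clock x (s)

time = x_stop - x_start   # observation time, i.e. the time-base tau (s)
f = events / time         # estimated frequency (Hz)
t = time / events         # average period time (s)
```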
>>>
This does not really change anything for the measurement, but it helps
to bridge over to the measurement of Allan deviation for multiple taus.
It turns out that trying to build a standard deviation for the estimated
frequency becomes hard, which is why a more indirect method had to be
applied: the Allan deviation fills the role of the standard deviation
for the frequency estimate formed from two phase samples spaced the
time-base tau apart.

As we now combine the counter's noise floor with that of the reference,
the Allan deviation plot shows slopes of different directions due to the
different noise types.  The lowest point on the curve is where the least
deviation of the frequency measurement occurs.  Because a crystal
oscillator has different characteristics from a rubidium, cesium or
hydrogen maser, the lowest point occurs at a different tau, and with a
different value, for each.  Lower is better, so that is where I should
select the time-base for my frequency measurement.  This may be at
10 s, 100 s or 1000 s, which means the frequency measurement should use
start and stop time-stamps that far apart.

OK, fine.  So what about TimeLab in all this?  Well, as we measure with
a TIC we collect a bunch of phase samples at some base rate, such as
10 Hz or whatever.  TimeLab and other tools can then calculate the
Allan deviation for a number of different taus simply by using three
samples spaced tau apart, and algorithmically doing that for each tau.
One then collects a number of such measurements to form an average: the
more, the better the confidence interval we can put on the Allan
deviation estimate.  This does not improve our frequency estimate, just
our estimate of the uncertainty of that frequency estimate at that tau.
Once you have the Allan deviation plot, you can find the lowest point,
and then you only need two phase samples to estimate frequency.
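The three-samples-spaced-tau-apart computation can be sketched as follows (a minimal overlapping-estimator sketch; the function and variable names are mine, not from any particular tool):

```python
def adev(x, tau0, m):
    """Overlapping Allan deviation at tau = m * tau0.

    x    : evenly spaced phase samples (seconds)
    tau0 : sample interval (seconds)
    m    : averaging factor
    """
    tau = m * tau0
    # Second difference of phase over three samples spaced tau apart.
    d = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(len(x) - 2 * m)]
    return (sum(v * v for v in d) / (2.0 * tau * tau * len(d))) ** 0.5
```

Note that a constant frequency offset gives a linear phase ramp, whose second difference is zero, so it contributes nothing; only frequency fluctuations show up in the result.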
>>>
So, the measurement-per-second thing is more a collection of data than
a frequency estimation in itself.
>>>
>>> Cheers,
>>> Magnus
>>>
>>>
>>> _______________________________________________
>>> time-nuts mailing list -- time-nuts@lists.febo.com
>>> To unsubscribe, go to 
>>> http://lists.febo.com/mailman/listinfo/time-nuts_lists.febo.com
>>> and follow the instructions there.
>

