Re: [time-nuts] Characterising frequency standards

2009-04-25 Thread Steve Rooke
Hi Magnus,

2009/4/14 Magnus Danielson mag...@rubidium.dyndns.org

  Say I have a 1Hz input source and my counter measures the period of
  the first cycle and assigns this to A1. At the end of the first cycle
  the counter is able to be reset and re-triggered to capture the second
  cycle and assign this to A2. So far 2 sec have passed and I have two
  readings in data set A.




 Strange counter. Traditionally counters rest after the stop event has
 occurred, since they cannot know anything else. The gate time gives a hint
 on the first point in time it can trigger; the gate just arms the stop
 event. There is no real end point. It can however reset and retrigger the
 start event ASAP when gate times are sufficiently large. It's just a
 smart rearrangement of what to do when, to achieve zero dead-time for
 period/frequency measurements.


I am making period measurements so the gate time does not come into it. My
counter can be set to continuously take period readings, starting/stopping on
a positive or negative edge. Also, when my counter finishes a reading it can
generate an SRQ, allowing me to transfer the measurement to the PC, and I can
also immediately generate a reset of the counter to take another
measurement. Unfortunately it is not possible for the counter to be reset
and then trigger again before the last triggering event has finished, i.e. an
individual trigger event can only be used once per measurement cycle; the
same trigger event cannot stop one period measurement and start a second
one. All this means that there will always be a one-period gap between each
period measurement.


 You could also use a counter which is pseudo zero dead time in that it
 can time-stamp three values, two differences without dead time, but has
 dead time after that. Essentially two counters where the stop event of
 the first is the start event of the next.


Yes, I could do that but it is extra expense and complication which I do not
think is necessary.


  I now repeat the experiment and assign the measurement of the first
  period to B1. The counter I am using this time is unable to stop at
  the end of the first measurement and retrigger immediately so I'm
  unable to measure the second cycle but it is left in the armed position.
  When the third cycle starts, the counter triggers and completes the
  measurement of the third cycle which is now assigned to B2.
 This is what most normal counters do.


So we can agree on this.

  For the purposes of my original text, the first data set refers to A1
  & A2. Similarly the second data set refers to B1 & B2. Reference to
  pre-processing of the second data set refers to mathematically
  removing the effects of drift from B1 & B2 to produce a third data set
  which is used as the data input for an ADEV calculation where tau0 = 1
  sec with output of tau = 1 sec.
 You would need to use bias adjustments, but the B1 & B2 period/frequency
 samples are badly tainted data and should not be used. Having a dead time
 at the size of tau0 is serious business. Removing the phase drift over


But for the purposes of how I now think it can be calculated, tau0 will be
set equal to 2 x the actual period of the input source, i.e. if f = 1Hz, tau0 = 2
sec.

Let's take a look at what we are saying about badly tainted data here. The
whole purpose of this exercise is to predict the effects of noise on a
stable frequency. We have already agreed that a phase/frequency modulation
source at EXACTLY 1/2 of the input source will be masked by this method, but
we can get round that. So for the rest of the measurement, we have half the
data per tau than if there was no missing data. This will have some bearing
on the accuracy of the result but will only be significant for maximum tau,
in almost exactly the same degree that existing ADEV measurements have
limited accuracy at maximum tau, as there are not enough measurements to
provide the statistical probability over that time, i.e. if we measure for
100,000 seconds, the calculation for tau = 100,000 will have only one set of
values. Remember we are looking at noise here, and if for the missing-data
method we take readings for twice the full test time of a conventional
test, we will have data with the same amount of statistical probability.
This badly tainted data is just the same unless we have such periodic
effects that over the period of the whole test we will always miss them.
There is no magic here.

the dead time does not aid you since if you remove the phase ramp of the
 evolving clock, that of f*t or v*t (depending on which normalisation you
 prefer), you have the background phase noise. What we want to do is to
 characterize this phase noise. Taking two samples of it back-to-back and
 taking two samples with an (equivalent-length) gap becomes two
 different filters. Maybe some ascii art may aid:


For a 1Hz input I would be able to calculate for tau = 2 with the
unmodified data using tau0 = 2 sec. If I remove the effects of drift, all my
data points are the same as 

Re: [time-nuts] Characterising frequency standards

2009-04-13 Thread Magnus Danielson
Steve Rooke wrote:
 Hi Mark,

 2009/4/13 Mark Sims hol...@hotmail.com:
   
 Hello Steve,

 Try this...  take Tom's sample data set,  run the numbers.  Then,  using a 
 good random number generator,  make another data set by randomly throwing 
 out half (or more) of the samples (to simulate a non ZDT counter).  Run the 
 numbers again.  See how they change.  This should give you a good idea of 
 how using a standard counter would affect your adev numbers.
 

 But randomly throwing out data points would introduce ZDT.
It would introduce dead-time, it would not introduce zero dead-time 
(ZDT). Dropping every second sample of a phase/time-error series can 
maintain the zero dead-time property, but you lose the resolution for 
higher taus.
  The whole
 point I was making was that the data set is well defined; the missing
 data occurs every other sample, therefore tau0 = 2 x (sample period of
 each sample).
   
You can reduce the dataset size that way if you have phase/time-error 
samples and attain twice the tau, yes.

The downside is that you also reduce the degrees of freedom in the 
dataset and thus the statistical precision. With a large enough 
dataset this may not be much of an issue.
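
A minimal numeric check of this in Python (a sketch, not from the
thread's tooling; assumptions: a simulated white-FM phase/time-error
series at 1 s spacing, overlapping ADEV estimator):

    import numpy as np

    def adev(x, tau0, m):
        # Overlapping Allan deviation from a phase series x (seconds),
        # sample spacing tau0, averaging factor m, so tau = m * tau0.
        d = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]   # second differences of phase
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

    rng = np.random.default_rng(0)
    x = np.cumsum(1e-11 * rng.standard_normal(100000))  # white FM phase data

    print(adev(x, 1.0, 2))       # tau = 2 s from the full 1 s series
    print(adev(x[::2], 2.0, 1))  # tau = 2 s after dropping every 2nd sample

The two printed values agree to within statistical scatter; what the
decimation costs is the tau = 1 s point and half the degrees of freedom.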

Cheers,
Magnus



Re: [time-nuts] Characterising frequency standards

2009-04-13 Thread Magnus Danielson
Steve,

Steve Rooke wrote:
 Bruce,

 2009/4/12 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 Steve Rooke wrote:
 If I take two sequential phase readings from an input source and place
 this into one data set and another two readings from the same source
 but spaced by one cycle and put this in a second data set. From the
 first data set I can calculate ADEV for tau = 1s and can calculate
 ADEV for tau = 2 sec from the second data set. If I now pre-process
 the data in the second set to remove all the effects of drift (given
 that I have already determined this), I now have two 1 sec samples
 which show a statistical difference and can be fed to ADEV with a tau0
 = 1 sec producing a result for tau = 1 sec. The results from this
 second calculation should show equal accuracy as that using the first
 data set (given the limited size of the data set).


 You need to give far more detail as it's unclear exactly what you are
 doing with what samples.
 Label all the phase samples and then show which samples belong to which
 data set.
 Also need to show clearly what you mean by skipping a cycle.

 Say I have a 1Hz input source and my counter measures the period of
 the first cycle and assigns this to A1. At the end of the first cycle
 the counter is able to be reset and re-triggered to capture the second
 cycle and assign this to A2. So far 2 sec have passed and I have two
 readings in data set A.
Strange counter. Traditionally counters rest after the stop event has 
occurred, since they cannot know anything else. The gate time gives a hint 
on the first point in time it can trigger; the gate just arms the stop 
event. There is no real end point. It can however reset and retrigger the 
start event ASAP when gate times are sufficiently large. It's just a 
smart rearrangement of what to do when, to achieve zero dead-time for 
period/frequency measurements.

You could also use a counter which is pseudo zero dead time in that it 
can time-stamp three values, two differences without dead time, but has 
dead time after that. Essentially two counters where the stop event of 
the first is the start event of the next.
 I now repeat the experiment and assign the measurement of the first
 period to B1. The counter I am using this time is unable to stop at
 the end of the first measurement and retrigger immediately so I'm
 unable to measure the second cycle but it is left in the armed position.
 When the third cycle starts, the counter triggers and completes the
 measurement of the third cycle which is now assigned to B2.
This is what most normal counters do.
 For the purposes of my original text, the first data set refers to A1
 & A2. Similarly the second data set refers to B1 & B2. Reference to
 pre-processing of the second data set refers to mathematically
 removing the effects of drift from B1 & B2 to produce a third data set
 which is used as the data input for an ADEV calculation where tau0 = 1
 sec with output of tau = 1 sec.
You would need to use bias adjustments, but the B1 & B2 period/frequency 
samples are badly tainted data and should not be used. Having a dead time 
at the size of tau0 is serious business. Removing the phase drift over 
the dead time does not aid you since if you remove the phase ramp of the 
evolving clock, that of f*t or v*t (depending on which normalisation you 
prefer), you have the background phase noise. What we want to do is to 
characterize this phase noise. Taking two samples of it back-to-back and 
taking two samples with an (equivalent-length) gap becomes two 
different filters. Maybe some ascii art may aid:
  __
__   |  |__
  |__|  

   y1 y2 y3
   A1 A2

A2-A1 = y2-y1

vs.
 __
____|  |__
  |__|

  y1  y2   y3
  B1B2

B2-B1 = y3-y1

Consider now the case when frequency samples have twice the tau of the 
above examples:
 _
__  | |__
  |_|

 y1y2
y2-y1

These examples were all based on sequences of frequency measurements, 
just as you indicate in your case.

As you see from the differences, the nominal frequency cancels and the 
nominal phase error has also canceled out, so there is nothing to 
compensate there. Drift rate would however not be canceled, but for most 
of our sources, the noise is higher than the drift rate for shorter taus.

Time-differences allow us to skip every other cycle, though.
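
To make the two filters concrete, a small numeric sketch (assumptions:
phase x(t) = x0 + f0*t plus white noise, sampled every second; not
anyone's actual script):

    import numpy as np
    rng = np.random.default_rng(1)

    x0, f0 = 3e-9, 5e-12                  # nominal phase and frequency error
    t = np.arange(8.0)
    x = x0 + f0*t + 2e-12*rng.standard_normal(8)  # phase samples, 1 s apart
    y = np.diff(x)                        # back-to-back 1 s frequency averages

    a = y[1] - y[0]   # A2-A1 = x[2] - 2*x[1] + x[0]: x0 and f0 both cancel
    b = y[2] - y[0]   # B2-B1 = y3-y1 (gapped pair): x0 and f0 cancel here too
    # A drift term (0.5*D*t**2 in phase) would NOT cancel in either difference.
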
 I now collect a large data set but with a single cycle skipped between
 each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
 results >= 2 sec. I then pre-process the data to remove any drift and
 feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
 result. I now have a complete set of results for tau = 1 sec. Agreed,
 there is the issue of modulation at 1/2 input f but ignoring this for
 the moment, this should give a valid result.


 Again you need to give more detail.

 In this case the data set is constructed from the measurement of the
 cycle periods of a 1Hz input source where even cycles are 

Re: [time-nuts] Characterising frequency standards

2009-04-13 Thread Steve Rooke
2009/4/13 Magnus Danielson mag...@rubidium.dyndns.org:
 But randomly throwing out data points would introduce ZDT.
 It would introduce dead-time, it would not introduce zero dead-time
 (ZDT). Dropping every second sample of a phase/time-error series can
 maintain the zero dead-time property, but you loose the resolution for
 higher taus.

First rule of posting, engage brain before typing :-) Yes, of course
you are right Magnus and that was what I really meant.

73,
Steve
-- 
Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-13 Thread Magnus Danielson
Steve Rooke wrote:
 2009/4/13 Magnus Danielson mag...@rubidium.dyndns.org:
 But randomly throwing out data points would introduce ZDT.
 It would introduce dead-time, it would not introduce zero dead-time
 (ZDT). Dropping every second sample of a phase/time-error series can
 maintain the zero dead-time property, but you lose the resolution for
 higher taus.
 
 First rule of posting, engage brain before typing :-) Yes, of course
 you are right Magnus and that was what I really meant.

I thought that you just made the mistake... :)

Cheers,
Magnus



Re: [time-nuts] Characterising frequency standards

2009-04-13 Thread Magnus Danielson
Mark,

Mark Sims wrote:
 The whole purpose of taking a data set from a known ZDT counter and then 
 throwing out random samples is to simulate the kind of data that a normal 
 counter would produce.  You could compare the results and get an idea of how 
 using a normal counter for calculating adevs would compare to using a ZDT 
 counter. I would start by generating random numbers from 0-3 and throwing out 
 that many samples.
 
 With most normal counters you cannot guarantee that you would get a sample 
 every other interval.  It all depends upon how the counter works,  what its 
 timebase is,  how it triggers and retriggers,  how it is being read out,  
 what the input signal is, etc.   I would suspect that most counters would 
 give a reading every two or three intervals.  I have seen some counters give 
 two consecutive back-to-back readings then a long dead time. 

Most counters I know of would make one frequency measure, then miss the 
one directly following, only to trigger directly on top of the next; thus for a 
PPS pulse it would measure the period between the first and second 
pulse, then dwell until the third PPS pulse and measure until the fourth 
pulse, but then happily repeat this pattern.

But measuring frequency/period like this is not very useful for 
post-processing in any Allan Deviation measure. The lack of back-to-back 
measures prevents you from obtaining the data you need.

We rather use time-interval measures. Let's consider the same counter: 
we arm it with a PPS pulse from either of the sources, but then measure 
the time interval between two 1 kHz variants of the signal, or use the 
PPS as the start of the TI while the stop channel sees the 1 kHz signal. I'll 
use the latter as a reference, but the cases are equivalent.

The same counter can now dwell between the measurements, but most 
counters can withstand 1 measurement per second without too much 
trouble. The 1 kHz signal allows for a maximum of 1 ms delay from 
arming/start trigger to stop trigger. This still allows plenty of 
time for the counter's post-processing and re-arming to occur. As the 
clocks drift, the stop channel's choice of 1 kHz edges would shift 
dynamically, but it would be a fairly simple task to post-process that into a 
continuous stream of PPS marks.

Using these time-interval measures with tau0 being 1 s, we can now make 
any set of back-to-back frequency measures as we please, as long as they 
are integer multiples of tau0, by dropping n-1 samples in between and 
recalling that the sample series has converted to a tau0 of n seconds.
We can also use the series directly for the Allan Deviation estimator of 
choice in either time or frequency form.

Thus, the lack of zero dead time does not necessarily prohibit such 
use; care in setting up the signals and I/O can circumvent the problem.

Many counters are being used one way or another for continuous measures 
even if they are not exclusively ZDT counters, but it takes care.

Having one or two of TVB's PIC dividers at hand should certainly be handy 
for doing tricks like this.

Time-resolution of the counters as well as trigger noise may be issues 
to look at.

When does one need true ZDT counters, then? Well, if you want to make 
measurements of higher-frequency modulations, you need that power, but 
most of the time they are just very handy tools.

Cheers,
Magnus



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Steve Rooke
Ulrich,

2009/4/11 Ulrich Bangert df...@ulrich-bangert.de:
 So why would my counter show any significant differences
 between a 1 sec or 2 sec gate time?

 suppose your source has a 0.5 Hz frequency modulation. Would you see it with
 2 s gate time or an integer multiple of it? Would you notice it with 1 s gate
 time or an odd integer of it?

Agreed, if the source is modulated at exactly 1/2 the input frequency,
the measurement would be blind to it. So the way to account for this
would be to take half the readings, then skip one cycle and take the
other half. Examination of the data would then show the modulation.

 I've just done a Google search for "dead time correction
 scheme" and I just turn up results relating to particle
 physics where it seems measurements are unable to keep up
 with the flow of data, hence the need to factor in the dead
 time of the system.

 Google for the STABLE32 manual. THIS literature will bring you a lot
 further, many well documented source examples in Forth and PL/1, hi. E.g.
 you may look here:

 http://www.wriley.com/

Thanks for the pointers.

Kind regards,
Steve


 Best regards
 Ulrich Bangert

 -Original Message-
 From: time-nuts-boun...@febo.com
 [mailto:time-nuts-boun...@febo.com] On behalf of Steve Rooke
 Sent: Friday, 10 April 2009 12:55
 To: Discussion of precise time and frequency measurement
 Subject: [!! SPAM] Re: [time-nuts] Characterising frequency standards


 Ulrich,

 2009/4/10 Ulrich Bangert df...@ulrich-bangert.de:
  Steve,
 
  I think the penny has dropped now, thanks. It's
 interesting that the
  ADEV calculation still works even without continuous data
 as all the
  reading I have done has led me to believe this was sacrosanct.
 
  The penny may be falling but it is not completely dropped:
 Of course
  you can feed your ADEV calculation with every second sample removed
  and setting Tau0 = 2. And of course you receive a result
 that now is
  in harmony with your "all samples / Tau0 = 1 s"
 computation. Had you
  done frequency measurements, the reason for this apparent
 harmony is
  that your counter does not show significantly different behaviour
  whether set to 1 s gate time or alternate 2 second gate time.

 So why would my counter show any significant differences
 between a 1 sec or 2 sec gate time?

  Nevertheless leaving every second sample out is NOT exactly
 the same
  as continuous data with Tau0 = 2 s. Instead it is data with
 Tau0 = 1 s
  and a DEAD TIME of 1s. There are dead time correction schemes
  available in the literature.

 I've just done a Google search for "dead time correction
 scheme" and I just turn up results relating to particle
 physics where it seems measurements are unable to keep up
 with the flow of data, hence the need to factor in the dead
 time of the system. This form of application does not appear to
 correlate with the measurement of plain
 oscillators. Yes there is dead time, per se, but I fail to see how
 this can detract significantly from continuous data given a sufficient
 data set size (as for a total measurement time).

 I guess what we need is a real data set which would show that
 this form of ADEV calculation produces incorrect results, i.e.
 the proof of the pudding is in the eating.

 73,
 Steve

  Best regards
  Ulrich Bangert
 
  -Original Message-
  From: time-nuts-boun...@febo.com
 [mailto:time-nuts-boun...@febo.com]
  On behalf of Steve Rooke
  Sent: Thursday, 9 April 2009 14:00
  To: Tom Van Baak; Discussion of precise time and frequency
  measurement
  Subject: Re: [time-nuts] Characterising frequency standards
 
 
  Tom,
 
  2009/4/9 Tom Van Baak t...@leapsecond.com:
   The first argument to the adev1 program is the sampling
  interval t0.
   The program doesn't know how far apart the input file
 samples are
   taken so it is your job to specify this. The default is 1 second.
  
   If you have data taken one second apart then t0 = 1.
   If you have data taken two seconds apart then t0 = 2.
   If you have data taken 60 seconds apart then t0 = 60, etc.
  
   If, as in your case, you take raw one second data and
 remove every
   other sample (a perfectly valid thing to do), then t0 = 2.
  
   Make sense now? It's still continuous data in the
 sense that all
   measurements are a fixed interval apart. But in any ADEV
  calculation
   you have to specify the raw data interval.
 
  I think the penny has dropped now, thanks. It's
 interesting that the
  ADEV calculation still works even without continuous data
 as all the
  reading I have done has led me to believe this was sacrosanct.
 
  What I now believe is that it's possible to measure oscillator
  performance with less than optimal test gear. This will
 enable me to
  see the effects of any experiments I make in the future.
 If you can't
  measure it, how can you know that what you're doing is good or bad.
 
  73,
  Steve
  --
  Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
  Omnium finis imminet

Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Steve Rooke
Tom,

2009/4/11 Tom Van Baak t...@leapsecond.com:
 Nevertheless leaving every second sample out is NOT exactly the same as
 continous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1s. There are dead time correction schemes available in the
 literature.

 Ulrich, and Steve,

 Wait, are we talking phase measurements here or frequency
 measurements? My assumption with this thread is that Steve
 is simply taking phase (time error) measurements, as in my
 GPS raw data page, in which case there is no such thing as
 dead time.

Yes, phase measurements as in the original GPS.dat data set on your site.

73,
Steve


 /tvb






-- 
Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Steve Rooke
2009/4/11 Magnus Danielson mag...@rubidium.dyndns.org:
 Tom Van Baak wrote:
 Nevertheless leaving every second sample out is NOT exactly the same as
 continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1s. There are dead time correction schemes available in the
 literature.

 Ulrich, and Steve,

 Wait, are we talking phase measurements here or frequency
 measurements? My assumption with this thread is that Steve
 is simply taking phase (time error) measurements, as in my
 GPS raw data page, in which case there is no such thing as
 dead time.

 I agree. I was also considering this earlier but put my mind to rest by
 assuming phase/time samples.

 Dead time is when the counter loses track of time in between two
 consecutive measurements. A zero dead-time counter uses the stop of one
 measure as the start of the next measure.

This becomes very important when the data to be measured has a degree
of randomness and it is therefore important to capture all the data
without any dead time. In the case of measurements of phase error in
an oscillator, it should be possible to miss some data points provided
that the frequency of capture is still known (assuming that accuracy
of drift measurements is required).

 If you have a series of time-error values taken each second and then
 drop every other sample and just recall that the time between the
 samples is now 2 seconds, then the tau0 has become 2s without causing
 dead-time. However, if the original data would have been kept, better
 statistical properties would be given, unless there is a strong
 repetitive disturbance at 2 s period, in which case it would be filtered
 out.

Indeed, there would be a loss of statistical data but this could be
made up by sampling over a period of twice the time. This system is
blind to noise at 1/2 f but ways and means could be taken to account
 for that, i.e. taking two data sets with a single cycle space between
them or taking another small data set with 2 cycles skipped between
each measurement.

 An example when one does get dead-time, consider a frequency counter
 which measures frequency with a gate-time of say 2 s. However, before it
 re-arms and starts the next measure it takes 300 ms. The two samples
 will have 2.3 s between their starts and actually span 4.3 seconds rather
 than 4 seconds. When doing Allan Deviation calculations on such a
 measurement series, it will be biased and the bias may be compensated,
 but these days counters with zero dead-time are readily available or the
 problem can be avoided by careful consideration.

I'm looking at what can be achieved by a budget-strapped amateur who
would have trouble purchasing a newer counter capable of measuring
with zero dead time.

 I believe Greenhall made some extensive analysis of the biasing of
 dead-time, so it should be available from the NIST F&T online library.

I'll see what I can find.

 Before zero dead-time counters were available, a setup of two counters
 was used so that they were interleaved, so the dead-time was the measure
 time of the other.

I could look at doing that perhaps.

 I can collect some references to dead-time articles if anyone needs them.
 I'd be happy to.

73,
Steve


 Cheers,
 Magnus





-- 
Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Bruce Griffiths
Steve

Steve Rooke wrote:
 2009/4/11 Magnus Danielson mag...@rubidium.dyndns.org:
   
  Tom Van Baak wrote:
 
 Nevertheless leaving every second sample out is NOT exactly the same as
  continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1s. There are dead time correction schemes available in the
 literature.
 
 Ulrich, and Steve,

 Wait, are we talking phase measurements here or frequency
 measurements? My assumption with this thread is that Steve
 is simply taking phase (time error) measurements, as in my
 GPS raw data page, in which case there is no such thing as
 dead time.
   
 I agree. I was also considering this earlier but put my mind to rest by
 assuming phase/time samples.

  Dead time is when the counter loses track of time in between two
 consecutive measurements. A zero dead-time counter uses the stop of one
 measure as the start of the next measure.
 

 This becomes very important when the data to be measured has a degree
 of randomness and it is therefore important to capture all the data
 without any dead time. In the case of measurements of phase error in
 an oscillator, it should be possible to miss some data points provided
 that the frequency of capture is still known (assuming that accuracy
 of drift measurements is required).

   
 If you have a series of time-error values taken each second and then
 drop every other sample and just recall that the time between the
 samples is now 2 seconds, then the tau0 has become 2s without causing
 dead-time. However, if the original data would have been kept, better
 statistical properties would be given, unless there is a strong
 repetitive disturbance at 2 s period, in which case it would be filtered
 out.
 

 Indeed, there would be a loss of statistical data but this could be
 made up by sampling over a period of twice the time. This system is
 blind to noise at 1/2 f but ways and means could be taken to account
  for that, i.e. taking two data sets with a single cycle space between
 them or taking another small data set with 2 cycles skipped between
 each measurement.

   
  An example when one does get dead-time, consider a frequency counter
  which measures frequency with a gate-time of say 2 s. However, before it
  re-arms and starts the next measure it takes 300 ms. The two samples
  will have 2.3 s between their starts and actually span 4.3 seconds rather
  than 4 seconds. When doing Allan Deviation calculations on such a
  measurement series, it will be biased and the bias may be compensated,
  but these days counters with zero dead-time are readily available or the
  problem can be avoided by careful consideration.
 

  I'm looking at what can be achieved by a budget-strapped amateur who
  would have trouble purchasing a newer counter capable of measuring
  with zero dead time.

   
You don't need a full-featured counter for this application.
One can easily implement a zero dead-time counter or the equivalent
thereof in an FPGA.
 I believe Greenhall made some extensive analysis of the biasing of
 dead-time, so it should be available from the NIST F&T online library.
 

 I'll see what I can find.

   
You still need to know the phase noise spectrum of the source being
characterised.
 Before zero dead-time counters were available, a setup of two counters
 was used so that they were interleaved, so the dead-time was the measure
 time of the other.
 

 I could look at doing that perhaps.

   
Very easy to do at low cost in an FPGA.
 I can collect some references to dead-time articles if anyone needs them.
 I'd be happy to.
 

 73,
 Steve

   
 Cheers,
 Magnus


 



   

Brice



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Bruce Griffiths
Steve

Steve Rooke wrote:
 If I take two sequential phase readings from an input source and place
 this into one data set and another two readings from the same source
 but spaced by one cycle and put this in a second data set. From the
 first data set I can calculate ADEV for tau = 1s and can calculate
 ADEV for tau = 2 sec from the second data set. If I now pre-process
 the data in the second set to remove all the effects of drift (given
 that I have already determined this), I now have two 1 sec samples
 which show a statistical difference and can be fed to ADEV with a tau0
 = 1 sec producing a result for tau = 1 sec. The results from this
 second calculation should show equal accuracy as that using the first
 data set (given the limited size of the data set).

   
You need to give far more detail as it's unclear exactly what you are
doing with what samples.
Label all the phase samples and then show which samples belong to which
data set.
Also need to show clearly what you mean by skipping a cycle.

 I now collect a large data set but with a single cycle skipped between
 each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
 results = 2 sec. I then pre-process the data to remove any drift and
 feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
 result. I now have a complete set of results for tau = 1 sec. Agreed,
 there is the issue of modulation at 1/2 input f but ignoring this for
 the moment, this should give a valid result.

   
Again you need to give more detail.
 Now indulge me while I have a flight of fantasy.

 As the effects of jitter and phase noise will produce a statistical
 distribution of measurements, any results from these ADEV calculations
 will be limited in accuracy by the size of the data set. Only if we
 sample for a very long time will we see the very limits of the effects
 of noise. 
What noise from what source?
Noise in such measurements can originate in the measuring instrument or
the source.
For short measurement times quantisation noise and instrumental noise
may mask the noise from the source but they are still present.


 The samples which deviate the most from the median will
 occur very infrequently and it is statistically likely that they will
 not occur adjacent to another highly deviated sample. We could
 pre-process the data to remove all drift and then sort it into an
 array of increasing size. This would give the greatest deviations at
 each end of the array. For 1 sec stability the deviation would be the
 greatest difference from the median of the first and last samples in
 the array. For a 2 sec stability, this same calculation could be made
 taking the first two and last two readings in the array and
 calculating their difference from 2 x the median. This calculation
 could be continued until all the data is used for the final
 calculation. In fact the whole sorted data set could be fed to ADEV to
 produce a result that would show a better worst-case measurement of the
 input source which still has some statistical probability. In theory,
 if we took an infinite number of samples, there would be a whole
 string of absolutely maximum deviation measurements in a row which
 would show the absolute worse case.

 Is any of this valid or just bad physics, I don't know, but I'm sure
 it will solicit interesting comment.

   
No, not poor physics but poor statistics.

 73,
 Steve

 2009/4/10 Tom Van Baak t...@leapsecond.com:
   
 I think the penny has dropped now, thanks. It's interesting that the
 ADEV calculation still works even without continuous data as all the
 reading I have done has led me to believe this was sacrosanct.
   
 We need to be careful about what you mean by continuous.
 Let me probe a bit further to make sure you or others understand.

 The data that you first mentioned, some GPS and OCXO data at:
http://www.leapsecond.com/pages/gpsdo-sim
 was recorded once per second, for 400,000 samples without any
 interruption; that's over 4 days of continuous data.

 As you see it is very possible to extract every other, or every 10th,
 every 60th, or every Nth point from this large data set to create a
 smaller data set.

 It is as if you had several counters all connected to the same DUT.
 Perhaps one makes a new phase measurement each second,
 another makes a measurement every 10 seconds; maybe a third
 counter just measures once a minute.

 The key here is not how often they make measurements, but that
 they all keep running at their particular rate.

 The data sets you get from these counters all represent 4 days
 of measurement; what changes is the measurement interval, the
 tau0, or whatever your ADEV tool calls it.

 Now the ADEV plots you get from these counters will all match
 perfectly with the only exception being that the every-60 second
 counter cannot give you any ADEV points for tau less than 60;
 the every-10 second counter cannot give you points for tau less
 than 10 seconds; and for that matter; the every 

Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Rex
Bruce Griffiths wrote:
 ...

 Brice

   

An impostor? An alias? :-)





Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Bruce Griffiths
Rex wrote:
 Bruce Griffiths wrote:
   
 ...

 Brice

   
 

 An impostor? An alias? :-)


   
And I thought I was alluding to aliasing of the phase noise spectrum not
the characters of the alphabet.

Bruce

   




Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Magnus Danielson
Steve Rooke wrote:
 2009/4/11 Magnus Danielson mag...@rubidium.dyndns.org:
  Tom Van Baak wrote:
 Nevertheless leaving every second sample out is NOT exactly the same as
  continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1s. There are dead time correction schemes available in the
 literature.
 Ulrich, and Steve,

 Wait, are we talking phase measurements here or frequency
 measurements? My assumption with this thread is that Steve
 is simply taking phase (time error) measurements, as in my
 GPS raw data page, in which case there is no such thing as
 dead time.
 I agree. I was also considering this earlier but put my mind to rest by
 assuming phase/time samples.

  Dead time is when the counter loses track of time in between two
 consecutive measurements. A zero dead-time counter uses the stop of one
 measure as the start of the next measure.
 
 This becomes very important when the data to be measured has a degree
 of randomness and it is therefore important to capture all the data
 without any dead time. In the case of measurements of phase error in
 an oscillator, it should be possible to miss some data points provided
 that the frequency of capture is still known (assuming that accuracy
 of drift measurements is required).

Depending on the dominant noise type, the ADEV measure will be biased.

 If you have a series of time-error values taken each second and then
 drop every other sample and just recall that the time between the
 samples is now 2 seconds, then the tau0 has become 2s without causing
 dead-time. However, if the original data would have been kept, better
 statistical properties would be given, unless there is a strong
 repetitive disturbance at 2 s period, in which case it would be filtered
 out.
 
 Indeed, there would be a loss of statistical data but this could be
 made up by sampling over a period of twice the time. This system is
 blind to noise at 1/2 f but ways and means could be taken to account
  for that, i.e. taking two data sets with a single cycle space between
 them or taking another small data set with 2 cycles skipped between
 each measurement.

Actually, you can take any number of 2-cycle measures and still be unable 
to detect the 1/2 f oscillation. In order to be able to detect it you will 
need to take 2 measures and be able to make an odd number of cycles of 
trigger difference between them to have a chance.

The trouble is that the modulation is at the Nyquist frequency of the 1 
cycle data, so it will fold down to DC on sampling it at half-rate. 
Canceling it from other DC offset errors could be challenging.

Sampling it at 1/3 rate would discover it though.
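
The folding is easy to see numerically (a sketch, assuming a pure
0.5 Hz modulation as seen through 1 s frequency samples):

    import numpy as np

    n = np.arange(12)
    mod = 1e-11 * np.cos(np.pi * n)   # 0.5 Hz FM sampled at 1 Hz: +A, -A, ...

    print(mod[::2])   # every 2nd sample: constant +A -- folded down to DC
    print(mod[::3])   # every 3rd sample: still alternates -- detectable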

 An example when one does get dead-time, consider a frequency counter
 which measures frequency with a gate-time of say 2 s. However, before it
 re-arms and starts the next measure it takes 300 ms. The two samples
 will have 2.3 s between their starts and actually span 4.3 seconds rather
 than 4 seconds. When doing Allan Deviation calculations on such a
 measurement series, it will be biased and the bias may be compensated,
 but these days counters with zero dead-time are readily available or the
 problem can be avoided by careful consideration.
 
 I'm looking at what can be achieved by a budget-strapped amateur who
 would have trouble purchasing a newer counter capable of measuring
 with zero dead time.

Believe me, that's where I am too. Patience and saving money for things 
I really want and allowing accumulation over time has allowed me some 
pretty fancy tools in my private lab. In fact I have to lend some of my 
gear to commercial labs as I outperform them...

 I believe Greenhall made some extensive analysis of the biasing of
 dead-time, so it should be available from the NIST F&T online library.
 
 I'll see what I can find.

I recalled wrong. You should look for Barnes' "Tables of Bias Functions, 
B1 and B2, for Variance Based on Finite Samples of Processes with Power 
Law Spectral Densities", NBS Technical Note 375, January 1969, as well 
as Barnes and Allan's "Variance Based on Data with Dead Time Between the 
Measurements", NIST Technical Note 1318, 1990.

A short intro to the subject is found in NIST Special Publication 1065 by 
W.J. Riley, as found on http://www.wriley.com along with other excellent 
material. The good thing about that material is that he gives good 
references, as one should.
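
For a feel for the size of the bias those tables address, a rough
numeric sketch (white FM only, where the expected dead-time factor
works out to sqrt(2)):

    import numpy as np
    rng = np.random.default_rng(7)

    y = 1e-11 * rng.standard_normal(400000)  # 1 s period readings, white FM

    def adev_freq(y):
        # Allan deviation straight from back-to-back frequency data
        return np.sqrt(0.5 * np.mean(np.diff(y)**2))

    true2 = adev_freq(0.5 * (y[0::2] + y[1::2]))  # true tau = 2 s, zero dead time
    naive2 = adev_freq(y[::2])        # measure one, skip one, call it tau0 = 2 s
    print(naive2 / true2)             # close to sqrt(2) for white FM

Other noise types bias differently, which is exactly what the B1/B2
tables quantify.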

 Before zero dead-time counters were available, a setup of two counters
 was used so that they were interleaved, so the dead-time was the measure
 time of the other.
 
 I could look at doing that perhaps.

You should have two counters of equivalent performance, preferably same 
model. It's a rather expensive approach IMHO.

Have a look at the possibility of picking up a HP 5371A or 5372A. You 
can usually snag one for about 600 USD or 1000 USD respectively on Ebay.

Cheers,
Magnus


Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Magnus Danielson
Bruce Griffiths wrote:
 Rex wrote:
 Bruce Griffiths wrote:
   
 ...

 Brice

   
 
 An impostor? An alias? :-)


   
 And I thought I was alluding to aliasing of the phase noise spectrum not
 the characters of the alphabet.

So it is not a case of shot noise in Bruce's fingers? :)
I know mine have some, and besides that there are several bugs in the 
language unit...

Cheers,
Magnus



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Bruce Griffiths
Hej Magnus

Magnus Danielson wrote:
 Bruce Griffiths wrote:
   
 Rex wrote:
 
 Bruce Griffiths wrote:
   
   
 ...

 Brice

   
 
 
 An impostor? An alias? :-)


   
   
 And I thought I was alluding to aliasing of the phase noise spectrum not
 the characters of the alphabet.
 

 So it is not a case of shot noise of Bruce fingers? :)
 I know mine has some, and besides that there are several bugs in the 
 language unit...

 Cheers,
 Magnus


   
More a case of digital jitter.
Perhaps the control system phase noise was too high.

Bruce



Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Steve Rooke
Bruce,

2009/4/12 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 Steve Rooke wrote:
 If I take two sequential phase readings from an input source and place
 this into one data set and another two readings from the same source
 but spaced by one cycle and put this in a second data set. From the
 first data set I can calculate ADEV for tau = 1s and can calculate
 ADEV for tau = 2 sec from the second data set. If I now pre-process
 the data in the second set to remove all the effects of drift (given
 that I have already determined this), I now have two 1 sec samples
 which show a statistical difference and can be fed to ADEV with a tau0
 = 1 sec producing a result for tau = 1 sec. The results from this
 second calculation should show equal accuracy as that using the first
 data set (given the limited size of the data set).


 You need to give far more detail as it's unclear exactly what you are
 doing with what samples.
 Label all the phase samples and then show which samples belong to which
 data set.
 Also need to show clearly what you mean by skipping a cycle.

Say I have a 1Hz input source and my counter measures the period of
the first cycle and assigns this to A1. At the end of the first cycle
the counter is able to be reset and re-triggered to capture the second
cycle and assign this to A2. So far 2 sec have passed and I have two
readings in data set A.

I now repeat the experiment and assign the measurement of the first
period to B1. The counter I am using this time is unable to stop at
the end of the first measurement and retrigger immediately so I'm
unable to measure the second cycle but it is left in the armed position.
When the third cycle starts, the counter triggers and completes the
measurement of the third cycle which is now assigned to B2.

For the purposes of my original text, the first data set refers to A1
& A2. Similarly the second data set refers to B1 & B2. Reference to
pre-processing of the second data set refers to mathematically
removing the effects of drift from B1 & B2 to produce a third data set
which is used as the data input for an ADEV calculation where tau0 = 1
sec with output of tau = 1 sec.


 I now collect a large data set but with a single cycle skipped between
 each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
 results >= 2 sec. I then pre-process the data to remove any drift and
 feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
 result. I now have a complete set of results for tau = 1 sec. Agreed,
 there is the issue of modulation at 1/2 input f but ignoring this for
 the moment, this should give a valid result.


 Again you need to give more detail.

In this case the data set is constructed from the measurement of the
cycle periods of a 1Hz input source where even cycles are skipped,
hence each data point is a measurement of the period of each odd (1,
3, 5, 7...) cycle of the incoming waveform. In this case the time
between each measurement is 2 sec, so ADEV is calculated with tau0 = 2
sec for tau >= 2 sec. This data set is then mathematically processed
to remove the effects of drift, bearing in mind the 2 sec spacing of
each data point, and ADEV is then calculated with tau0 = 1 sec for tau
= 1 sec.
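
The drift-removal step being described might look like this (a sketch,
not anyone's actual script; the file name is hypothetical):

    import numpy as np

    p = np.loadtxt("odd_cycle_periods.txt")   # periods of the odd cycles
    t = 2.0 * np.arange(len(p))               # timestamps at 2 sec spacing
    slope, intercept = np.polyfit(t, p, 1)    # least-squares linear drift fit
    p_detrended = p - (intercept + slope * t) # drift-free residuals for ADEV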


 Now indulge me while I have a flight of fantasy.

 As the effects of jitter and phase noise will produce a statistical
 distribution of measurements, any results from these ADEV calculations
 will be limited in accuracy by the size of the data set. Only if we
 sample for a very long time will we see the very limits of the effects
 of noise.

 What noise from what source?

PN - white phase noise (WPM), flicker phase noise (FPM), white
frequency noise (WFM), flicker frequency noise (FFM) and random-walk
frequency noise (RWFM).

 Noise in such measurements can originate in the measuring instrument or
 the source.

Indeed, and this is an important aspect to consider as we have been
discussing the effects of induced jitter/PN to a frequency standard
when it is buffered and divided down. Ideally measurements of ADEV
would be made on the raw frequency standard source (e.g. 10 MHz) rather
than, say, a divided 1Hz signal.

 For short measurement times quantisation noise and instrumental noise
 may mask the noise from the source but they are still present.

Well, these form the noise floor of our measurement system.



 The samples which deviate the most from the median will
 occur very infrequently and it is statistically likely that they will
 not occur adjacent to another highly deviated sample. We could
 pre-process the data to remove all drift and then sort it into an
 array of increasing size. This would give the greatest deviations at
 each end of the array. For 1 sec stability the deviation would be the
 greatest difference from the median of the first and last samples in
 the array. For a 2 sec stability, this same calculation could be made
 taking the first two and last two readings in the array and
 calculating their difference from 2 x the median. This 

Re: [time-nuts] Characterising frequency standards

2009-04-12 Thread Steve Rooke
2009/4/13 Magnus Danielson mag...@rubidium.dyndns.org:
 Dead time is when the counter loses track of time in between two
 consecutive measurements. A zero dead-time counter uses the stop of one
 measure as the start of the next measure.

 This becomes very important when the data to be measured has a degree
 of randomness and it is therefore important to capture all the data
 without any dead time. In the case of measurements of phase error in
 an oscillator, it should be possible to miss some data points provided
 that the frequency of capture is still known (assuming that accuracy
 of drift measurements is required).

 Depending on the dominant noise type, the ADEV measure will be biased.

If the noise has a component related to the measurement frequency,
agreed, but I have already commented on that before.

 Indeed, there would be a loss of statistical data but this could be
 made up by sampling over a period of twice the time. This system is
 blind to noise at 1/2 f but ways and means could be taken to account
 for that, i.e. taking two data sets with a single cycle space between
 them or taking another small data set with 2 cycles skipped between
 each measurement.

 Actually, you can take any number of 2-cycle measures and still be unable
 to detect the 1/2 f oscillation. In order to be able to detect it you will
 need to take 2 measures and be able to make an odd number of cycles of
 trigger difference between them to have a chance.

Agreed.

 The trouble is that the modulation is at the Nyquist frequency of the 1
 cycle data, so it will fold down to DC on sampling it at half-rate.
 Canceling it from other DC offset errors could be challenging.

Comparing the frequency calculated from the data would show a 2Hz
offset with the fundamental frequency of the source.

 Sampling it at 1/3 rate would discover it though.

Agreed.

 I'm looking at what can be achieved by a budget-strapped amateur who
 would have trouble purchasing a newer counter capable of measuring
 with zero dead time.

 Believe me, that's where I am too. Patience and saving money for things
 I really want and allowing accumulation over time has allowed me some
 pretty fancy tools in my private lab. In fact I have to lend some of my
 gear to commercial labs as I outperform them...

Well, that's a goal for me but I'm looking at what is achievable in
the short term instead of sitting on my hands.

 I recalled wrong. You should look for Barnes' "Tables of Bias Functions,
 B1 and B2, for Variance Based on Finite Samples of Processes with Power
 Law Spectral Densities", NBS Technical Note 375, January 1969, as well
 as Barnes and Allan's "Variance Based on Data with Dead Time Between the
 Measurements", NIST Technical Note 1318, 1990.

 A short intro to the subject is found in NIST Special Publication 1065 by
 W.J. Riley, as found on http://www.wriley.com along with other excellent
 material. The good thing about that material is that he gives good
 references, as one should.

Thanks for the pointer.

 I could look at doing that perhaps.

 You should have two counters of equivalent performance, preferably same
 model. It's a rather expensive approach IMHO.

It may still be cheaper than the purchase of a counter capable of
continuous collection, especially if you already have a counter that
is capable at 1/2 f.

 Have a look at the possibility of picking up a HP 5371A or 5372A. You
 can usually snag one for about 600 USD or 1000 USD respectively on Ebay.

I'd have to be a really good boy for Santa to bring me something of
that ilk. Perhaps the lotto will come up one day :-)

73,
Steve

 Cheers,
 Magnus





-- 
Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Ulrich Bangert
Steve,

 I think the penny has dropped now, thanks. It's interesting 
 that the ADEV calculation still works even without continuous 
 data as all the reading I have done has led me to believe this 
 was sacrosanct.

The penny may be falling but it is not completely dropped: Of course you can
feed your ADEV calculation with every second sample removed and setting Tau0
= 2. And of course you receive a result that now is in harmony with your
"all samples / Tau0 = 1 s" computation. Had you done frequency measurements,
the reason for this apparent harmony is that your counter does not show
significantly different behaviour whether set to 1 s gate time or alternate 2
second gate time.

Nevertheless leaving every second sample out is NOT exactly the same as
continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
DEAD TIME of 1 s. There are dead time correction schemes available in the
literature.

Best regards
Ulrich Bangert   

 -Original Message-
 From: time-nuts-boun...@febo.com 
 [mailto:time-nuts-boun...@febo.com] On behalf of Steve Rooke
 Sent: Thursday, 9 April 2009 14:00
 To: Tom Van Baak; Discussion of precise time and frequency measurement
 Subject: Re: [time-nuts] Characterising frequency standards
 
 
 Tom,
 
 2009/4/9 Tom Van Baak t...@leapsecond.com:
  The first argument to the adev1 program is the sampling 
 interval t0. 
  The program doesn't know how far apart the input file samples are 
  taken so it is your job to specify this. The default is 1 second.
 
  If you have data taken one second apart then t0 = 1.
  If you have data taken two seconds apart then t0 = 2.
  If you have data taken 60 seconds apart then t0 = 60, etc.
 
  If, as in your case, you take raw one second data and remove every 
  other sample (a perfectly valid thing to do), then t0 = 2.
 
  Make sense now? It's still continuous data in the sense that all 
  measurements are a fixed interval apart. But in any ADEV 
 calculation 
  you have to specify the raw data interval.
 
 I think the penny has dropped now, thanks. It's interesting 
 that the ADEV calculation still works even without continuous 
 data as all the reading I have done has led me to believe this 
 was sacrosanct.
 
 What I now believe is that it's possible to measure 
 oscillator performance with less than optimal test gear. This 
 will enable me to see the effects of any experiments I make 
 in the future. If you can't measure it, how can you know that 
 what you're doing is good or bad.
 
 73,
 Steve
 -- 
 Steve Rooke - ZL3TUV  G8KVD  JAKDTTNW
 Omnium finis imminet
 




Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Steve Rooke
Ulrich,

2009/4/10 Ulrich Bangert df...@ulrich-bangert.de:
 Steve,

 I think the penny has dropped now, thanks. It's interesting
 that the ADEV calculation still works even without continuous
 data as all the reading I have done has led me to believe this
 was sacrosanct.

 The penny may be falling but it is not completely dropped: Of course you can
 feed your ADEV calculation with every second sample removed and setting Tau0
 = 2. And of course you receive a result that now is in harmony with your
 "all samples / Tau0 = 1 s" computation. Had you done frequency measurements,
 the reason for this apparent harmony is that your counter does not show
 significantly different behaviour whether set to 1 s gate time or alternate 2
 second gate time.

So why would my counter show any significant differences between a 1
sec or 2 sec gate time?

 Nevertheless leaving every second sample out is NOT exactly the same as
 continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1 s. There are dead time correction schemes available in the
 literature.

I've just done a Google search for "dead time correction scheme" and I
just turn up results relating to particle physics where it seems
measurements are unable to keep up with the flow of data, hence the
need to factor in the dead time of the system. This form of application
does not appear to correlate with the measurement of plain
oscillators. Yes there is dead time, per se, but I fail to see how
this can detract significantly from continuous data given a sufficient
data set size (as for a total measurement time).

I guess what we need is a real data set which would show whether this
form of ADEV calculation produces incorrect results, i.e. the proof of
the pudding is in the eating.
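
In the spirit of pudding-eating, here is a minimal sketch in C (my own
toy, not Tom's adev1; the white-FM level and the sizes are invented)
that builds a synthetic 1 s phase record, throws away every other
point, and computes the overlapping ADEV at tau = 2 s both ways:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* overlapping Allan deviation at tau = m*tau0, from phase samples x[0..n-1] */
static double oadev(const double *x, int n, double tau0, int m)
{
    double sum = 0.0, d, tau = m * tau0;
    int i, terms = n - 2 * m;
    if (terms < 1) return 0.0;
    for (i = 0; i < terms; i++) {
        d = x[i + 2 * m] - 2.0 * x[i + m] + x[i];
        sum += d * d;
    }
    return sqrt(sum / (2.0 * (double)terms * tau * tau));
}

/* crude Gaussian deviate (Box-Muller) */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

int main(void)
{
    enum { N = 400000 };
    static double x[N], xd[N / 2];
    int k;

    x[0] = 0.0;
    for (k = 1; k < N; k++)
        x[k] = x[k - 1] + 1e-9 * gauss();   /* white FM, 1 s samples */

    for (k = 0; k < N / 2; k++)
        xd[k] = x[2 * k];                   /* keep every other sample */

    printf("full data, tau0 = 1 s, ADEV(2 s): %.4e\n", oadev(x, N, 1.0, 2));
    printf("decimated, tau0 = 2 s, ADEV(2 s): %.4e\n", oadev(xd, N / 2, 2.0, 1));
    return 0;
}

On pure white FM the two printed values agree to within sampling
statistics; the decimated series simply uses fewer second differences.
A strong periodic disturbance near the 2 s period is where the two
treatments would part company.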

73,
Steve

 Best regards
 Ulrich Bangert





-- 
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Steve Rooke
Tom,

2009/4/10 Tom Van Baak t...@leapsecond.com:
 We need to be careful about what you mean by continuous.
 Let me probe a bit further to make sure you or others understand.

My reference to continuous data would be defined as measurements
over a specific sampling period with each sample following directly
after the previous one. This seems to be what is generally required for
the calculation of ADEV in the literature and postings on this group,
such that techniques like the picket fence are suggested as a way to
deduce continuous data when using instruments that are unable to
measure sequential cycles of the input.

 The data that you first mentioned, some GPS and OCXO data at:
    http://www.leapsecond.com/pages/gpsdo-sim
 was recorded once per second, for 400,000 samples without any
 interruption; that's over 4 days of continuous data.

 As you see it is very possible to extract every other, or every 10th,
 every 60th, or every Nth point from this large data set to create a
 smaller data set.

 Is it as if you had several counters all connected to the same DUT.
 Perhaps one makes a new phase measurement each second,
 another makes a measurement every 10 seconds; maybe a third
 counter just measures once a minute.

 The key here is not how often they make measurements, but that
 they all keep running at their particular rate.

Agreed.

 The data sets you get from these counters all represent 4 days
 of measurement; what changes is the measurement interval, the
 tau0, or whatever your ADEV tool calls it.

 Now the ADEV plots you get from these counters will all match
 perfectly with the only exception being that the every-60 second
 counter cannot give you any ADEV points for tau less than 60;
 the every-10 second counter cannot give you points for tau less
 than 10 seconds; and for that matter; the every 1-second counter
 cannot give you points for tau less than 1 second.

It is certainly true that 1 second sampled data collected at 60 second
intervals cannot be fed into an ADEV calculation as having a tau of 1
sec, as the resultant calculation will show incorrect results when
effects like drift are a factor. If the data set is pre-processed and
corrected for such effects as drift, I believe it should be possible
to feed this discontinuous data in as continuous data for the
measurement of short tau with reasonable accuracy.
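
One crude way to do that pre-processing, as a sketch only (assuming
y[] holds the fractional frequency samples and that a straight
least-squares line is an adequate drift model), is:

/* remove a fitted linear drift y ~ a + b*k from n frequency samples */
static void detrend(double *y, int n)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0, a, b;
    int k;
    for (k = 0; k < n; k++) {
        sx  += k;
        sy  += y[k];
        sxx += (double)k * k;
        sxy += k * y[k];
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    a = (sy - b * sx) / n;
    for (k = 0; k < n; k++)
        y[k] -= a + b * k;      /* residuals go on to the ADEV calculation */
}

Removing a deterministic drift like this is uncontroversial; the
contested part is treating the 59 unobserved seconds in every minute
as if they had been measured.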

 So what makes all these continuous is that the runs were not
 interrupted and that the data points were taken at regular intervals.

 The x-axis of an ADEV plot spans a logarithmic range of tau. The
 farthest point on the *right* is limited by how long your run was. If
 you collect data for 4 or 5 days you can compute and plot points
 out to around 1 day or 10^5 seconds.

 On the other hand, the farthest point on the *left* is limited by how
 fast you collect data. If you collect one point every 10 seconds,
 then tau=10 is your left-most point. Yes, it's common to collect data
 every second; in this case you can plot down to tau=1s. Some of
 my instruments can collect phase data at 1000 points per second
 (huge files!) and this means my leftmost ADEV point is 1 millisecond.

I guess it really depends on what level your measurement system is
able to work at. For, say, the output of a 10MHz OCXO it would be
desirable to measure the source frequency directly, although that would
require a fast measurement system and significant storage. The benefit
of this is that the input source is not degraded in the process of
division down to a more manageable frequency. We are currently
discussing the effects of the introduction of noise into frequency
standards just with distribution amplifiers and dividers. The ability
to measure such close-in noise effects would indeed be a great bonus
and I envy your ability to perform that.

 Here's an example of collecting data at 10 Hz:
 http://www.leapsecond.com/pages/gpsdo/
 You can see this allows me to plot from ADEV tau = 0.1 s.

 Does all this make sense now?

Yes, I understand.

 What I now believe is that it's possible to measure oscillator
 performance with less than optimal test gear. This will enable me to
 see the effects of any experiments I make in the future. If you can't
 measure it, how can you know that what you're doing is good or bad?

 Very true. So what one or several performance measurements
 are you after?

Well there are a number of them. The selection of the best free-running
OCXOs. The effects of locking an OCXO to GPS and the tuning of this.
Running an OCXO in active holdover mode. I'd like to separate the
effects of temperature, rate of change of temperature, aging,
humidity, atmospheric pressure and, possibly, gravity on a
free-running OCXO. By changing just one variable at a time, I'd like
to measure the effects of each one with respect to determining the
correction required from a holdover circuit. Agreed, some of these are
simply defined as frequency change in the oscillator but I will wish
to measure the full system performance and need some form of

Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Ulrich Bangert
Steve,

 So why would my counter show any significant differences 
 between a 1 sec or 2 sec gate time?

suppose your source has a 0.5 Hz frequency modulation. Would you see it with
a 2 s gate time or an integer multiple of it? Would you notice it with a 1 s
gate time or an odd integer multiple of it?
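
To make that concrete, here is a C toy (the amplitude and names are
invented) where the phase is x(t) = A*sin(2*pi*0.5*t) and the mean
frequency offset a counter reports over a gate T starting at t is
(x(t+T) - x(t))/T:

#include <stdio.h>
#include <math.h>

static double x(double t)
{
    const double A = 1e-9;      /* made-up modulation amplitude, seconds */
    return A * sin(2.0 * 3.141592653589793 * 0.5 * t);
}

int main(void)
{
    double t;
    /* start the gates away from the modulation's zero crossings */
    for (t = 0.25; t < 4.0; t += 1.0)
        printf("t=%.2f  y(1 s gate)=% .3e  y(2 s gate)=% .3e\n",
               t, x(t + 1.0) - x(t), (x(t + 2.0) - x(t)) / 2.0);
    return 0;
}

Every 2 s gate straddles exactly one full modulation cycle, so that
column prints zeros; the 1 s gates see the modulation at full strength
with alternating sign.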

 I've just done a Google search for dead time correction 
 scheme and it just turns up results relating to particle 
 physics where it seems measurements are unable to keep up 
 with the flow of data, hence the need to factor in the dead 
 time of the system. 

Google for the Stable32 manual. THIS literature will bring you a lot
further; it has many well documented source examples in Forth and PL/I, hi.
For example you may look here:

http://www.wriley.com/

Best regards
Ulrich Bangert


Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Tom Van Baak
 Nevertheless leaving every second sample out is NOT exactly the same as
 continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1 s. There are dead time correction schemes available in the
 literature.

Ulrich, and Steve,

Wait, are we talking phase measurements here or frequency
measurements? My assumption with this thread is that Steve
is simply taking phase (time error) measurements, as in my
GPS raw data page, in which case there is no such thing as
dead time.
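
(With time-error samples x_i taken every tau0, the frequency values an
ADEV routine forms are y_i = (x_{i+1} - x_i)/tau0; consecutive y_i share
their endpoints by construction, so there is no unobserved interval for
dead time to hide in.)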

/tvb




Re: [time-nuts] Characterising frequency standards

2009-04-10 Thread Magnus Danielson
Tom Van Baak wrote:
 Nevertheless leaving every second sample out is NOT exactly the same as
 continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
 DEAD TIME of 1 s. There are dead time correction schemes available in the
 literature.
 
 Ulrich, and Steve,
 
 Wait, are we talking phase measurements here or frequency
 measurements? My assumption with this thread is that Steve
 is simply taking phase (time error) measurements, as in my
 GPS raw data page, in which case there is no such thing as
 dead time.

I agree. I was also considering this earlier but put my mind to rest by 
assuming phase/time samples.

Dead time is when the counter loses track of time in between two 
consecutive measurements. A zero dead-time counter uses the stop of one 
measurement as the start of the next.

If you have a series of time-error values taken each second and then 
drop every other sample and just recall that the time between the 
samples is now 2 seconds, then the tau0 has become 2 s without causing 
dead-time. However, had the original data been kept, the statistical 
properties would be better, unless there is a strong repetitive 
disturbance at a 2 s period, in which case it would be filtered out.

For an example where one does get dead-time, consider a frequency 
counter which measures frequency with a gate time of, say, 2 s but 
takes 300 ms before it re-arms and starts the next measurement. Two 
consecutive samples will then have 2.3 s between their start points and 
actually span 4.3 seconds rather than 4 seconds. When doing Allan 
deviation calculations on such a measurement series, it will be biased; 
the bias may be compensated, but these days counters with zero 
dead-time are readily available, or the problem can be avoided by 
careful consideration.

I believe Greenhall made some extensive analysis of the biasing of 
dead-time, so it should be available from the NIST FT online library.

Before zero dead-time counters were available, a setup of two counters 
was used, interleaved so that the dead-time of one was the measurement 
time of the other.

I can collect some references to dead-time articles if anyone needs them. 
I'd be happy to.

Cheers,
Magnus



Re: [time-nuts] Characterising frequency standards

2009-04-09 Thread Steve Rooke
Bruce,

2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Doesn't that imply that the data point should correspond to the whole
 sampling period and not just half of it?


 The total measurement time is only decreased by 1 sec at most if you
 delete every second line.
 The resampled data now has a sampling interval of 2 sec for the entire
 measurement time.

 The original data samples are phase differences measured on the second
 every second.
 The resampled data are phase differences measured every 2 seconds on the
 corresponding second transition.

OK, I'm ready to be shot down on this but from what I can see right
now the measurement period of 2 sec should be maintained to satisfy
the measurement of drift, which would otherwise be incorrectly
interpreted if I processed 400000 sec of data as only 200000 sec. I
can see that the noise on the data can be broken down into two major
groups: drift, and what I would really see as noise, i.e. PN, flicker,
random, etc. I guess I have been ignoring the whole drift component
with my missing data used for the ADEV plots. The point to me though
is that, even with the reduced data, an ADEV plot should be able to
characterise 'noise' for the actual sampling duration of the data, i.e.
1 sec. What would obviously be incorrect is the effect of drift, which
should logically show up as being twice as great. Maybe my idea of
using the 1 sec sampling period would work out better with HDEV.
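
For reference, the overlapping Hadamard deviation is only a
third-difference variant of the same recipe (a sketch of the textbook
form; x[] holds phase samples spaced tau0, and tau = m*tau0):

#include <math.h>

/* overlapping Hadamard deviation at tau = m*tau0, from phase data */
static double ohdev(const double *x, int n, double tau0, int m)
{
    double sum = 0.0, d, tau = m * tau0;
    int i, terms = n - 3 * m;
    if (terms < 1) return 0.0;
    for (i = 0; i < terms; i++) {
        d = x[i + 3*m] - 3.0*x[i + 2*m] + 3.0*x[i + m] - x[i];
        sum += d * d;
    }
    return sqrt(sum / (6.0 * (double)terms * tau * tau));
}

The third difference of a quadratic phase ramp is zero, so a constant
linear frequency drift drops out of HDEV where ADEV folds it in.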

73,
Steve
-- 
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-09 Thread Steve Rooke
Tom,

2009/4/9 Tom Van Baak t...@leapsecond.com:
 The first argument to the adev1 program is the sampling interval t0.
 The program doesn't know how far apart the input file samples are
 taken so it is your job to specify this. The default is 1 second.

 If you have data taken one second apart then t0 = 1.
 If you have data taken two seconds apart then t0 = 2.
 If you have data taken 60 seconds apart then t0 = 60, etc.

 If, as in your case, you take raw one second data and remove
 every other sample (a perfectly valid thing to do), then t0 = 2.

 Make sense now? It's still continuous data in the sense that all
 measurements are a fixed interval apart. But in any ADEV
 calculation you have to specify the raw data interval.

I think the penny has dropped now, thanks. It's interesting that the
ADEV calculation still works even without continuous data, as all the
reading I have done has led me to believe this was sacrosanct.

What I now believe is that it's possible to measure oscillator
performance with less than optimal test gear. This will enable me to
see the effects of any experiments I make in the future. If you can't
measure it, how can you know that what you're doing is good or bad?

73,
Steve
-- 
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet



Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Tom,

I understand fully the points that you have made but I have obviously
not made my point clear to all and I apologise for my poor
communication skills.

This is what I'm getting at:

Using your adev1.exe from http://www.leapsecond.com/tools/adev1.htm
and processing various forms of gps.dat from
http://www.leapsecond.com/pages/gpsdo-sim/gps.dat.gz.

C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 < gps.dat

** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 400000
** Normal and Overlapping Allan deviation:

     1 tau, 3.0127e-009 adev(n=399998),  3.0127e-009 oadev(n=399998)
     2 tau, 1.5110e-009 adev(n=199998),  1.5119e-009 oadev(n=399996)
     5 tau, 6.2107e-010 adev(n=79998),   6.1983e-010 oadev(n=399990)
    10 tau, 3.1578e-010 adev(n=39998),   3.1549e-010 oadev(n=399980)
    20 tau, 1.6531e-010 adev(n=19998),   1.6534e-010 oadev(n=399960)
    50 tau, 7.2513e-011 adev(n=7998),    7.3531e-011 oadev(n=399900)
   100 tau, 4.0029e-011 adev(n=3998),    4.0618e-011 oadev(n=399800)
   200 tau, 2.1512e-011 adev(n=1998),    2.1633e-011 oadev(n=399600)
   500 tau, 9.2193e-012 adev(n=798),     9.1630e-012 oadev(n=399000)
  1000 tau, 4.9719e-012 adev(n=398),     4.7750e-012 oadev(n=398000)
  2000 tau, 2.6742e-012 adev(n=198),     2.5214e-012 oadev(n=396000)
  5000 tau, 1.0010e-012 adev(n=78),      1.1032e-012 oadev(n=390000)
 10000 tau, 6.1333e-013 adev(n=38),      6.1039e-013 oadev(n=380000)
 20000 tau, 3.8162e-013 adev(n=18),      3.2913e-013 oadev(n=360000)
 50000 tau, 1.0228e-013 adev(n=6),       1.5074e-013 oadev(n=300000)
100000 tau, 5.8577e-014 adev(n=2),       6.7597e-014 oadev(n=200000)

So far, so good. Now I delete every even line in the file, which leaves
me with 200000 lines of data (400000 lines in the original gps.dat file).
(awk 'and(NR, 1) == 0 {print}' gps.dat > gps1.dat)

C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 < gps1.dat

** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 200000
** Normal and Overlapping Allan deviation:

     1 tau, 3.0257e-009 adev(n=199998),  3.0257e-009 oadev(n=199998)
     2 tau, 1.5373e-009 adev(n=99998),   1.5345e-009 oadev(n=199996)
     5 tau, 6.3147e-010 adev(n=39998),   6.3057e-010 oadev(n=199990)
    10 tau, 3.3140e-010 adev(n=19998),   3.3067e-010 oadev(n=199980)
    20 tau, 1.7872e-010 adev(n=9998),    1.7810e-010 oadev(n=199960)
    50 tau, 7.9428e-011 adev(n=3998),    8.1216e-011 oadev(n=199900)
   100 tau, 4.2352e-011 adev(n=1998),    4.3265e-011 oadev(n=199800)
   200 tau, 2.2001e-011 adev(n=998),     2.2593e-011 oadev(n=199600)
   500 tau, 9.6853e-012 adev(n=398),     9.5441e-012 oadev(n=199000)
  1000 tau, 5.0139e-012 adev(n=198),     5.0387e-012 oadev(n=198000)
  2000 tau, 2.7994e-012 adev(n=98),      2.7090e-012 oadev(n=196000)
  5000 tau, 1.4280e-012 adev(n=38),      1.2214e-012 oadev(n=190000)
 10000 tau, 7.4881e-013 adev(n=18),      6.5814e-013 oadev(n=180000)
 20000 tau, 7.6518e-013 adev(n=8),       3.7253e-013 oadev(n=160000)
 50000 tau, 2.4698e-014 adev(n=2),       1.3539e-013 oadev(n=100000)

Obviously we don't have enough data now for a measurement of 100000
tau, but the results for the other tau are quite close, especially when
there are sufficient data points. Now this is discontinuous data,
exactly what I was trying to allude to.

OK, so now I take only the top 200000 lines of the gps.dat file (head
-200000 gps.dat > gps2.dat)

C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 < gps2.dat

** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 200000
** Normal and Overlapping Allan deviation:

     1 tau, 3.0411e-009 adev(n=199998),  3.0411e-009 oadev(n=199998)
     2 tau, 1.4985e-009 adev(n=99998),   1.4999e-009 oadev(n=199996)
     5 tau, 6.1964e-010 adev(n=39998),   6.2010e-010 oadev(n=199990)
    10 tau, 3.1315e-010 adev(n=19998),   3.1339e-010 oadev(n=199980)
    20 tau, 1.6499e-010 adev(n=9998),    1.6495e-010 oadev(n=199960)
    50 tau, 7.1425e-011 adev(n=3998),    7.3416e-011 oadev(n=199900)
   100 tau, 3.9940e-011 adev(n=1998),    4.0730e-011 oadev(n=199800)
   200 tau, 2.1488e-011 adev(n=998),     2.1558e-011 oadev(n=199600)
   500 tau, 8.4809e-012 adev(n=398),     9.0886e-012 oadev(n=199000)
  1000 tau, 4.9223e-012 adev(n=198),     4.7104e-012 oadev(n=198000)
  2000 tau, 2.4335e-012 adev(n=98),      2.4515e-012 oadev(n=196000)
  5000 tau, 1.0308e-012 adev(n=38),      1.0861e-012 oadev(n=190000)
 10000 tau, 5.9504e-013 adev(n=18),      6.1031e-013 oadev(n=180000)
 20000 tau, 3.6277e-013 adev(n=8),       3.1994e-013 oadev(n=160000)
 50000 tau, 1.0630e-013 adev(n=2),       1.6715e-013 oadev(n=100000)

Are there any Linux tools for calculating ADEV, as I'm having to run
Windows in a VMware session?

73,
Steve

2009/4/8 Tom Van Baak t...@leapsecond.com:
 Steve,

 You've asked a 

Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Bruce Griffiths
Steve

It can't; it must be a matter of interpretation.
Perhaps it means something like:

1 tau means tau = 1x the interval between consecutive measurements.
2 tau means tau = 2x the interval between consecutive measurements.

100000 tau means tau = 100,000x the interval between consecutive
measurements.

Bruce

Steve Rooke wrote:
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:
   
 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Bruce,

But how does that explain the output of Tom's adev1 program which
still seems to give a good measurement at tau = 1s?

73,
Steve

2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Bruce,

I hear what you say but the results seem to correlate quite well:-

1 tau, 3.0127e-009 adev(n=399998),  3.0127e-009 oadev(n=399998)
1 tau, 3.0257e-009 adev(n=199998),  3.0257e-009 oadev(n=199998)

And using the first half of the data:-

1 tau, 3.0411e-009 adev(n=199998),  3.0411e-009 oadev(n=199998)

So I'm trying to understand why this won't work.

73,
Steve

2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 It can't; it must be a matter of interpretation.
 Perhaps it means something like:

 1 tau means tau = 1x the interval between consecutive measurements.
 2 tau means tau = 2x the interval between consecutive measurements.

 100000 tau means tau = 100,000x the interval between consecutive
 measurements.

 Bruce

 Steve Rooke wrote:
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:

 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Bruce Griffiths
Steve

The data file doesn't include the time interval between samples so do
you set this in some way?
If so you need to set it to 1s for the unaltered data, to 2s when you
take every 2nd sample, and 1s when you take the first 200,000 samples.

In principle you could use CANVAS (available on request from USNO -
however you may have to wait a few days while they decide whether to
grant your request.) for such analysis in Linux but you would then need
the Linux version of Matlab.
Or you could request that it be compiled for Linux - a fairly simple
task if one has the Linux version of Matlab.

In principle you should be able to port the .m source files to Scilab,
but there are some subtle differences between Scilab and Matlab so this
may take a while.

Bruce

Steve Rooke wrote:
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:
   
 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce

 Steve Rooke wrote:
 

 So far, so good. Now I delete every even line in the file, which leaves
 me with 200000 lines of data (400000 lines in the original gps.dat file).
 (awk 'and(NR, 1) == 0 {print}' gps.dat > gps1.dat)

 C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 < gps1.dat

 ** Sampling period: 1 s
INCORRECT!!
sampling period is now 2s.

Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Bruce,

I set nothing; as indicated in my text, I just delete data points, i.e.
a file of 400000 records now becomes 200000. I'm trying to get my head
round this as the absolute requirement for continuous data seems
unneeded. What you have to remember here is that the data set I'm
working with consists of discrete measurements of the period of each
pulse. If they were timestamps, then there would be problems.
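
For what it's worth, converting such period readings into time-error
samples is just an accumulation; a sketch in C, where P[], x[] and the
1.0 s nominal period are my guesses at the data at hand:

static void periods_to_phase(const double *P, double *x, int n)
{
    int k;
    x[0] = 0.0;                             /* arbitrary starting epoch */
    for (k = 0; k < n; k++)                 /* P[k]: k-th measured period, s */
        x[k + 1] = x[k] + (P[k] - 1.0);     /* time error vs. ideal 1 s */
}

The sum only reproduces the true timestamps if every period was
captured, of course; a skipped period leaves a gap the bookkeeping
cannot see.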

I don't know how much MATLAB costs but I would guess it is way out of my budget.

73,
Steve

2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 The data file doesn't include the time interval between samples so do
 you set this in some way?
 If so you need to set it to 1s for the unaltered data, to 2s when you
 take every 2nd sample, and 1s when you take the first 200,000 samples.

 In principle you could use CANVAS (available on request from USNO -
 however you may have to wait a few days while they decide whether to
 grant your request.) for such analysis in Linux but you would then need
 the Linux version of Matlab.
 Or you could request that it be compiled for Linux - a fairly simple
 task if one has the Linux version of Matlab.

 In principle you should be able to port the m source files to Scilab,
 but there are some subtle differences between Scilab and Matlab so this
 may take a while.

 Bruce

 Steve Rooke wrote:
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:

 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Bruce Griffiths
Steve

Therein lies your problem.
adev1 defaults to a sampling interval of 1 sec (read the C source code).
TvB's documentation explicitly states that you should supply the
sampling interval (it's a command line argument for adev1.c).

adev1.c is a simple command line program that uses stdin and stdout, so
porting it to a Linux command line (non-graphical) program should be
straightforward.
You can even use redirection and pipes should you need them.
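
Under Linux that might look like the following (assuming the port keeps
the same tau0 argument and stdin convention):

./adev1 1 < gps.dat
awk 'NR % 2 == 1 {print}' gps.dat | ./adev1 2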

You can try porting it to Scilab which is free courtesy of the French
Government.


Bruce

Steve Rooke wrote:
 Bruce,

 I set nothing; as indicated in my text, I just delete data points, i.e.
 a file of 400000 records now becomes 200000. I'm trying to get my head
 round this as the absolute requirement for continuous data seems
 unneeded. What you have to remember here is that the data set I'm
 working with consists of discrete measurements of the period of each
 pulse. If they were timestamps, then there would be problems.

 I don't know how much MATLAB costs but I would guess it is way out of my 
 budget.

 73,
 Steve

 2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
   
 Steve

 The data file doesn't include the time interval between samples so do
 you set this in some way?
 If so you need to set it to 1s for the unaltered data, to 2s when you
 take every 2nd sample, and 1s when you take the first 200,000 samples.

 In principle you could use CANVAS (available on request from USNO -
 however you may have to wait a few days while they decide whether to
 grant your request.) for such analysis in Linux but you would then need
 the Linux version of Matlab.
 Or you could request that it be compiled for Linux - a fairly simple
 task if one has the Linux version of Matlab.

 In principle you should be able to port the m source files to Scilab,
 but there are some subtle differences between Scilab and Matlab so this
 may take a while.

 Bruce

 Steve Rooke wrote:
 
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:

   
 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread John Ackermann N8UR
I've compiled adev1 under Linux with no changes required; don't recall 
the exact gcc line I used but it was pretty much the obvious one.
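(Presumably something along the lines of gcc -o adev1 adev1.c -lm.)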

Steve, one other point -- your results with every sample versus 
every-other-sample aren't hugely different because ADEV doesn't usually 
change dramatically over very short differences in tau (unless there's 
some sort of periodicity in the noise). So it would not be unusual to 
see that the result for tau = 2 seconds (what you got when you removed 
every other sample) is only slightly different from that for tau = 1 second.

John

Bruce Griffiths wrote:
 Steve
 
 Therein lies your problem.
 adev1 defaults to a sampling interval of 1 sec (read the C source code).
 TvB's documentation explicitly states that you should supply the
 sampling interval (it's a command line argument for adev1.c).
 
 adev1.c is a simple command line program that uses stdin and stdout so
 porting it to a linux command line (non graphical) program should be
 straightforward.
 You can even use redirection and pipes should you need them.
 
 You can try porting it to Scilab which is free courtesy of the French
 Government.
 
 
 Bruce
 
 Steve Rooke wrote:
 Bruce,

 I set nothing; as indicated in my text, I just delete data points, i.e.
 a file of 400000 records now becomes 200000. I'm trying to get my head
 round this as the absolute requirement for continuous data seems
 unneeded. What you have to remember here is that the data set I'm
 working with consists of discrete measurements of the period of each
 pulse. If they were timestamps, then there would be problems.

 I don't know how much MATLAB costs but I would guess it is way out of my 
 budget.

 73,
 Steve

 2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
   
 Steve

 The data file doesn't include the time interval between samples so do
 you set this in some way?
 If so you need to set it to 1s for the unaltered data, to 2s when you
 take every 2nd sample, and 1s when you take the first 200,000 samples.

 In principle you could use CANVAS (available on request from USNO -
 however you may have to wait a few days while they decide whether to
 grant your request.) for such analysis in Linux but you would then need
 the Linux version of Matlab.
 Or you could request that it be compiled for Linux - a fairly simple
 task if one has the Linux version of Matlab.

 In principle you should be able to port the m source files to Scilab,
 but there are some subtle differences between Scilab and Matlab so this
 may take a while.

 Bruce

 Steve Rooke wrote:
 
 Bruce,

 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

 73,
 Steve

 2009/4/8 Bruce Griffiths bruce.griffi...@xtra.co.nz:

   
 Steve

 If you delete every second measurement then your effective minimum
 sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
 You can still calculate ADEV for tau <= 100,000 sec.

 If you delete all but the first 200,000 lines then you can calculate
 ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.

 You shouldn't lose sight of the fact that ADEV and OADEV are both
 estimates of the Allan deviation.


 Bruce


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Bruce,

But the sampling interval is still 1 sec and you can see by my notes
that I explicitly give this on the command line for adev1.exe.

OK, I'll have a go at compiling it as it is just a command line program.

73,
Steve

2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 Steve

 So far, so good. Now I delete every even line in the file which leaves
 me with 200,000 lines of data (400,000 lines in original gps.dat file).
 (awk 'and(NR, 1) == 0 {print}' gps.dat > gps1.dat)

 C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 gps1.dat

 ** Sampling period: 1 s


 INCORRECT!!
 sampling period is now 2s.

 ** Phase 

Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
John,

OK, I see what you mean if I re-run with a 2-second period:

C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 2 gps1.dat

** Sampling period: 2 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 200000
** Normal and Overlapping Allan deviation:

      2 tau, 1.5129e-009 adev(n=199998),   1.5129e-009 oadev(n=199998)
      4 tau, 7.6863e-010 adev(n=99998),    7.6727e-010 oadev(n=199996)
     10 tau, 3.1574e-010 adev(n=39998),    3.1529e-010 oadev(n=199990)
     20 tau, 1.6570e-010 adev(n=19998),    1.6533e-010 oadev(n=199980)
     40 tau, 8.9359e-011 adev(n=9998),     8.9051e-011 oadev(n=199960)
    100 tau, 3.9714e-011 adev(n=3998),     4.0608e-011 oadev(n=199900)
    200 tau, 2.1176e-011 adev(n=1998),     2.1633e-011 oadev(n=199800)
    400 tau, 1.1001e-011 adev(n=998),      1.1296e-011 oadev(n=199600)
   1000 tau, 4.8426e-012 adev(n=398),      4.7721e-012 oadev(n=199000)
   2000 tau, 2.5069e-012 adev(n=198),      2.5194e-012 oadev(n=198000)
   4000 tau, 1.3997e-012 adev(n=98),       1.3545e-012 oadev(n=196000)
  10000 tau, 7.1400e-013 adev(n=38),       6.1070e-013 oadev(n=190000)
  20000 tau, 3.7441e-013 adev(n=18),       3.2907e-013 oadev(n=180000)
  40000 tau, 3.8259e-013 adev(n=8),        1.8627e-013 oadev(n=160000)
 100000 tau, 1.2349e-014 adev(n=2),        6.7697e-014 oadev(n=100000)

Although it diverges a bit at the end.

73,
Steve
2009/4/9 John Ackermann N8UR j...@febo.com:
 I've compiled adev1 under Linux with no changes required; don't recall
 the exact gcc line I used but it was pretty much the obvious one.

 Steve, one other point -- your results with every sample versus
 every-other-sample aren't hugely different because ADEV doesn't usually
 change dramatically over very short differences in tau (unless there's
 some sort of periodicity in the noise).  So, it would not be unusual to
 see that the result for tau=2 seconds (what you got when you removed
 every other sample) will be only slightly different than for tau=1 second.

 John
 

Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Bruce Griffiths
Steve Rooke wrote:
 Bruce,

 But the sampling interval is still 1 sec and you can see by my notes
 that I explicitly give this on the command line for adev1.exe.

   
The sampling interval is indeed 1 sec in the original data.
However, if you delete every second sample the sampling interval in the
resultant data is then 2 sec.
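
A minimal sketch of that decimation step as a stdin/stdout filter, in
the same spirit as adev1.c (a hypothetical helper, not code from the
thread):

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        long n = 0;

        /* keep lines 1, 3, 5, ... - every second phase sample is
           dropped, so 1 s data becomes 2 s data */
        while (fgets(line, sizeof line, stdin) != NULL)
            if (n++ % 2 == 0)
                fputs(line, stdout);
        return 0;
    }

The decimated output would then be analysed with t0 = 2, e.g.
adev1 2 gps1.dat.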

 OK, I'll have a go at compiling it as it is just a command line program.

 73,
 Steve


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
Yes, it compiles cleanly with gcc -o adev1 -lm adev1.c and works
just like a bought one, thanks.

73,
Steve

2009/4/9 John Ackermann N8UR j...@febo.com:
 I've compiled adev1 under Linux with no changes required; don't recall
 the exact gcc line I used but it was pretty much the obvious one.


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Steve Rooke
2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
 The sampling interval is indeed 1 sec in the original data.
 However, if you delete every second sample the sampling interval in the
 resultant data is then 2 sec.

Doesn't that imply that the data point should correspond to the whole
sampling period and not just half of it?

73,
Steve


Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Bruce Griffiths
Steve Rooke wrote:
 2009/4/9 Bruce Griffiths bruce.griffi...@xtra.co.nz:
   
 The sampling interval is indeed 1 sec in the original data.
 However, if you delete every second sample the sampling interval in the
 resultant data is then 2 sec.
 

 Doesn't that imply that the data point should correspond to the whole
 sampling period and not just half of it?

   
The total measurement time is only decreased by 1 sec at the most if you
delete every second line.
The resampled data now has a sampling interval of 2 sec for the entire
measurement time.

The original data samples are phase differences measured on the second,
every second.
The resampled data are phase differences measured every 2 seconds on the
corresponding second transition.
 73,
 Steve

   

Re: [time-nuts] Characterising frequency standards

2009-04-08 Thread Tom Van Baak
 But how does that explain the output of Tom's adev1 program which
 still seems to give a good measurement at tau = 1s?

The first argument to the adev1 program is the sampling interval t0.
The program doesn't know how far apart the input file samples are
taken so it is your job to specify this. The default is 1 second.

If you have data taken one second apart then t0 = 1.
If you have data taken two seconds apart then t0 = 2.
If you have data taken 60 seconds apart then t0 = 60, etc.

If, as in your case, you take raw one second data and remove
every other sample (a perfectly valid thing to do), then t0 = 2.

Make sense now? It's still continuous data in the sense that all
measurements are a fixed interval apart. But in any ADEV
calculation you have to specify the raw data interval.
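
As an illustration of where t0 enters, here is a sketch of the textbook
(non-overlapping) Allan deviation estimate from phase data x[0..N-1]
spaced t0 seconds apart - the standard formula, not necessarily
adev1.c's exact code:

    #include <math.h>

    /* Allan deviation at tau = m * t0 from N phase samples x[], using
     * the non-overlapping estimator
     *   sigma_y^2(tau) = sum((x[i+2m] - 2*x[i+m] + x[i])^2) / (2*n*tau^2)
     * where i strides through the data m points at a time. */
    double adev(const double *x, long N, double t0, long m)
    {
        double tau = m * t0, sum = 0.0;
        long i, n = 0;

        for (i = 0; i + 2 * m < N; i += m) {          /* non-overlapping */
            double d = x[i + 2*m] - 2.0*x[i + m] + x[i]; /* 2nd difference */
            sum += d * d;
            n++;
        }
        return n > 0 ? sqrt(sum / (2.0 * n * tau * tau)) : 0.0;
    }

Changing the stride from i += m to i += 1 (and letting n count the extra
terms) gives the overlapping estimate reported in the oadev column.
Either way tau = m * t0, so tau only comes out right if t0 matches the
real spacing of the samples.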

/tvb




Re: [time-nuts] Characterising frequency standards

2009-04-07 Thread Bruce Griffiths
Steve Rooke wrote:
 A while back when we were discussing the performance of the Shortt
 free pendulum clock a reference was made to TvB's paper on Allan
 deviation, http://www.leapsecond.com/hsn2006/ch2.pdf, which I found to
 be an excellent primer on the subject. It was interesting to see that
 with only a subset of the data, the Allan deviations up to about the
 total of the data collection period could be calculated with
 reasonable accuracy. This had me thinking that if just a proportion of
 the data covering up to a specific averaging time gave good results,
 would disconnected data amounting to the same period give the same
 results. To me it seems that accuracy of the results is not related to
 the need to capture every event consecutively, it is more a case of
 collecting the same size data set even though the samples were not
 consecutive. My reasoning behind this is that any set of data for a
 DUT should give the same results even though the data sets are not
 related time-wise. OK, there are effects caused by different
 environmental conditions and drift but these can be calculated out.
 The only thing that would shoot a big hole in this is if there was a
 repeatable difference between alternate cycles.

 So why am I saying this, well from what I have read on this group and
 on the web, I have been left with a feeling that it was vital to
 capture every event over a sampling period to ensure an accurate
 measurement. This requires equipment capable of time-stamping each
 event or employing such techniques as picket-fence. This is due to the
 limitations of most counters being unable to reset in time to measure
 the next time period of an input. At this stage I cannot see why it is
 not possible to just measure a cycle, let the counter/timer reset and
 then let it measure the next full cycle that follows. Agreed this
 would mean that alternate cycles were lost (assuming the counter/timer
 can reset within the space of one cycle) but the measurement could
 still collect the same amount of data points, it would just take twice
 as long. In fact, it could be possible to make the counter/timer
 measure alternate cycles on the opposite transitions, thereby reducing
 the total measurement time to just one and a half times the 'normal'
 time. With respect to any problem related to alternate cycles, the
 measurement system could be made to collect two data sets with single
 cycle skipped between each set.

 The difference will be that the data set would consist of measurements
 of each individual non-sequential cycle as opposed to a history of the
 start times of each cycle.

 So the short story is, does the data stream really have to consist of
 sequential samples or is it just a statistical thing so for the same
 size of data set, the results should be similar.

 73,
 Steve
   

Steve

It is essential to measure the phase differences between every Nth zero
crossing without missing any such cycles.
You don't have to time stamp every zero crossing - every Nth one will
suffice - but one then has no information for time intervals shorter than
N periods.
More accurate estimation of the Allan deviation is possible if the time
interval between time stamps is shorter.

The reason that you can't omit one of the time stamps in the sequence
(if you wish to accurately characterise the frequency stability of the
source under test) is that the process isn't stationary.
Estimates of classical measures such as the mean and standard deviation
from the samples diverge as the number of samples increases.
Whilst attempts have been made to estimate the error due to deadtime,
the corrections require that the phase noise characteristics of the 2
(or more) sources being compared are accurately known.

Avoiding deadtime problems is fairly easy if you use an instrument that
can timestamp events on the fly.
It is almost trivial to build such an instrument within a single FPGA or
CPLD.
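
As a hedged software model of that idea (an illustration only, not an
FPGA design): a free-running counter is latched on every input edge, so
no edge is lost to reset/re-arm dead time, and successive differences
give the periods directly.

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical captures of a 10 MHz reference count on each
           input edge of a nominal 1 Hz signal */
        unsigned long ts[5] = {0, 10000007, 20000003, 29999998, 40000005};
        int i;

        for (i = 1; i < 5; i++)    /* periods in reference-clock ticks */
            printf("period %d = %lu ticks\n", i, ts[i] - ts[i - 1]);
        return 0;
    }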

Bruce



Re: [time-nuts] Characterising frequency standards

2009-04-07 Thread Poul-Henning Kamp
In message 49db496e.6030...@xtra.co.nz, Bruce Griffiths writes:
Steve Rooke wrote:

It is essential to measure the phase differences between every Nth zero
crossing without missing any such cycles.

And he does, except it is only every 2N instead of 1N.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
