Tom,

On 02/28/2015 06:10 PM, Tom Van Baak wrote:
A good paper to read about the trouble when DUT is close (or "equal" to REF) is:
http://literature.cdn.keysight.com/litweb/pdf/5990-9189EN.pdf
"Isolating Frequency Measurement Error and Sourcing Frequency Error near the 
Reference Frequency Harmonics"

Nice reading and a good illustration of the problem. It does not, however,
go into explaining where these errors come from.

I especially like that he worked on an offset frequency to handle source
issues and that he elaborates with both time-base time and frequency
offset, as well as average and peak-to-peak values.

Have you seen any papers going into depth about that?

The Robert Leiby (5990-9189EN) paper was a real find. Agilent sent it to me 
after I ran tests of their new 53230A counter. I had two of them on loan (TCXO 
and OCXO) and the closer I looked the less I was impressed. The one feature 
that was a show-stopper for me was that the TCXO version would not outperform 
the OCXO version even if you gave it a BVA or maser as external reference to 
the counter.

That means in order to get decent performance out of the 53230A you must buy 
the overpriced OCXO version of the counter. I wonder if anyone else has run 
into this? Maybe my eval units were out of spec. Or, I wonder if anyone has 
opened their 53230A and hacked the timebase PLL to overcome this problem?

Anyway, that led to me checking out a pile of 53132A counters to see how well 
they performed.

Yes, it makes good sense to follow up with those.

The paper is indeed a good find.

In these tests I like to use slightly drifting, ultrastable, independent inputs instead 
of the old "BNC tee" trick where CH1=CH2 or CH1=CH2=REF. What you want to 
see is not only the RMS noise in a measurement, but also how consistent the TI 
measurements are across the entire fundamental period of the inputs or timebase.

Indeed. The time error over the time-base period is relevant to measure, but you also have cross-talk from the time-base into the channels as well as cross-talk between channels. For the frequency case you might not expect cross-talk, but there will be some if the frequency measurement uses both the start and stop channels.

For my test I used two ultrastable sources with 1e-12 or 1e-11 frequency 
offset. At 1e-11 you can scan an entire 100 ns period in 10,000 seconds (under 
3 hours). I'd have to look at my notes to see what I did with the REF input. I 
think I tried REF=CH1 and REF=CH2 and REF=3rd independent source. But the main 
goal was to see the interaction between CH1 and CH2 because that's the mode 
used by any TI measurement.
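As a quick arithmetic check of the scan times above (assuming a 10 MHz carrier, consistent with the 100 ns period mentioned), the time to sweep the whole period is just the carrier period divided by the fractional frequency offset:

```python
# Time to slip one full carrier period between two sources with a
# small fractional frequency offset (10 MHz carrier assumed).
f0 = 10e6               # carrier frequency, Hz
period = 1.0 / f0       # one full period: 100 ns
for dfof in (1e-11, 1e-12):       # fractional frequency offsets
    scan_time = period / dfof     # seconds to scan the whole period
    print(f"offset {dfof:g}: {scan_time:.0f} s ({scan_time/3600:.1f} h)")
```

At 1e-11 this gives the 10,000 s (under 3 hours) quoted above; at 1e-12 it takes roughly ten times as long.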

Indeed. It can be hard to separate the non-linearity of a channel (over the time-base period) from the cross-talk from reference to channel. The non-linearity usually has a periodicity over the time-base period due to the use of a coarse-counter frequency (such as 90, 100, 200 or 500 MHz), whereas some counters (HP5335A, HP5334A/B) use the time-base directly as the coarse counter.

Enrico has been looking at post-processing filtering and its effects.

I just haven't seen any paper saying much about how cross-talk and such
affect non-linearity and post-processing such as frequency readings and
ADEV. I have my own model from experience and various sources, but I have
not seen anything comprehensive.

Cheers,
Magnus

Right, there's no paper that I could find yet. Instead I was planning on taking several 
models of popular high-end TI counters (SR620, HP5370, CNT-91, 53132A, 53230A) and run 
them all through the same offset/linearity test to investigate the "fine 
structure" of their measurement errors, as was done in the Leiby paper. If you have 
some measurement setup suggestions beyond what I already did with the 53132 counters, let 
me know.

Leiby's paper focuses on frequency measurement, and in particular on the non-linearities of the hardware in relation to frequency measurements. The underlying model is time errors: considering that the frequency estimate is f = events/(t_stop - t_start), the non-linearities in the time estimates t_stop and t_start will be subtracted from each other. As the number of complete cycles of error increases with tau, the remaining time error within such a period is divided by tau. Consider the period formula
t_period = (t_stop - t_start)/events
Taking events = tau*f_1, t_start = 0 + te_start and t_stop = tau + te_stop (an approximation for understanding), we get
t_period = 1/f_1 + (te_stop - te_start)/(tau*f_1)

No wonder that the period (and thus frequency) measurement errors vary with tau and with the measured frequency.
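A short numeric sketch of the model above, with assumed values for f_1 and the start/stop time errors, shows the residual period error shrinking as 1/tau:

```python
# Sketch of the period-estimate model above; f_1 and the
# start/stop time errors te_start/te_stop are assumed values.
f_1 = 10e6                         # input frequency, Hz
te_start, te_stop = 2e-10, -1e-10  # hypothetical interpolator errors, s

for tau in (1e-3, 1.0, 1e3):       # gate times, s
    events = tau * f_1
    t_start = 0.0 + te_start
    t_stop = tau + te_stop
    t_period = (t_stop - t_start) / events
    err = t_period - 1.0 / f_1     # residual period error
    print(f"tau = {tau:g} s: period error {err:.3e} s")
```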

Rather than measuring with free-running oscillators, I have been considering using frequency synthesizers, as in Leiby's article, together with programmable delays. That way I can stay at a particular delay, bang the same point repeatedly and collect statistics. The histogram will give me information about the offset and coupling properties.
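A minimal sketch of that fixed-delay statistics idea, with the counter readings simulated as Gaussian jitter around an assumed channel offset (a real setup would read the counter instead; all values here are hypothetical):

```python
import random
import statistics

random.seed(42)            # reproducible simulation

true_delay = 10e-9         # programmed delay, s (assumed)
offset = 25e-12            # hypothetical fixed channel offset, s
jitter_rms = 15e-12        # hypothetical single-shot jitter, s

# Simulated TI readings at one fixed delay point.
readings = [true_delay + offset + random.gauss(0.0, jitter_rms)
            for _ in range(10000)]

mean = statistics.mean(readings)
sdev = statistics.stdev(readings)
print(f"mean error:      {(mean - true_delay) * 1e12:.1f} ps")  # ~ offset
print(f"single-shot rms: {sdev * 1e12:.1f} ps")                 # ~ jitter_rms
```

Binning the readings (e.g. with collections.Counter over rounded values) would give the histogram itself; a clean Gaussian suggests plain jitter, while side lobes or skew would hint at coupling.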

I think this could make for an interesting paper.

Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
