On 2/5/11 8:11 AM, Magnus Danielson wrote:
On 05/02/11 16:25, jimlux wrote:
Here's an interesting problem..

I've got a system at work with an internal clock oscillator that I want
to get some statistics on, but there's no direct visibility for the
oscillator, nor do I have a convenient test point that I can probe.

I can divide it down by an arbitrary number to generate pulses which I
then send out via SpaceWire timecodes. SpaceWire is a fast point to
point digital data link and it has a special capability that essentially
has a "tick in" signal at one end and a "tick out" signal at the other
end. The latency between tick in and tick out is random, but bounded and
discrete. the link runs at a clock rate derived from the same
oscillator, and you have to wait until the current character being sent
has been clocked out before you can send the special "timecode" token.

That is, I can detect the "tick out" pulse, and it has a random N*[0-14]
clock delay (distributed more towards 0 than 14) from the "tick in"
(which is synchronous with the clock I want to measure). N is the ratio
between my clock and the data rate on the wire (7 in this case, so the
time step is about 100 ns).

So, by making measurements of the time when the "tick out" appears (or
of the time between tick_out pulses), can I somehow "take out" the
random variability of the link?

It seems, since the clock isn't *terrible*, that I could, for instance,
accumulate statistics and throw out the measurements that have more than
0 clocks of latency (which is probably a few tens of percent of the
ticks... I haven't looked yet). Or, given that the interval between
ticks takes one of 28 or 29 discrete values (plus the underlying clock
variance), if the clock variance on a given pulse is << one clock
period, the histogram/probability distribution of times would look like
a bunch of little humps, each with variance equal to the clock variance.
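The "little humps" idea can be sketched in a quick simulation. This is only an illustration, not the actual instrument setup: the 66 MHz clock, N=7 quantum, delay weighting, and 2 ns white jitter are all assumed numbers. Each tick-out residual is rounded to the nearest delay quantum to find which hump it fell in, and only the zero-delay samples are kept.

```python
import numpy as np

rng = np.random.default_rng(0)

F_CLK = 66e6          # assumed 66 MHz oscillator
STEP = 7 / F_CLK      # link delay quantum: 7 clocks, about 106 ns
T = 1.0               # assumed 1 PPS tick period

# Simulate tick-out timestamps: nominal period plus a discrete link
# delay of STEP*[0..14] (weighted toward 0, per the thread) plus an
# assumed 2 ns white phase noise.
n = 10_000
delays = rng.choice(15, size=n, p=np.r_[0.5, np.full(14, 0.5 / 14)])
jitter = rng.normal(0, 2e-9, size=n)
t = np.arange(n) * T + delays * STEP + jitter

# Residual against the nominal ramp; crude baseline removal, then round
# to the nearest quantum to identify which "hump" each sample sits in.
resid = t - np.arange(n) * T
resid -= resid.min()
k = np.round(resid / STEP).astype(int)

clean = t[k == 0]     # keep only ticks that saw zero link delay
print(len(clean), "of", n, "ticks had zero link delay")
```

Because the assumed jitter (a few ns) is far smaller than the ~106 ns quantum, the rounding recovers every delay exactly; the scheme only breaks down when the clock variance approaches half a quantum.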

What you can do is generate your tick clock with any division greater
than 7*14 (if I understood the timing correctly). Say you divide your
clock by 200 (about a 3 us period, if I got it right). Then about half
the period between your ticks could carry the random delay and the rest
would be silent.

yes, I'll probably use a divisor of 66 million to get 1pps ticks.


You then make TI measurements against some suitable clock. From that you
should be able to rebuild your baseline time difference and detect the
random delay of 7x[0-14]; once you've done that, re-stamp the samples
with the corrected times, and you have a new time-series in which every
sample appears to have experienced a delay of 0. From there, just do the
ADEV or whatever you want to do according to standard processing.
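The re-stamping step above can be sketched as follows. Again a simulation with assumed numbers (66 MHz clock, 7-clock quantum, 2 ns white PM), and a deliberately simple overlapping-ADEV routine rather than a full stability-analysis package: subtract the detected integer number of delay quanta from each phase sample, then run standard processing on the corrected series.

```python
import numpy as np

rng = np.random.default_rng(1)
F_CLK, STEP, TAU0 = 66e6, 7 / 66e6, 1.0   # assumed values from the thread
n = 4000

# Simulated phase data: true oscillator phase (assumed 2 ns white PM)
# plus the discrete SpaceWire link delay of STEP*[0..14].
delays = rng.choice(15, size=n, p=np.r_[0.5, np.full(14, 0.5 / 14)])
x_true = rng.normal(0, 2e-9, size=n)
x_meas = x_true + delays * STEP

def adev(x, tau0, m=1):
    """Overlapping Allan deviation at tau = m*tau0 from phase data x (s)."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d**2)) / (np.sqrt(2) * m * tau0)

# Detect the delay quantum count for each sample and re-stamp.
k = np.round((x_meas - x_meas.min()) / STEP).astype(int)
x_fix = x_meas - k * STEP

print(f"ADEV raw:   {adev(x_meas, TAU0):.2e}")
print(f"ADEV fixed: {adev(x_fix, TAU0):.2e}")
```

In the simulation the raw ADEV is dominated by the ~106 ns random buffer delay, while the re-stamped series recovers the underlying white-PM floor, which is the point of Magnus's suggestion.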


That's sort of what I was thinking... it all depends on whether the
variance is small enough to determine N.


Or is a real oscillator going to have an instantaneous variance
comparable to, or greater than, one clock period?

No, probably not, unless you have a seriously bad design, which you
would probably know by now anyway. For short time-spans, deterministic
noise (such as your random buffer delay) and white noise will dominate.

To the lab.. (well, on Monday..)

_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
