On 9/16/12 10:20 AM, Magnus Danielson wrote:
On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

Dave Mills coined the term "Allan intercept" for the crossover of the
two sources' Allan variances, and it's a good Google search term for
finding his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound and precise, but the concept itself is certainly
valid, even if you have to compensate for the time constant of the
PLL you use to regulate to that point.
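
For what it's worth, here's a rough sketch of what finding that intercept
from measured phase data might look like. This is not Dave Mills' algorithm
or the NTPns code; the estimator is the ordinary overlapping ADEV, and the
noise levels and function names are made up for illustration:

import numpy as np

def overlapping_adev(phase, tau0, m):
    # Overlapping Allan deviation from phase samples spaced tau0 seconds apart.
    x = np.asarray(phase, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d * d) / (2.0 * (m * tau0) ** 2))

def allan_intercept(local_phase, ref_phase, tau0):
    # First averaging time at which the local oscillator's ADEV rises above
    # the reference's ADEV -- roughly where you want the loop time constant.
    n = min(len(local_phase), len(ref_phase))
    for m in np.unique(np.logspace(0, np.log10(n // 4), 40).astype(int)):
        if overlapping_adev(local_phase, tau0, m) > overlapping_adev(ref_phase, tau0, m):
            return m * tau0
    return None

# Toy data: the local clock wanders (random-walk FM), the reference path is
# noisy but averages down (white PM). Levels are invented for illustration.
rng = np.random.default_rng(1)
local = np.cumsum(np.cumsum(rng.normal(0.0, 1e-12, 200_000)))
ref = rng.normal(0.0, 1e-9, 200_000)
print("Allan intercept near tau =", allan_intercept(local, ref, tau0=1.0), "s")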

Well, what is being used is the phase-noise intercept. Conceptually, a
similar intercept point exists in the Allan variance. However, as
you shift between noise types, the Allan (and Modified Allan)
variance has different scaling factors relative to the underlying phase-noise
amplitudes. The danger of using the Allan variance variant is that you
get a bias in the intercept position compared to the crossover of the
phase-noise plots. However, the concept is essentially the same, and the
relative slopes are the same. You get in the right neighbourhood, though.
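
To put numbers on that bias: writing the fractional-frequency noise as
S_y(f) = h_alpha f^alpha, the usual conversions (quoted from memory of the
standard tables, e.g. IEEE 1139 / Barnes et al., so double-check the
constants; f_h is the measurement bandwidth cutoff) are roughly:

  \sigma_y^2(\tau) \approx \frac{3 f_h}{4\pi^2 \tau^2}\, h_{+2}                        % white PM
  \sigma_y^2(\tau) \approx \frac{1.038 + 3\ln(2\pi f_h \tau)}{4\pi^2 \tau^2}\, h_{+1}  % flicker PM
  \sigma_y^2(\tau) = \frac{1}{2\tau}\, h_{0}                                           % white FM
  \sigma_y^2(\tau) = 2\ln 2\; h_{-1}                                                   % flicker FM
  \sigma_y^2(\tau) = \frac{2\pi^2}{3}\,\tau\, h_{-2}                                   % random-walk FM

Note in particular that white PM and flicker PM both come out close to
1/tau^2 in Allan variance, which is exactly why the Allan-domain crossover
sits in a slightly different place than the phase-noise crossover (the
Modified Allan variance separates them again).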

The concept has been in use in the phase-noise world, so you
would need to search the phase-noise articles to find the real source.
It's been used to generate stable high-frequency signals.

The analysis of PLL-based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis, even if the general concept is
roughly understood. The equivalent for phase noise is, however, well
understood and leaves no magic to it.

I'm not sure the theory of phase-noise intercepts is actually used in practical systems. It seems that everyone I've talked to uses the theory to "get in the ballpark", then does simulations at the design review, and ultimately builds it, tests it, and tweaks the implementation to optimize (especially if the loop closure is implemented digitally in software/FPGA).

When talking real high performance, there are so many confounding error factors that it's not like you can build what theory says and hit the mark. The *actual* noise distributions follow the Leeson model in general, but they have lumps and bumps, and there are always narrow-band oddities (power-supply filtering, noise from switching power converters, etc.).
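
For reference, the usual textbook form of the Leeson model (written from memory, so treat the exact factors of two with suspicion) is:

  S_\phi(f) \approx \frac{2 F k T}{P_s}\left(1 + \frac{f_c}{f}\right)\left(1 + \left(\frac{f_0}{2 Q_L f}\right)^2\right)

with F the sustaining-amplifier noise figure, P_s the carrier power at its input, f_c the flicker corner, f_0 the carrier frequency and Q_L the loaded resonator Q. That smooth curve is what all the lumps, bumps and spurs get measured against.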

Let's face it, real high-performance source design has a lot of art and craft in it. You can't get to that point without sound engineering, but that last order of magnitude is all about suck it and see.


I spent a lot of time with the code in NTPns trying to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the only data you have available for autotuning
is the Allan variance between your input and your steered source.
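
Just to make the shape of the problem concrete, a caricature of such an
autotuning step might look like the sketch below. This is emphatically not
the NTPns code, the names are invented, and it glosses over the real
difficulty: the closed loop itself colours the residual you are measuring,
which is the time-constant compensation problem mentioned above.

import numpy as np

def overlapping_adev(x, tau0, m):
    # Same overlapping ADEV estimator as in the earlier sketch.
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d * d) / (2.0 * (m * tau0) ** 2))

def retune_time_constant(residual_phase, tau0, tc, step=1.3):
    # residual_phase is (reference - steered clock).  Estimate its ADEV on
    # either side of the current loop time constant tc and nudge tc toward
    # the quieter side, a little at a time.
    m = max(1, int(tc / tau0))
    m_lo = max(1, m // 2)
    m_hi = min(len(residual_phase) // 3, m * 2)
    adev_lo = overlapping_adev(residual_phase, tau0, m_lo)
    adev_hi = overlapping_adev(residual_phase, tau0, m_hi)
    return tc * step if adev_hi < adev_lo else tc / step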


It's a complex field, and things like temperature dependencies help to
confuse you.

Ain't that the truth..

And then, there's proving that what you built is actually doing what you claim. State-of-the-art sources require beyond-state-of-the-art verification methods...

It's easy to write a spec for, say, an incremental Allan deviation of 1E-16 at some tau. It's a bit harder to test at a constant frequency. Now throw in a varying frequency (say, because of temperature variation or Doppler)...
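
One common (if debatable) dodge is to fit and remove the deterministic part
from the phase record before computing the ADEV; a quadratic fit in phase
takes out a linear frequency drift, but anything with real structure
(diurnal temperature, Doppler) needs its own model. A minimal sketch, with
made-up names:

import numpy as np

def remove_quadratic(phase, tau0):
    # Least-squares fit of offset + frequency + linear drift (quadratic in
    # phase) and subtract it, so the ADEV reflects the noise rather than the
    # deterministic frequency change.
    t = np.arange(len(phase)) * tau0
    return phase - np.polyval(np.polyfit(t, phase, 2), t)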


