[time-nuts] Re: What about the frequency discrimination method? (offshoot from DIY PN analyzer)

2022-07-10 Thread Magnus Danielson via time-nuts

Hi Ed,

On 7/10/22 20:26, ed breya via time-nuts wrote:

Hi Magnus,

I know what you mean about not needing a quadrature splitter - if you 
have a very wide phase or delay tuning range - but I'm picturing 
getting most of the way to quadrature with a fixed structure for a 
given frequency, and only fine-tuning the phase over a narrow range, 
in order to minimize the PLL's overall noise contribution. This should 
also keep it monotonic - too wide a range may let it get stuck on the 
humps.


Well, the traditional method is to lock it up. When you do that,
quadrature (PM) comes for free, but in-phase (AM) does not. The lock
requires either oscillator to be frequency steerable to achieve lock. If
that is not what you want to do, then you can use a non-synchronous
method, and then a quadrature splitter is the way to go. This is
essentially what the modern oversampling measurement devices do anyway.
They just sample the waveforms, convert the IQ into amplitude and phase,
and then decimate down from that. Comparison with the reference is then done
in that context after the fact. Frequency errors can be handled within
some fairly flexible range.


Similarly, just mixing unlocked sources with an I/Q setup produces a
beat-note, and if you capture that as I/Q you can do more or less the same thing.




For amplitude calibration, I'm picturing rearranging the splitter 
ports or guts somehow (as simply as possible) to present the DUT 
signal to the mixer at 0 or 180 degrees, which should give a maximum 
DC out.


For amplitude measurement, you want the in-phase component rather than
the quadrature component, if they are locked that is. Using a normal
lock, the direct lock phase-detector actually gives you the quadrature,
so you need the splitter to get the in-phase too. For non-synchronous
receivers, you need to track the beat-note to sort out what is the
in-phase and quadrature for AM and PM respectively. An abs/arctan
function (such as a CORDIC) will convert the I/Q samples nicely, and the
frequency error will just be a phase-ramp on the detected phase.
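
A minimal sketch of that step (not tied to any particular instrument): plain
atan2 can stand in for the CORDIC, and the residual frequency error then shows
up as a linear ramp in the unwrapped phase, which a linear fit removes:

    import numpy as np

    def iq_to_am_pm(i, q):
        """Convert I/Q samples to amplitude (AM) and unwrapped phase (PM)."""
        z = i + 1j * q
        return np.abs(z), np.unwrap(np.angle(z))   # angle() is atan2 under the hood

    # Example with an assumed 1 Hz residual frequency error at 10 kS/s:
    fs, f_err = 10e3, 1.0
    t = np.arange(10000) / fs
    i, q = np.cos(2 * np.pi * f_err * t), np.sin(2 * np.pi * f_err * t)
    amp, ph = iq_to_am_pm(i, q)
    # ph is a straight ramp of slope 2*pi*f_err; removing a linear fit
    # leaves the phase fluctuations used for the PM analysis.
    ph_resid = ph - np.polyval(np.polyfit(t, ph, 1), t)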




BTW I had never heard of the Tayloe detector, but it appears to be the 
method (4x f multiplier then digital quadrature divide) used in 
lock-in analyzers, and I have used the same in a number of projects.

Yes. It is fairly widely used these days. Seems to have good enough
properties for many things.



Azelio wrote:
"How can you measure something, any type of measure, not only PN,
without a reference? Voltmeters need voltage references, "timemeters"
(and frequency meters) need time references."

Azelio, this is a well known technique - I haven't described anything 
new, just a particular implementation I've been pondering.


Indeed. There are many ways to measure without an actual reference. There
might be other oscillators, but they are not a "reference" in the way a
voltage reference is, for instance.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-10 Thread Magnus Danielson via time-nuts

Hi Erik,

On 7/10/22 17:52, Erik Kaashoek via time-nuts wrote:
I've updated the schematic to include the latest additions and added 
some new measurements


Schematic: http://athome.kaashoek.com/time-nuts/PNA/Simple_PNA.pdf

The resistor values (many 18k) are a bit weird but I happen to have a 
big box of 18k resistors.
The values of the low pass filter after the mixer (C2,C3,L1) are
probably wrong. Calculate yourself for the corner frequency you want.
I get 22.5 kHz, which isn't completely off the charts. It sure helps to eat
the 20 MHz and higher, as well as stray 10 MHz. For the 20 MHz it would
ideally have about -180 dB of attenuation; in practice it will leak through,
but probably not too badly.
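
A quick sanity check of those numbers, as a rough sketch treating the C-L-C
section as a generic third-order low-pass with the 22.5 kHz corner estimated
above (the actual component values are only in the linked schematic and are
not assumed here):

    import math

    f_corner = 22.5e3        # corner frequency estimated above
    order = 3                # C-L-C pi section: third-order low-pass
    for f in (10e6, 20e6):   # stray 10 MHz and the 20 MHz sum frequency
        atten_db = 20 * order * math.log10(f / f_corner)
        print(f"{f/1e6:.0f} MHz: ~{atten_db:.0f} dB ideal stop-band attenuation")
    # 20 MHz comes out around 177 dB, i.e. the "-180 dB" ballpark; in practice
    # stray coupling across the board limits this long before that.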
The electrolytic caps in the PI_controller and the input of the Audio_LNA are
probably going to explode due to reverse polarity.


You want the resistor and capacitor to be in series and not in parallel 
in that negative feedback.


If you put a resistor in parallel you will drain the state of the
capacitor and lose performance.


You can choose to either locate a 1 uF non-polar cap, or shift the
values a bit to get into plastic caps such as polypropylene. 100 nF and
220 nF should be easy enough to get hold of. You could even put a pair
of 470 nF in parallel.


A generic note: most if not all op-amps tend to operate better in terms
of offset behavior when they see about the same DC resistance on both the +
and - inputs.


The output of the REF_Buffer acts as the virtual ground so care was 
taken (almost) not to draw any current, except for the input of the 
Audio_LNA.

The supply of the opamps is not drawn but it's from Ground and Vcc (+12V).
I've tested a symmetric supply but the combination of the REF output
voltage from the DOCXO and the REF_Buffer provided the least noise.
The audio_LNA has a gain of 1 for DC, increasing to 100 for 1Hz
and above.
The R/C values around the PI_Controller have not been optimized but 
they work.
As the Summer OPAMP inverts to 5-10V, the Inverter OPAMP brings it back
to 0-5V for the Vtune of the DOCXO.
You could do away with the Summer and Inverter op-amps if you fed the 
TUNING into the + input of the inverter. By skewing the PI-controller 
balance the output will be suitably offset. The benefit will be that you 
avoid noise contribution from two op-amps and their resistors.
The LED's provide visual feedback on the tuning. If both are just barely on,
the PLL is in lock. It may be better to have two LED's in series at
each side to increase the dimming.


I would advise moving those LEDs off-board. Let them run on separate
"dirty" power. I love the direct observation aspect, but I fear it just
adds noise to the measurement.


Keep up the good work!

Cheers,
Magnus



Some measurements:
All indicated levels are 40dBc/Hz higher compared to actual.
The noise floor: 
http://athome.kaashoek.com/time-nuts/PNA/PN_baseline_3.JPG

This is measured without DUT input.

Rigol signal generator generating 10MHz phase modulated with 60 degrees of
noise at -80dBc/Hz: http://athome.kaashoek.com/time-nuts/PNA/


Rigol signal generator generating 10MHz phase modulated with 0.006 
degrees at 220Hz : 
http://athome.kaashoek.com/time-nuts/PNA/PN_Rigol_3_0.006.JPG
The 220Hz is under the cursor at -27dBc, at 0.006 degrees modulation 
it should be at -88dBc, so there must still be a big mistake somewhere.


AR60 Rubidium reference: 
http://athome.kaashoek.com/time-nuts/PNA/PN_Rb_3.JPG

All seems OK, a bit of 50Hz and harmonics.

OCXO : http://athome.kaashoek.com/time-nuts/PNA/PN_OCXO_3.JPG
very weird spurs between 40 and 50 Hz

The famous cheap Chinese TCXO:
http://athome.kaashoek.com/time-nuts/PNA/PN_TCXO_3.JPG
Not too bad for offsets of 100Hz and higher but at 10Hz and lower it's
20dB worse.


A home designed/built Arduino GPSDO:
http://athome.kaashoek.com/time-nuts/PNA/PN_GPSDO_3.JPG

The GPSDO has a good ADEV but is clearly very noisy!

I also measured a Marconi 2022 signal generator and it was possible to
lock, but the phase noise was terrible, with strong fractional-N PLL spurs.
I also tried to measure the phase noise of an old Philips analog 10Hz
to 12MHz signal generator but it was impossible to get a lock because
the generator output jumps around by several Hz at its 10MHz output.


The noise floor of the simple PNA leaves a lot to be improved (from
-140dBc/Hz at 10kHz to -180dBc/Hz with a better OCXO, LNA and
correlation) but it proved able to do a first assessment of some
not-so-good oscillators.


Feedback welcome as these are my first baby steps on phase noise nuttery.
Erik.




___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: What about the frequency discrimination method? (offshoot from DIY PN analyzer)

2022-07-10 Thread Magnus Danielson via time-nuts

Ed,

On 7/9/22 22:26, ed breya via time-nuts wrote:
I've been following the thread about Erik's DIY PN analyzer, and 
wondering if it might be easy enough to use a frequency discrimination 
method. I'm opening this in a different thread to avoid muddying the 
water on the original (and long) one.


What I'm picturing is putting the DUT's output into a quadrature power 
splitter that optionally has a voltage-tuned slight phase shift 
feature. The I and Q outputs would go into the DBM and produce the 
nearly-zero DC plus baseband signal for analysis as in the original 
story.


If the quadrature is precise and stable enough, the DC out should be 
close to zero, and since the baseband is ultimately AC coupled to the 
analyzer, small offset should be OK, within reason.


If this is not sufficient, then having a phase tuning feature could be 
used to form a PLL to hold the DC at zero. The big difference here is 
that instead of locking a separate reference source to the DUT, the 
relative phase at the mixer just has to be fine tuned to maintain the 
output DC. The same sorts of PLL requirements are encountered to get 
the results, but no external reference (and its noise and lock range 
etc issues) is needed.


The downside is that a different quadrature splitter would probably be 
needed for each DUT frequency to be applied - I'm picturing ones for 5 
and 10 MHz initially. Those 90 degree broadband splitters that Mike 
mentioned seem very interesting too.


There is still the necessity of calibration, either way.


There is no need for it. Using the PI loop, it will drive the phase
detector into quadrature, and as it does so, the DC component of the
detector is cancelled, since it is integrated into the integrator path I.


There is however use for a quadrature splitter when you do a Costas loop,
which is needed for some modulation schemes. Then again, you really do
not need a quadrature splitter to achieve the needed quadrature
pair; there are other tricks to achieve the same thing. The Tayloe
detector comes to mind, which uses a frequency 4 times higher, divides
it down and then drives the detector of a S/H style of mixer. See for
instance the Elecraft KX3.
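
For illustration only, a sketch of the usual divide-by-4 trick behind it: a
two-flip-flop Johnson counter clocked at 4x the carrier gives the quadrature
switch drive a Tayloe-style sampling detector needs (a generic model, not the
KX3 circuit):

    # Johnson (twisted-ring) counter: two D flip-flops clocked at 4*f
    # produce square waves at f with a 90-degree offset (the I and Q drive).
    def johnson_divide_by_4(n_clocks):
        q1, q2 = 0, 0
        i_out, q_out = [], []
        for _ in range(n_clocks):
            q1, q2 = 1 - q2, q1        # D1 = /Q2, D2 = Q1
            i_out.append(q1)
            q_out.append(q2)
        return i_out, q_out

    i_out, q_out = johnson_divide_by_4(16)
    print(i_out)   # [1, 1, 0, 0, 1, 1, 0, 0, ...] -> frequency = clock/4
    print(q_out)   # [0, 1, 1, 0, 0, 1, 1, 0, ...] -> same frequency, 90 deg later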


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-10 Thread Magnus Danielson via time-nuts

Erik,

On 7/9/22 22:06, Erik Kaashoek via time-nuts wrote:

Getting the simple PNA to lock was a bit difficult due to the overly
simplistic translation of the mixer output to the Vtune of the OCXO.
To get some more flexibility I added a summing opamp that summed the mixer
output with the output of the coarse tuning potmeter. As the summing causes
inversion, one extra inverting opamp was added. This made the loop gain
constant.
To ensure the mixer is in quadrature another opamp was added that amplified
the mixer output into two LEDs. One LED is on when the mixer output is below
zero, the other when it is above zero, and both are dim when the output is zero.
This made tuning the coarse frequency simple. Turn till the blinking stops and
both LED's light up dim. The fine frequency potmeter was no longer needed
and the frequency counter is also no longer needed to get into lock.
With the summing opamp it is also possible to add an integrator but this
has not been done yet.


So, this is where you should attempt the PI loop.

In theory, you have one proportional path P and one integrating path I
that sum to form the EFC. You can imagine this as two op-amps with
inverted gain and then a summing amp to add these two up. Thus, you have
for the P path a resistor in the negative feedback path and for the I
path a capacitor in the negative feedback path.


Such a setup is nice for testing, but a bit excessive as one progresses.
One can actually reduce this to a single op-amp with the resistor and
capacitor of the negative feedback in series, and a common
input resistor.


The integrator part will hold the state that ends up being the DC part
of the EFC. The proportional path will provide the AC path and set the
damping factor for the PLL; you want it well damped.


This would replace your normal loop filter. You would still want a 
filter to reject the sum-frequency out of the mixer.


The P gain is proportional to the PLL bandwidth times the damping factor.

The I gain is proportional to the PLL bandwidth squared.
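
As a rough sketch of how one might ball-park those two gains for a classic
second-order loop, using the standard textbook relations (Kd and Kv are the
assumed phase-detector slope and EFC tuning sensitivity of the particular
hardware, and the natural frequency is treated as the loop bandwidth, which is
good enough for ball-parking):

    import math

    def pi_gains(loop_bw_hz, damping, Kd_V_per_rad, Kv_rad_per_s_per_V):
        """Ball-park P and I gains for a type-2 PLL with a PI loop filter.

        Kp = 2*zeta*wn / (Kd*Kv)   (proportional path, ~ bandwidth * damping)
        Ki = wn**2 / (Kd*Kv)       (integral path per second, ~ bandwidth squared)
        """
        wn = 2 * math.pi * loop_bw_hz
        k = Kd_V_per_rad * Kv_rad_per_s_per_V
        return 2 * damping * wn / k, wn ** 2 / k

    # Example: ~1 Hz loop bandwidth, well damped (zeta = 1), assumed mixer
    # slope of 0.5 V/rad and assumed EFC sensitivity of 2*pi*1 rad/s/V.
    Kp, Ki = pi_gains(1.0, 1.0, 0.5, 2 * math.pi * 1.0)
    print(Kp, Ki)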

The capture range is for all practical purposes infinite (it's wide
enough). The capture time depends on the cube of the PLL bandwidth, so
altering the PLL bandwidth between unlocked and locked conditions has
proven a very useful approach to speed things up if one has a need for
larger lock-in frequencies. Rough-tuning with a trimmer can reduce it
significantly. Lock detection is simply detecting the presence or absence
of the beat-note; that AC component dies away as it locks.


Anyway, the benefit of the PI loop filter is that you can be rather
brutal with the parameters and it will still lock. So, it can be worth
experimenting with. I've found that one can ball-park things fairly quickly,
knowing how to change P and I for the desired PLL bandwidth and damping. Very
experimentally friendly.


I should advise you that any PLL will provide a low-pass filter of the
reference input, and a high-pass filter on the noise inside the loop,
which includes that of the oscillator. This can help you identify likely
sources of disturbance by their frequency in relation to the PLL
loop bandwidth.


Cheers,
Magnus


Shielding is now the biggest problem as any nearby coax connected to a
10MHz source will cause a huge amount of spurs when not at exactly the same
10MHz.
Ultra low noise opamps have been ordered to hopefully reduce the internal
noise of the PNA but the reference OCXO may already be the limiting factor.
The REF voltage output of the OCXO turned out to be rather clean. Much
cleaner than an 8705 voltage regulator.
Erik
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-09 Thread Magnus Danielson via time-nuts

Hi Erik,

On 7/8/22 17:12, Erik Kaashoek via time-nuts wrote:
Not something I want to implement on short notice but maybe for the 
future.
The biggest limitation in this DIY PNA is the phase noise of the 
reference OCXO and the noise of the opamp amplifying the output of the 
mixer.

So I was wondering if it would make sense to do the following
1: Split the output of the DUT into two completely separate PNA's
2: Feed the output of the two PNA's into the PC left/right audio 
inputs where the noise of both ADC's gets added.

3: Do a cross correlation of the two inputs.
This should (as far as I have understood the feedback) eliminate both 
the phase noise of the two independent OCXO's used as reference and 
eliminate the noise of the opamps in the two PNA's and the ADC's, 
given enough time to do the correlation.


This makes perfect sense. You will not remove the noise of the two
channels, but you will get a direct benefit, and as you average the
complex output of successive FFT cross-correlations you will suppress
the measurement noise even further.
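
A bare-bones sketch of that averaging step, assuming x1 and x2 are the
simultaneously sampled baseband outputs of the two channels (window scaling,
noise-bandwidth correction and calibration to dBc/Hz are deliberately left out):

    import numpy as np

    def cross_psd(x1, x2, fs, seg_len=4096, n_avg=100):
        """Average the complex cross-spectrum of two channels over segments.

        The uncorrelated noise of each channel averages toward zero in the
        complex mean; the common DUT noise remains.
        """
        n_avg = min(n_avg, len(x1) // seg_len, len(x2) // seg_len)
        win = np.hanning(seg_len)
        acc = np.zeros(seg_len // 2 + 1, dtype=complex)
        for k in range(n_avg):
            s1 = np.fft.rfft(x1[k * seg_len:(k + 1) * seg_len] * win)
            s2 = np.fft.rfft(x2[k * seg_len:(k + 1) * seg_len] * win)
            acc += s1 * np.conj(s2)
        return np.fft.rfftfreq(seg_len, 1 / fs), np.abs(acc) / n_avg

    # Usage: freqs, sxy = cross_psd(ch1_samples, ch2_samples, fs=48000)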


Have you attempted doing a PI-loop as I've suggested?

However, you benefit greatly from optimizing the performance of a single
channel first before going to cross-correlation. Bob's many good
suggestions should provide you with enough direction. Cross-correlation is
not a replacement for doing the homework well; it's the icing on
the cake.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-09 Thread Magnus Danielson via time-nuts

Hi Mike,

On 7/8/22 15:34, Mike Monett via time-nuts wrote:

You wrote:


Mike,
He was using an analog mixer, but your comment about XOR mixers does
not apply to analog mixers. Your oversimplification that an analog
mixer and an XOR gate are the same thing does not apply here, and
thus the assigned misbehavior does not carry over to the analog
mixer case.
Cheers, Magnus

Magnus,

Thanks for your comment. Here are some attached files:

1. DBMS.PNG

This shows the schematic of a double-balanced mixer. Note the mixer output
is on pin U24. A low pass filter is at R1C1.

2. DBMWFM.PNG

These are the waveforms in quadrature lock. The bottom waveform in red is
the signal at pin U24. It is a square wave at twice the signal frequency.
This signal is identical to an XOR, such as a 7486 logic ic, except the
amplitude is much lower at only 900 mV p-p.

The top waveform in green is the signal at the low pass filter. It is a
triangle wave, the same as you would get from adding a low pass filter to
any square wave. Thus my statement that a double-balanced mixer is an XOR
is accurate.


No, it's not. You address this from the wrong side of things,
treating the waveform and amplitude as the only critical part;
they are not. The way the digital gate behaves does not provide the
same noise dynamics as the DBM does. A DBM has far less noise, which
is why it is beneficial to use, as Bob pointed out.


So, while the large-scale properties are similar between a DBM and an XOR,
their noise behavior is quite different, as is their ability to handle signals
of various amplitudes and the way their behavior changes with amplitude.


Also, all digital gates degrade in performance in the face of higher
amounts of noise. On their way there they compress the noise. A stateful
PFD can step state before it should, and that also compresses the noise
(and makes it larger). See Gardner to cover part of this; I've done
similar work that reflects the same understanding.


As the goal here is to measure very low phase noise, DBMs have proven to
be the best technology until we started oversampling and digital-radio
style processing, which is more expensive but state of the art for wide
frequency systems. More delicate systems with DBMs also achieve state
of the art; see the interferometric methods of Rubiola for instance and also
the cross-correlator approach. I've contributed in that field myself by
contributing the interferometric cross-correlation phase-noise setup,
which combines the techniques to overcome a particular issue with
cross-correlation at the thermal noise floor.


So I continue to disagree with your generalization that DBM and XOR
achieve the same thing; for this purpose they do not. If we were not
looking for as deep a noise floor, but only had moderate S/N needs, I
would agree. I've used XOR gates just fine for such applications, and
there are plenty of such cases; it's just that this is not one of them.




3. DUBLBA01.ASC

This is the double-balanced mixer schematic input for the LTspice simulator.

4. DUBLBA01.PLT

This is the output waveforms from LTspice.

Ordinarily, the triangle ripple output from a double balanced mixer would
add considerable jitter to any PLL. Eric's application avoids this problem
since his loop bandwidth is so low, at much less than 1 Hz. This makes it
extremely difficult for him to obtain lock, which is why I proposed using a
phase/frequency detector.


I suggested a PI loop instead. It avoids all the issues while 
maintaining the noise behavior from the DBM.


The low capture range of Eric's PLL for sure indicates a lack of loop
gain, which also gives low bandwidth, so the end-to-end range of phase
only allows a small adjustment of EFC to steer the oscillator into
lock. A PI loop consisting of two resistors, a capacitor and an op-amp
is a fairly good way to orthogonalize the capture range from the
bandwidth issue.




The first block diagram I posted earlier, PNA.PNG, contained two errors. I
corrected them in PNA2.PNG, which I will post to Eric.

At first, I did not realize the significance of Eric's low loop bandwidth,
and I erroneously assumed the triangle wave ripple output would cause
significant jitter to his loop. It is now obvious the low loop bandwidth
will reduce the ripple amplitude to insignificance, and I now retract my
claim.


The ripple amplitude is also very limited, it only covers a very narrow
frequency range, and thus the beat-note will be at a very low frequency. That
takes ages to lock unless one is already very close, at which point it is
more the remaining lock-in.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-08 Thread Magnus Danielson via time-nuts

Hi,

Well, both amplitudes can be measured. The method I refer to is one of
several out of NIST, so it's not my own invention. See their AM
and PM calibration material.


Using multiple methods you can evaluate how well the method functions.
The side-tone method generates a known PM, with the uncertainty being in the
relative amplitude. It can be easier to validate than a phase-modulator
approach, as that needs calibration.


Cheers,
Magnus

On 2022-07-08 03:57, Bob kb8tq wrote:

Hi

One consideration:

If you do signal injection for calibration, you have the amplitude 
uncertainties on
both the “carrier” and injected signals. The slope at zero on the beat note is 
likely
to be *much* more accurate ( even if gain measurement at audio gets thrown in …)

Bob


On Jul 7, 2022, at 5:19 PM, Magnus Danielson via time-nuts 
 wrote:

Hi,

A well-established method is to use a separate offset RF generator whose
frequency you can steer to form a suitable offset and whose amplitude you can
set to a known level. You can then inject this on top of the signal to measure.
Consider that you steer your offset frequency to be +1 kHz from the carrier you
measure, and you set the amplitude to be -57 dB below the carrier. This is
equivalent to having a -60 dBc phase modulation at 1 kHz.

The RF generator does not have to be ultra-clean in phase noise just reasonably 
steerable in frequency and amplitude.

Cheers,
Magnus

On 2022-07-07 12:47, Erik Kaashoek via time-nuts wrote:

Bob, others.
It has been explained that for the best phase noise level calibration one should
use a signal with one radian phase modulation and measure the output voltage.
The problem with this approach is the unknown gain of the path into the PC. And 
due to the gain one can not modulate with one radian as this saturates the 
whole path.
An alternative method for phase noise level calibration could be to create an 
oscillator so bad its phase noise can be measured using a spectrum analyzer. To 
make such a bad oscillator a 10MHz signal was phase modulated with noise. The 
phase noise became visible on the spectrum analyzer just above 20 degrees of 
modulation. The phase noise level saturated between 55 and 60 degrees which is 
consistent with one radian (57 degrees). The spectrum analyzer could measure
the phase noise at a flat -80dBc/Hz (yes Bob, I better use the right
dimensions).
The simple phase noise analyzer also measured the phase noise at -80dBc 
providing evidence the level calibration was done correctly.
I also tried to increase the DUT drive into the mixer further above saturation
to see if this made any change in the measured level but once above 0dBm I did
not observe any change up to +10dBm drive. Any higher levels felt too dangerous.
There is still a lot of work to be done to further increase accuracy.
Erik.
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-07 Thread Magnus Danielson via time-nuts

Hi,

A well-established method is to use a separate offset RF generator whose
frequency you can steer to form a suitable offset and whose amplitude you
can set to a known level. You can then inject this on top of the signal to
measure. Consider that you steer your offset frequency to be +1 kHz from the
carrier you measure, and you set the amplitude to be -57 dB below the
carrier. This is equivalent to having a -60 dBc phase
modulation at 1 kHz.


The RF generator does not have to be ultra-clean in phase noise just 
reasonably steerable in frequency and amplitude.


Cheers,
Magnus

On 2022-07-07 12:47, Erik Kaashoek via time-nuts wrote:

Bob, others.
It has been explained that for the best phase noise level calibration
one should use a signal with one radian phase modulation and measure
the output voltage.
The problem with this approach is the unknown gain of the path into 
the PC. And due to the gain one can not modulate with one radian as 
this saturates the whole path.
An alternative method for phase noise level calibration could be to 
create an oscillator so bad its phase noise can be measured using a 
spectrum analyzer. To make such a bad oscillator a 10MHz signal was 
phase modulated with noise. The phase noise became visible on the 
spectrum analyzer just above 20 degrees of modulation. The phase noise 
level saturated between 55 and 60 degrees which is consistent with one 
radian (57 degrees). The spectrum analyzer could measure the phase 
noise at a flat -80dbc/Hz ( yes Bob, I better use the right dimensions)
The simple phase noise analyzer also measured the phase noise at 
-80dBc providing evidence the level calibration was done correctly.
I also tried to increase the DUT drive into the mixer further above
saturation to see if this made any change in the measured level but
once above 0dBm I did not observe any change up to +10dBm drive. Any
higher levels felt too dangerous.

There is still a lot of work to be done to further increase accuracy.
Erik.
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: Isolation amp transistors

2022-07-07 Thread Magnus Danielson via time-nuts

Hi,

On 2022-07-07 07:22, Bob kb8tq via time-nuts wrote:

Hi


On Jul 6, 2022, at 1:53 PM, Richard Karlquist via time-nuts 
 wrote:

The 2N5179 has high base spreading resistance (decreases isolation).

As does sticking a resistor (even a small one) in series with the base …. Yes, 
inductance
is even worse.

For “best isolation” in a cascode you very much want the base of the common base
stage nailed to ground. Typically “lower” Ft transistors with a decent base 
structure
are the best choice for the common base stage. Both stages benefit from low 1/F 
noise
in the audio range if this is for a phase noise test set.  This is why people 
use what would
normally be considered “audio” transistors ….


The NIST isolation amplifiers do exactly this. Searching for Fred Walls
in the NIST T archive usually lets me find the article quickly.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-06 Thread Magnus Danielson via time-nuts

Mike,

He was using an analog mixer, but your comment about XOR mixers does not
apply to analog mixers. Your oversimplification that an analog mixer and an
XOR gate are the same thing does not apply here, and thus the
assigned misbehavior does not carry over to the analog mixer case.


Cheers,
Magnus

On 2022-07-05 12:27, Mike Monett via time-nuts wrote:

Eric,

Another problem I forgot to mention, the exclusive-or phase detector has a
severe output ripple. This will cause frequency shift in the oscillator
frequency which will show up in the measurements.

The phase-frequency detector has zero ripple at lock. There is a small
transient at the sample time, but this is easily filtered with a simple low
pass filter.

With zero ripple in the output, the PFD will not cause any shift in the
oscillator frequency. This will not cause any error in the measurements.
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: DIY Low offset Phase Noise Analyzer (Erik Kaashoek)

2022-07-06 Thread Magnus Danielson via time-nuts

Hi,

On 2022-07-05 12:13, Mike Monett via time-nuts wrote:

You stated:

Mike,
The phase detector is an ADE-1 mixer, the IF output of the mixer goes
into a loop filter that has a corner frequency of about 0.2Hz to enable
Phase noise measurements down to 1Hz offset

That is your problem. A double balanced mixer is an exclusive-or phase
detector. The lock range is determined by the loop bandwidth, as you have
found.

The phase-frequency detector is completely different. It will lock to any
signal in the lock range, independent of loop bandwidth. You can have a
bandwidth of 0.001 Hz, and it will still lock. Think of what this could do
for your phase measurements.


Actually, there are two schools here.

There is the school of stateless phase-detectors (such as mixers) and
the school of stateful phase-detectors (such as the three-state
phase/frequency detector).


Now, in the school of stateless phase-detectors (mixers, XOR gates,
samplers, etc.), the capture range becomes dependent on the loop gain.


For passive lag filters, you will need a significant static
phase difference on the input to provide the state of the EFC that
compensates for the frequency. It is simply that the DC voltage
difference coming out of the detector, through the DC gain of the
filter, is what becomes the EFC.


In active lag filters, you add additional gain, and this requires a lower
phase-detector voltage to support the same EFC error.


Both of these actually keep an implicit state in the phase detector to
compensate for the lack of state elsewhere. It is just that the phase
detector does not hold it as explicit state.


In PI filters, the state of the frequency error is moved from the phase
detector to the filter. The integrator has close to infinite DC gain
(naturally limited in practice, but for many purposes we can assume it
to be infinite) such that it drives the DC phase offset out of the phase
detector to zero and builds up the needed EFC state in the integrator
capacitor. This has the benefit that the capture range is in theory
unlimited, and even though the actual range is in practice limited, it is so
wide that we can treat it as infinite for most cases. The PI loop thus
does not need any form of aiding to lock up. However, aiding it can
improve the lock-up time. You could either pre-trim the EFC or you could
increase the PLL bandwidth to achieve quick lock-up. The latter is
actually very simple and has a huge impact.


The thing people do wrong with PI filters is to scale the bandwidth on 
the output side of the integrator. This is wrong, as one then needs to 
scale the output to maintain the acquired state to match the needed EFC. 
The right way to do it is to scale it on the input side. That way the 
scaling to EFC is maintained and no state-scaling is needed.


As one scales the bandwidth through I one needs to scale P accordingly 
to maintain good damping properties.


Fairly simple PI-loop setups allow for good lockup and stability properties.
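
For anyone doing this digitally rather than with op-amps, a minimal sketch of
that input-side scaling (purely illustrative; bw is a dimensionless
bandwidth-scaling factor, and the base gains depend on your detector and EFC
sensitivities):

    class PILoop:
        """PI loop filter with the bandwidth scaling applied on the input side."""
        def __init__(self, p_gain, i_gain):
            self.p = p_gain
            self.i = i_gain
            self.state = 0.0   # integrator state; ends up as the DC part of the EFC

        def step(self, phase_err, bw=1.0):
            # Scaling on the input side keeps the stored state in EFC units,
            # so bw can be changed (wide while acquiring, narrow when locked)
            # without rescaling the state.
            self.state += self.i * bw * bw * phase_err   # I scales as bandwidth squared
            return self.state + self.p * bw * phase_err  # P scales as bandwidth (times damping)

    # Typical use: run with bw = 10 until the beat-note dies away, then drop to bw = 1.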

Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Should a double oven XO be thermally isolated or just draft protected?

2022-07-01 Thread Magnus Danielson via time-nuts

Hi David,

We did a fairly simple measurement setup at work.

We had the oscillator sitting on a small test-board and measured the
frequency from the start. Then, a few seconds in, we shifted the direction of
a fan at some distance onto the oscillator. We then did this with a
variation of simple shields, and concluded that a fairly simple wind
shield achieved most of the gains we were after. We have since reapplied this
in various incarnations, and it has not given us any reason to do things
differently; rather, once the lesson was learned, it proved effective in
many places, as forced convection is an unfortunate necessity in our
products.


As most oscillators have a metal can, they conduct heat well, and if
there is no direct forced-air convection onto the can, radiation and
still-air conduction are fairly well evened out and
thus produce smaller temperature gradients in the oscillator.


I've also seen the 5370A/B shield. It works and solves the problem, but 
often you can use simpler setups with good results too.


So, it comes down to not really shielding it from long-term temperature
variations, but just not making the situation much worse than it needs to be.


If one has a box with relatively low power consumption per unit volume,
forced air is not needed, and the need for shielding can be relaxed. Just
putting the oscillator away from heat sources, and in particular heat
sources that vary over time, goes a long way. The important part is that it is
in a thermally quiet corner, which includes air and air-flows.


We had a pair of students doing work during summer vacation. They were
measuring the phase stability of one of our boxes. Three hours into the
measurements the variations seemed to go away. They were completely
puzzled. It was showing clear variations and then the systematics died
away. So I just asked them when they started the measurement. "Around
15:00"; well, that was all I needed. I informed them that the building AC
turned off at 18:00, and what they were measuring was variations in
the ambient air conditioning. They were flabbergasted and wondered how it
could have such an effect. So I pulled the board out of the chassis and
showed them the oscillator location and how the side-wise
blowing air hit the can. I then found them some foam tape and advised
them to apply it to the oscillator and redo the measurement. It was much
flatter, naturally. However, for that product it was good enough without
the shielding, but we have changed modus operandi for all our products since,
for this very good reason.


Cheers,
Magnus

On 2022-07-01 22:31, Dr. David Kirkby via time-nuts wrote:

On Fri, 1 Jul 2022 at 20:11, Erik Kaashoek via time-nuts <
time-nuts@lists.febo.com> wrote:


I'm trying to build a stable reference for a phase noise meter project
and have acquired a double oven XO that boosts high short term stability
(below 1e-12/s). But the spec also states that, even with the double
oven, there is still substantial impact of environmental temperature
changes (below 1e-8 changes over the normal operating temperature range)
so I was wandering if its good practice to try to thermally isolate the
DOCXO or do you run the risk of overheating as it always may burn some
power and its better to only shield it from draft?


I removed an HP 10811A OCXO from a 5370B time interval counter the other
day and put it into a HP 5352B 40 GHz frequency counter. One thing that
really struck me is that in the 5370B there was a shroud around the OCXO,
which is around 5 mm away from the sides of the OCXO. It's made of
aluminium. But there's nothing like that in the frequency counter. The two
attached photographs show a significant difference. I took the photograph
from inside the 5352B frequency counter. The photo of the 5370B was one I
just found on the EEV blog site, as I did not want to have to mess around
taking another photograph.

I see Magnus respond to you.

My gut feeling is the designers of the 5370B were likely to have more
knowledge about the behaviour of oscillators than the frequency counter
designers, which makes me wonder if adding something around the oscillator
in the frequency counter, like in the 5370B time-interval counter, might be
a good idea.

Unfortunately I suspect it would be very time-consuming to evaluate the
difference a shield would make in the frequency counter, I have another HP
frequency counter where the fan blows over the oven, which does not seem a
very good idea.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: Should a double oven XO be thermally isolated or just draft protected?

2022-07-01 Thread Magnus Danielson via time-nuts

Hi Eric,

On 2022-07-01 16:40, Erik Kaashoek via time-nuts wrote:
I'm trying to build a stable reference for a phase noise meter project 
and have acquired a double oven XO that boosts high short term 
stability (below 1e-12/s). But the spec also states that, even with 
the double oven, there is still substantial impact of environmental 
temperature changes (below 1e-8 changes over the normal operating 
temperature range) so I was wandering if its good practice to try to 
thermally isolate the DOCXO or do you run the risk of overheating as 
it always may burn some power and its better to only shield it from 
draft?


You should be careful not to isolate it too much.

OK, let's get the basics. The oven aims to maintain a certain
temperature by running a heater continuously, balancing the heating against
the cooling to the surroundings as it dissipates heat. This is equally
true for double and triple ovens; they just have different temperature
settings.


Now, if you over-isolate any oven, the heat transfer will be too low, so
the heater will overshoot the heating. When this happens, the heater
turns fully off and the oven will coast down unregulated until the
temperature is low enough. What you then end up with is a bang-bang regulator
causing a saw-tooth-like heating profile. This is a worse situation than before.


Naturally, this all depends on the design of the oven and how its
setpoints are done, but the ambient temperature specification gives a
clue of how far you can go. You need to retain enough thermal loading to
maintain that minimum heat conduction out of the oven. For passive isolation
you need to respect the highest ambient temperature (of the oven) for all
ambient temperature conditions of the device you build. Isolation needs
to be done carefully, and passive stability is hard. Active measures
can naturally help, but then you need to handle cooling.


Rather than thinking isolation, you should avoid direct
variations of the forced-convection air path. Essentially, put the OCXO in a
draft-free corner: wind-shield it, but do not add any actual
isolation, so that heat conduction away from the OCXO is maintained. That
works really well. Either just a few metal walls or a plastic cap around
it will suffice to get much of the effect without excess danger of
over-isolation.


For test purposes, you will find that I often put oscillators inside a
cardboard box with some antistatic bubble-wrap around them. Not enough
thermal mass for long-term things, but good enough to remove much of the
quicker fluctuations. There is usually an ADEV bump at 500-1500 s
traceable to the heating/AC. Similarly, a beach towel has served the
purpose for larger things.


Cheers,
Magnus


___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: First PN measurement results at 1 Hz to 20 kHz from carrier

2022-06-27 Thread Magnus Danielson via time-nuts

Hi,

On 2022-06-27 18:06, Bob kb8tq via time-nuts wrote:

Hi


On Jun 27, 2022, at 1:43 AM, Erik Kaashoek via time-nuts 
 wrote:

Magnus, Bob,
When the mixer is operating in the linear region for the DUT input (0dBm or 
lower), would it be possible to use a calibrated noise sources  to do an extra 
verification of the noise level measurement?
Of course with a noise source you get 3dB as both sidebands fold.

The “normal” approach is to put the mixer into saturation. This gives you the
best noise floor. It also does a bit better at separating AM noise from PM noise
(since you are trying to measure phase noise …..).


It could be worth mentioning that there are both linear and non-linear
AM->PM conversion as well as PM->AM conversion. They *should* be
perfectly separate, but real-life circuits are unable to maintain perfect
symmetry.


Now, the AM noise tends to be stronger than the PM noise, so leakage
tends to be more severe in that direction, in that it causes more harm.
Running the signal into clipping is a non-linear way to cut away most of
the AM and maintain the PM, and such limiter detectors turn out to be
very good for FM reception (FM is just a variant of PM).


Modern high-speed ADC receivers with arctan CORDIC processing have
superior AM and PM separation and avoid much of those problems. Just
for reference; it does not help here.


A linear filter provides AM->PM conversion when the upper side-band and
the lower side-band do not have the same gain for the same offset
frequency. This asymmetry converts AM (even) to PM (odd) as well as PM
(odd) to AM (even), while AM to AM and PM to PM reduce in amplitude. Even a
simple low-pass filter will do this, and to minimize its effect the
cut-off frequency should be higher than the carrier frequency so the response is
essentially flat and equal for both LSB and USB. Similarly, a
resonator should be tuned to the carrier frequency, or the upper and
lower slopes will not match well.
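
A small numeric illustration of that asymmetry, using a complex-baseband model
with assumed numbers (1% AM depth, and a filter that passes the USB at unity
gain but the LSB 1 dB lower):

    import numpy as np

    m = 0.01                                    # AM depth (assumed)
    theta = np.linspace(0, 2 * np.pi, 1000)     # one cycle of the modulation tone
    for g_usb, g_lsb in [(1.0, 1.0), (1.0, 10 ** (-1 / 20))]:
        # Pure AM = carrier plus two equal, in-phase sidebands (baseband model)
        s = 1 + g_usb * (m / 2) * np.exp(1j * theta) + g_lsb * (m / 2) * np.exp(-1j * theta)
        print(f"LSB gain {g_lsb:.3f}: peak PM = {np.max(np.abs(np.angle(s))):.2e} rad")
    # Equal gains: essentially zero PM. A 1 dB LSB/USB imbalance converts part
    # of the 1% AM into roughly (m/2)*(g_usb - g_lsb) = 5e-4 rad of peak PM.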


Anyway, wanted to take the opportunity to explain a little more.




Verification steps:
Verify the DUT output level is correctly brought to 0dB (using attenuators) 
using a calibrated spectrum analyzer
Connect the DUT to the phase measurement setup and set the reference to a 500Hz 
offset to get a beat note and verify the beat note is registered at 0dB, change 
the DUT level some dB up and down to confirm its in a linear region.
Measure the per Hz output power of a noise source using a calibrated spectrum 
analyzer and a noise marker set to 10MHz.
Connect the noise source to the phase measurement setup and check if the noise 
level is measured at level measured by the spectrum analyzer + 3dB
This should work if the RBW of the phase measurement is indeed set to 1Hz.

If you do it this way, you still need to do conversion for the “one radian” 
reference
level that is used with phase modulation ( = the reference is *not* one cycle 
). Yes
that’s a bit weird / obscure.
But then again, it's exactly what the standard says. We even rewrote 
that section to clarify it.




Another verification option may be to use the phase modulation of a signal 
generator. This can not check the effective noise bandwidth of the FFT but it 
can check linearity over the whole range.
The output of the mixer is terminated with 50ohm so a factor of 10 in voltage 
should give a 20dB power step.

Audio termination at 50 ohms does not do much for isolation. (again a bit of an 
obscure
point). By terminating in 10X the nominal impedance ( so 500 ohms in this case) 
you get
another 6 db of gain in the system. Since this is ahead of the preamp, it might 
improve
your noise floor.


You may however want to have a 50 Ohm termination for RF frequencies
(the sum frequency as well as leakage of the carrier frequency). The NIST T
archive has illustrations of different mixer loading networks such that
the load is high impedance at LF but low impedance at RF. Loading down the LF,
which is the detected signal, makes no sense, as we amplify the voltage and
not the power received. We do want to terminate the RF so that it is not
reflected back into the mixer in a thermally unstable fashion. If the RF
termination can be close enough to the mixer, other impedances than 50
Ohm can be considered for an even more optimal result, but unless one knows
what one is doing, I'd say stick with 50 Ohms, for that is what the mixer is
most likely designed for.


Cheers,
Magnus



Bob



When operation in the linear range the phase noise measurement setup should 
measure 20dB less with every factor 10 reduction in phase modulation depth 
where 90 degrees is equal to 100% modulation depth so equal to the signal you 
get when measuring a beat note.
When measuring with modulation depth of 90,9,0.9,0.09 and 0.009 degrees the 
measured level should step from 0,-20,-40, -60 to -80dB

Any feedback?
Erik.

On 26-6-2022 20:52, Magnus Danielson via time-nuts wrote:

Hi Erik!

Great progress! Sure interesting to look at them phase-noise plots, ri

[time-nuts] Re: First PN measurement results at 1 Hz to 20 kHz from carrier

2022-06-27 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-06-27 11:43, Erik Kaashoek via time-nuts wrote:

Magnus, Bob,
When the mixer is operating in the linear region for the DUT input 
(0dBm or lower), would it be possible to use a calibrated noise 
sources  to do an extra verification of the noise level measurement?


Yes. NIST built such calibrators for exactly that purpose.


Of course with a noise source you get 3dB as both sidebands fold.

Naturally.

Verification steps:
Verify the DUT output level is correctly brought to 0dB (using 
attenuators) using a calibrated spectrum analyzer
Connect the DUT to the phase measurement setup and set the reference 
to a 500Hz offset to get a beat note and verify the beat note is 
registered at 0dB, change the DUT level some dB up and down to confirm 
its in a linear region.
Measure the per Hz output power of a noise source using a calibrated 
spectrum analyzer and a noise marker set to 10MHz.
Connect the noise source to the phase measurement setup and check if 
the noise level is measured at level measured by the spectrum analyzer 
+ 3dB
This should work if the RBW of the phase measurement is indeed set to 
1Hz.


500 Hz is a kind of arbitrary number you chose there. At the very least it
should be verified, but I would assume there is an underlying goal which
made you choose 500 Hz, and that should be specified.


The RBW is not the noise BW. You need to correct the bin bandwidth with
the specific noise-bandwidth correction for the window filter you use.
There is a neat article [1] on it which I also contributed to IEEE 1139.
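
As a minimal sketch of that correction: compute the equivalent noise bandwidth
(ENBW) of the window numerically and divide the bin power by it to get a
per-Hz density (a Hann window is used here purely as an example; whatever
window your FFT software applies has its own factor):

    import numpy as np

    def enbw_hz(window, fs):
        """Equivalent noise bandwidth of an FFT window, in Hz."""
        return fs * np.sum(window ** 2) / np.sum(window) ** 2

    fs, n = 48000, 4096
    print(enbw_hz(np.hanning(n), fs))   # ~1.5 * fs/n for a Hann window
    # A power spectrum estimated with this window is divided by enbw_hz(...)
    # rather than by the bin spacing fs/n to get a density in power per Hz.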




Another verification option may be to use the phase modulation of a 
signal generator. This can not check the effective noise bandwidth of 
the FFT but it can check linearity over the whole range.
The output of the mixer is terminated with 50ohm so a factor of 10 in 
voltage should give a 20dB power step.
When operation in the linear range the phase noise measurement setup 
should measure 20dB less with every factor 10 reduction in phase 
modulation depth where 90 degrees is equal to 100% modulation depth so 
equal to the signal you get when measuring a beat note.
When measuring with modulation depth of 90,9,0.9,0.09 and 0.009 
degrees the measured level should step from 0,-20,-40, -60 to -80dB


For higher modulation depths it's very hard to do linear modulation well.
There is a different variant you can use: inject a signal to form a
side-band next to the carrier. By creating a specific amplitude
compared to the carrier, at a particular offset, it's a trivial
exercise to know the AM and PM noise level. A single-sideband sine will
divide equally into AM and PM, thus reducing each by 3 dB. Thus, setting a
side-band at 27 dB below the carrier makes a -30 dBc phase modulation.
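
The arithmetic behind that, as a short sketch (it also covers the -57 dB /
-60 dBc injection example from earlier in the thread):

    # A single injected sideband splits equally into AM and PM, so the PM part
    # sits 3 dB below the injected level (and so does the AM part).
    def pm_level_from_sideband(sideband_dbc):
        return sideband_dbc - 3.0

    for sb_dbc in (-27.0, -57.0):
        print(f"sideband at {sb_dbc} dBc -> {pm_level_from_sideband(sb_dbc)} dBc phase modulation")
    # -27 dBc -> -30 dBc PM (this example); -57 dBc -> -60 dBc PM (the +1 kHz
    # injection example earlier in the thread).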


[1] 
https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_152164


Cheers,
Magnus



Any feedback?
Erik.

On 26-6-2022 20:52, Magnus Danielson via time-nuts wrote:

Hi Erik!

Great progress! Sure interesting to look at them phase-noise plots, 
right? It's a really good tool in addition to the stability of ADEV 
and friends.


As I recall it, the ADE-1 is not documented to be isolated, but it is 
very obvious when you look down the backside of it. However, it has 
capacitive coupling and one should consider both common mode 
rejection and common mode loading it down for these to work well.


Word of caution when it comes to levels, as the windowing filter used 
causes shifts in noise-levels, so estimation of noise-levels becomes 
a little bit tricky as you try to get the nitty gritty right, but 
getting the overall shape view you already gained a lot with the 
things you achieved.


A technique used to push further down into lower noise-levels is the 
cross-correlation technique, where you split the signal into two 
channels, each being exactly what you have now, and then rather than 
squaring the output of the FFT from each channel, you multiply one
with the complement of the other, then average those. This allows
you to suppress the noise of each reference oscillator. You do not
have to go there from start, as you already make very useful 
measurements, but I'm just suggesting what may lie up ahead.


Compared to some of the other sources, the Rigol SG does fairly well, 
but then again, things can be even more quiet. For the XO you can see 
the 15 dB/Oct slope as expected for flicker frequency. Try to locate 
the source of the peaks you see and see if you can clean it up. The 
XO seems to be a fairly good DUT for doing that.


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Repeatability of stability measurements

2022-06-27 Thread Magnus Danielson via time-nuts

Hi Hans-Georg,

Maybe it is better to refer to this as the inverse of 20.48 GHz, since
that is the virtual clock rate of the interpolated coarse clock.
Considering that it is 2048 * 10 MHz, it is not hard to imagine that a
coarse clock of, say, 80 MHz is interpolated by 256 to achieve that,
which is entirely reasonable. 160 MHz and 128 is another example. A look
in the manual should indicate which one it is.

Cheers,
Magnus

On 2022-06-25 21:10, Hans-Georg Lehnard via time-nuts wrote:

Sorry, this was completely nonsense .. I corrected the resolution factor
and forgot the "e", so I got 4.88281248-11 as the factor and scaled
the timelab plots with it. My interpretation is just as stupid.

The correct factor is 4.88281248e-11.

Am 2022-06-25 17:15, schrieb Hans-Georg Lehnard via time-nuts:


First of all the resolution factor from my first post is a copy error
the correct value is 4.88281248-11 (less decimal places). In the
attached diagrams the correct factor was used.

I generated testfiles with 2046,2047,2048,2049,2050 intervals, loaded
into timelab and scaled them with 4.8828124998-11.

MDEV shows more noise as real measurements . Another testfile with 2000
intervals and scaled with 5e-11 shows similar results.

In the Frequenc difference plot you can see the difference grows
stepwise with time. The zoom shows where the 2048 intervals are already
in the next time step and the 2000 intervals are not yet. By zooming in
you can also see this between the 2046 and 2050 intervals.

Possibly an overflow or rounding error ?

I think the overlap of this effect with the white noise of the real
measurements creates my measured jumps.  More noise attenuates this
step-like progression.
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: First PN measurement results at 1 Hz to 20 kHz from carrier

2022-06-26 Thread Magnus Danielson via time-nuts

Hi Erik!

Great progress! Sure interesting to look at them phase-noise plots, 
right? It's a really good tool in addition to the stability of ADEV and 
friends.


As I recall it, the ADE-1 is not documented to be isolated, but it is 
very obvious when you look down the backside of it. However, it has 
capacitive coupling and one should consider both common mode rejection 
and common mode loading it down for these to work well.


A word of caution when it comes to levels: the windowing filter used
causes shifts in noise levels, so estimation of noise levels becomes a
little bit tricky as you try to get the nitty-gritty right, but in getting
the overall shape view you have already gained a lot with the things you
achieved.


A technique used to push further down into lower noise levels is the
cross-correlation technique, where you split the signal into two
channels, each being exactly what you have now, and then rather than
squaring the output of the FFT from each channel, you multiply one with
the complement of the other, then average those. This allows you to
suppress the noise of each reference oscillator. You do not have to go
there from the start, as you already make very useful measurements, but I'm
just suggesting what may lie ahead.


Compared to some of the other sources, the Rigol SG does fairly well, 
but then again, things can be even more quiet. For the XO you can see 
the 15 dB/Oct slope as expected for flicker frequency. Try to locate the 
source of the peaks you see and see if you can clean it up. The XO seems 
to be a fairly good DUT for doing that.


Cheers,
Magnus

On 2022-06-25 20:07, Erik Kaashoek via time-nuts wrote:
Thanks to all the great help from people on this list I was able to 
make some progress in doing close-in phase noise measurements.
The setup consists of a VC-OCXO going into the LO port of an ADE-1
mixer, the DUT into the RF port; the IF port is low-pass filtered and
used to steer the VC-OCXO and is sent to a high quality 24 bit USB
audio capture unit connected to a PC running ARTA.
The ADE-1 mixer was selected because all its ports are completely
isolated from each other, there is no common ground, which helps to
reduce ground loop problems a bit.
The log plots from ARTA confirm a 130dB dynamic range and the 
resolution bandwidth is supposed to be about 1Hz. Each plot was 
averaged over 10 measurements
All input signals were normalized by attenuators to have their
carrier at 0dB in the plot.
Using a generator at variable frequency offset it was confirmed the 
audio input is flat down to 1Hz.
Using a generator with phase modulation down to 0.001 degree the 
sensitivity of the measurement chain was checked. (20dB level 
reduction with every factor 10 reduction in phase modulation depth)

It is expected to have at least +/-5dB level inaccuracy.
The DUTs measured were:
- a fairly clean XO (PN_XO.JPG)
- a rather bad GPSDO output (PN_GPSDO.JPG)
- The not so famous cheap Chinese TCXO (PN_TCXO.KPG)
- The output of a Rigol SG (PN_Rigol.JPG)
The XO is the cleanest
The TCXO shows odd spurs between 10 and 40 Hz and the PN does not drop 
down as it should (spec states: -135dBc/Hz at 1kHz offset)
The GPSDO is terrible, this demonstrates you can have a 1e-10 ADEV at 
1s tau from a bad oscillator.
The Rigol is not so clean and a PLL shoulder seems to be present just 
above 1kHz.


Next step is to add low noise gain close to the mixer LPF output to 
get more dynamic range and a better VC-OCXO (Morion MV170 (PN 
-100dBc/Hz at 1Hz offset) to lower the impact of the reference VC-OCXO

Erik.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Repeatability of stability measurements

2022-06-26 Thread Magnus Danielson via time-nuts

Hi Hans-Georg,

Your MDEV slope is not that of white noise, but that of (correlated)
systematic damping. You have 1/tau^2 rather than the expected 1/tau^1.5.
Also, your levels are way off. This steeper slope for systematics is not
widely documented, by the way, but it is a direct consequence of the math.


This is where I flip between the wrapped-phase (w) and unwrapped-phase
(p) views in TimeLab and figure out what is going wrong. That can be one hint.


Looking at your frequency difference plot, it may be that you have multiple 
slips. Could it be that you lose data samples and thus the phase slope 
jumps?


My main concern here is the continuity of your data. That gives this 
kind of severely distorted plot that swamps the real measurement quickly, 
and is a sure give-away.
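
A small sketch of the kind of continuity check meant here, assuming the
raw data is available as sample times t (s) and phase readings x (s);
the thresholds are illustrative guesses:

import numpy as np

def find_gaps_and_slips(t, x, tau0, k=6.0):
    # Lost samples show up as intervals longer than tau0; phase slips
    # show up as outliers in the per-interval fractional frequency.
    t, x = np.asarray(t), np.asarray(x)
    dt = np.diff(t)
    gaps = np.where(dt > 1.5 * tau0)[0]
    y = np.diff(x) / dt
    slips = np.where(np.abs(y - np.median(y)) > k * np.std(y))[0]
    return gaps, slips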


Cheers,
Magnus

On 2022-06-25 17:15, Hans-Georg Lehnard via time-nuts wrote:

First of all, the resolution factor from my first post is a copy error;
the correct value is 4.88281248e-11 (fewer decimal places). In the
attached diagrams the correct factor was used.

I generated test files with 2046, 2047, 2048, 2049 and 2050 intervals,
loaded them into TimeLab and scaled them with 4.8828124998e-11.

MDEV shows more noise than the real measurements. Another test file with
2000 intervals, scaled with 5e-11, shows similar results.

In the Frequency difference plot you can see the difference grows
stepwise with time. The zoom shows where the 2048 intervals are already
in the next time step and the 2000 intervals are not yet. By zooming in
you can also see this between the 2046 and 2050 intervals.

Possibly an overflow or rounding error?

I think the overlap of this effect with the white noise of the real
measurements creates my measured jumps.  More noise attenuates this
step-like progression.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: Repeatability of stability measurements

2022-06-24 Thread Magnus Danielson via time-nuts

Hi Hans-Georg,

Does the E1740A also have the STP2945 as timing reference?

First of all, you have nice illustrations of the white phase modulation 
noise slopes, of 1/tau^1.5 as expected in the MDEV. Trouble is, it's not 
pure WPM noise, but a mixture of the noise and the quantization effect. 
So, it would be good if the E1740A is known to be either free-floating or 
locked to one of the references.


By using MDEV you utilize the averaging benefits over the ADEV, giving a 
sqrt(1E7) benefit as you hit 1 s. However, while your slope will roughly 
coincide with the single-shot resolution, its exact position is more 
complex. The relative frequency may cause the resolution to go up and 
down due to its systematic nature, and the noise in addition to the signal 
will intermodulate with the systematics.


If you have both of these long-term locked to a stable source, it should 
stabilize. Also, since you have a slow beating pattern of the 
systematics, exactly when you measure may decide if it is high or low on 
that valley.


You very clearly see systematic noise in the phase noise. These are for 
sure fighting against you.


Do more of these phase noise analyses and see if any of the systematics 
there coincide with high or low noise, or if indeed the noise floor 
moves up and down.


This slope is trickier than it gets credit for.

Cheers,
Magnus

On 2022-06-23 22:56, Hans-Georg Lehnard via time-nuts wrote:

I am trying to find out the limits of my measuring devices and
oscillators. Therefore I made 2 series of measurements with the same
conditions. In the first series I got 1 out of 10 and in the second 2
out of 10 results with much less noise. I looked at the results in
Timelab and then exported one good and one not so good result (recorded
4 minutes earlier) to Stable32 for further analysis. Is this a problem
of my measurement setup or an artifact in the measurements? My
measurement device is an HP E1740A time interval analyzer with about 50 ps
(4.88281248046875e-11) resolution. A Samsung STP2945LF (free
running) from my GPSDO as reference and the reference output of my
BG7TBL FA2 as measurement object. Attached are the results from the 2
series and the 2 further analyzed from the first series.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: Fixing PN degradation via ADEV measurement

2022-06-20 Thread Magnus Danielson via time-nuts

Hi Jim,

On 2022-06-20 17:57, Lux, Jim via time-nuts wrote:

On 6/20/22 2:39 AM, Magnus Danielson via time-nuts wrote:



So, a counter is really like an ADC for phase, with a wide-bandwidth 
input and a sub-sampling mechanism (trigger/time-base). Through 
processing, frequency estimates can be provided. Aliasing occurs in 
the sub-sampling. Modern counters can provide estimation filters 
that go from a higher sub-sampling rate to a lower one, which to some 
degree removes aliasing, but not fully. These frequency estimation 
methods form a kind of decimation filter.


Cheers,
Magnus 


An intriguing thought as I drink my first cup of coffee (meaning it's 
not well thought out)..

Enjoy!


jumping off from "counter is similar to an ADC for phase" - is there a 
time-domain equivalent of the Nyquist criterion?  Certainly there's the 
cycle ambiguity.. you know when the zero crossing occurred, but not how 
many are in between (although a counter usually does). For everything 
else there is a frequency/time duality, so I suspect there is.  The 
criterion is usually explained in terms of information - so there 
should be an equivalent "has all the information" statement for 
counters/gate widths/precisions.


Well, considering that the optimum phase/time sensitivity is at the 
through-zero of a sine, where the slew-rate of the signal is optimum, you 
have two observation points per cycle. You can view that as having 
essentially two sample points of phase per cycle. Similarly, you will 
have two optimal sample points for amplitude, in quadrature, on the peaks 
of the sine.


Now, using this fact, you have a Nyquistian type of relationship, with 
the upper phase-information frequency being that of the carrier itself, 
since you can fit a modulation that pushes the rising edge one way and 
the falling edge the other way. As you attempt a higher modulation 
frequency you cannot distinguish it from the mirror frequency lower 
than that frequency. Thus, the Nyquist frequency of modulation is the 
carrier frequency.
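
A quick numerical check of that statement, under the simplifying
assumption that phase is observed exactly twice per carrier cycle (at
the two zero crossings); all values are illustrative:

import numpy as np

f0 = 10e6                        # carrier frequency
fm = 3e6                         # modulation below the carrier
fm_mirror = 2 * f0 - fm          # its mirror above the carrier
m = 0.01                         # modulation index, rad

t = np.arange(200) / (2 * f0)    # two phase observations per carrier cycle
phi_low = m * np.sin(2 * np.pi * fm * t)
phi_high = m * np.sin(2 * np.pi * fm_mirror * t)

# The two modulations produce the same samples (up to sign), i.e. they alias.
print(np.max(np.abs(phi_low + phi_high)))   # at numerical-noise level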


But then again, the same can be said for any overtones, so you can 
support higher modulation frequencies there, with the same basic rule. 
However, sorting that out can be a bit tricky, considering non-linear 
functions and intermodulations.


PS. IEEE Std 1139-2022 made it through formal approval after 
balloting, so now it is off for last editorial touch-ups before 
publishing. Good news. I look forward to putting it into use.


Cheers,
Magnus



___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Fixing PN degradation via ADEV measurement

2022-06-20 Thread Magnus Danielson via time-nuts

Erik,

A counter actually makes a number of phase measurements. Then, as you 
process them you get a frequency readout based on the difference between 
them (event count divided by time between phase measurements). Now, as 
you want to do frequency readout, you can apply a handful of filtering 
mechanisms, and the CNT-91 can do linear regression. This filtering 
takes a number of samples and provides a filter to estimate frequency. 
The consequence of that is that the variations you see now have a different 
scale than if you did the original calculation with only two 
phase samples. This creates a bias function, and the variations need to be 
corrected for to get numbers you can relate to the normal scale. It's 
great for giving better frequency readings, but if you aim to quantify 
the variations you end up fooling yourself.
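
A tiny sketch of the difference in question, assuming x is a numpy
array of phase samples (s) taken at interval tau0: the plain estimate
uses only the end points, the regression estimate uses all of them and
hence has a different noise bandwidth and bias:

import numpy as np

def freq_two_point(x, tau0):
    # Classic two-phase-sample estimate: last minus first over elapsed time.
    return (x[-1] - x[0]) / ((len(x) - 1) * tau0)

def freq_regression(x, tau0):
    # Least-squares (linear regression) estimate, as offered by e.g. the CNT-91.
    t = np.arange(len(x)) * tau0
    slope, _intercept = np.polyfit(t, x, 1)
    return slope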


Also, your assumption about the observation frequency and Nyquist is wrong. 
It turns out that aliasing of higher frequencies is very problematic. Only 
very recent instruments have the ability to avoid aliasing 
(by using digital decimation), but a counter is not one of them; it is 
fully exposed to the aliasing problem.


There are translation charts to convert the noise amplitude of each 
noise type into phase-noise and ADEV readings. If you have truly random 
noise obeying the rules, you can convert between them. Toss in a spur 
and it works differently, and well, you need to convert those too 
according to other rules. Look for "Enrico's chart".
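
For orientation, a sketch of the usual power-law translation into Allan
variance (the h_alpha coefficients of S_y(f) = h_alpha * f^alpha and
the constants are the commonly tabulated ones; verify against the chart
itself before relying on them):

import numpy as np

def avar_from_power_law(tau, h2=0.0, h1=0.0, h0=0.0, hm1=0.0, hm2=0.0, fh=1e4):
    # fh is the measurement bandwidth in Hz; it matters for the two PM types.
    return (3 * fh * h2 / (4 * np.pi**2 * tau**2)                 # white PM
            + (1.038 + 3 * np.log(2 * np.pi * fh * tau)) * h1
              / (4 * np.pi**2 * tau**2)                           # flicker PM
            + h0 / (2 * tau)                                      # white FM
            + 2 * np.log(2) * hm1                                 # flicker FM
            + (2 * np.pi**2 / 3) * hm2 * tau)                     # random-walk FM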


Noise types reaching up to high frequencies compared to the measurement tau0 
will affect the resulting ADEV for sure. The measurement bandwidth even 
affects white phase modulation directly, and flicker phase modulation to 
some degree.


So, a counter is really like an ADC for phase, with a wide-bandwidth input 
and a sub-sampling mechanism (trigger/time-base). Through processing, 
frequency estimates can be provided. Aliasing occurs in the 
sub-sampling. Modern counters can provide estimation filters that go 
from a higher sub-sampling rate to a lower one, which to some degree removes 
aliasing, but not fully. These frequency estimation methods form a kind 
of decimation filter.


Cheers,
Magnus

On 2022-06-20 08:45, Erik Kaashoek via time-nuts wrote:

Bob,
Many thanks for the guidance you provide and the phase noise 
measurement document.
Can you provide feedback on this reasoning: a counter is like an ADC 
but in the frequency domain. So if you measure with 0.01 s tau you 
basically average over 0.01 s, so you can only observe "phase noise" 
(i.e. energy that is not at the exact requested frequency) up to a 
maximum of 50 Hz from the carrier. But as you measure the true frequency 
changes, the sensitivity of this measurement is extremely high. 
Translating the amount of time spent at a certain frequency away from 
the carrier (ADEV?) into a phase noise number in dBc is something I do 
not yet understand.
With a (very good) spectrum analyzer you may be able to come close to 
the carrier, but as there is so much energy in the carrier it will be 
difficult to observe phase noise energy closer than say 1 or 10 kHz 
(at least not with the equipment I can afford), so any phase noise plot 
created using a spectrum analyzer cannot be better than the combined 
phase noise of all the LOs in the spectrum analyzer and will start at say 
1 or 10 kHz.
For the frequencies between 50 Hz and 20 kHz the simplest option is to 
use a second LO, a mixer and a slow PLL (loop BW below 10 Hz) to 
keep the mixer in quadrature, and feed the output of the mixer, after 
low-pass filtering, into a PC soundcard for FFT processing.

Erik.

On 19-6-2022 22:45, Bob kb8tq via time-nuts wrote:

Hi

As HP found out back around 1973 or so, translating ADEV to phase noise
is not possible. This is true, even if you have the ADEV numbers for 
a variety

of Tau values as opposed to some sort of “average” kind of number.

There are a number of things ( like spurs ) that can strongly 
influence a counter based ADEV reading, and have very little impact on a 
phase noise ( or signal to noise ) reading.  There also are ways the 
shape of the phase noise curve can impact ADEV and have very little 
signal to noise impact for a specific signal.


By far the best way to do this is to properly measure phase noise at 
various offsets from the carrier. You can then look at the dBc/Hz 
numbers at each offset. This lets you see what your devices are doing 
to the signal. You can then track down the offending bit or piece and 
fix the problem.

The easiest way I know of to do phase noise is to quadrature lock two 
identical sources into a double balanced mixer. You then put in a simple 
amplifier stage to drive the mixed-down output into a sound card or 
spectrum analyzer. Total cost, if you already have a sound card, should 
be < $50 ( US dollars …) for a DIY version. That assumes you have the 
usual junk box parts and do a point-to-point wired version.

Some example ADEV plots:


[time-nuts] Re: Is SC the most stable cut for lowest phase noise?

2022-06-13 Thread Magnus Danielson via time-nuts

Hi,

So, one of my hydrogen masers has a tiny leak. The outer vacuum does 
not pump down as it should, and it has consumed the ion pump. This has a 
massive effect on the thermal balance and causes heat to leak out much 
faster from the outer heaters to the surrounding wall. This prevents its 
controllers from reaching the temperature, but also prevents the next 
layer from achieving it, and finally, inside that, the actual resonator 
temperature stabilization. That is in itself 5 temperature control loops 
in 3 layers. There are also passive heat shields in it. So, it is crazy 
efficient heat transfer. It is such an efficient mechanism that it's the 
basis for pressure gauges, such as the Pirani gauge.


Now, while it is annoying that it does not work, I can say that it has 
significantly increased my knowledge about temperature stabilization of 
hydrogen masers and the workings of vacuum systems. Not to say I am fully 
skilled, but at least not as ignorant as before, so there is a good 
start. I also learned how to write some Python to log the serial port, 
toss it into an InfluxDB and plot it with Grafana.


So the effect is real, very real.

We are looking into bringing a mass spectrometer over, pumping it down and 
then doing helium leakage tests to see if we can locate the joint that is 
leaking. While there are many possible joints in the vacuum system, my 
testing has helped to narrow it down to three. Of those, one in particular 
could be suspect as it could potentially take more of a hit than the 
others during transport. Yes, I need a turbopump setup and spare parts.


Cheers,
Magnus

On 2022-06-13 03:30, Bob kb8tq via time-nuts wrote:

Hi

Tear into some of your SC cut based OCXO’s. Take a look at the crystal package. 
For
bonus points, open up the crystal package. If you have the gear to test it, 
take a look
at what the gas *is* inside the package. ( Good luck with that :) :) :) )

If you had the gear and the willingness to scrap out OCXO’s you would find that 
a number
of fast warmup OCXO’s have a *tiny* amount of He in the package. Measuring this 
would
be tough ( it’s that small). Go through the thermal modeling and it’s *way* 
more conductive
(thermal wise) than a *perfect* vacuum ……

Bob


On Jun 12, 2022, at 9:18 AM, Ross P via time-nuts  
wrote:

I have seen that manufacturers seal their crystals in a vacuum, maybe air 
interaction affects Q. The point that vacuum inhibits heat flow is something I 
have never considered in ovenized units. My ovenized crystals take about an 
hour to settle. I have some WW2 surplus crystals in non-sealed packages that I 
have not tested... something to do. rp

On Sunday, June 12, 2022 at 07:26:19 AM PDT, Louis Taber via time-nuts 
 wrote:

I have been of the impression for years now that most "better" crystals are
in a vacuum.  And the electrical and mechanical connections to the quartz
itself place as little mechanical load on the crystal as possible.
Thermal conductivity from the oven to the crystal itself would be both
hard to model and hard to speed up.

IR transmission of energy to the crystal also seems problematic considering
the IR transmission of quartz and the IR reflectivity of gold
contact plating.

Is any of this an issue?

   - Louis

On Fri, Jun 10, 2022 at 9:53 PM Bob kb8tq via time-nuts <
time-nuts@lists.febo.com> wrote:


Hi


On Jun 10, 2022, at 2:38 PM, Lux, Jim via time-nuts <

time-nuts@lists.febo.com> wrote:

On 6/10/22 1:57 PM, Dr. David Kirkby wrote:

On Fri, 10 Jun 2022 at 17:39, Lux, Jim via time-nuts <

time-nuts@lists.febo.com> wrote:

 On the subject of rapid warm up. I suppose if you had a need, one
 could
 dump as much power as you need into the heater. Turn on oscillator,
 lights in room dim for a few moments.


Is that not likely to damage a crystal? Different parts of the crystal 
are likely to be at significantly different temperatures at the same time,
putting a lot of stress on the crystal due to a thermal gradient. It's
probably a bit academic, as nobody is going to make an oven that heats up
in fractions of a second, but if one did, I suspect it might not do the
crystal a lot of good. This is only an educated guess - I don't have
anything to back it up.

Oh, it would be disastrous, although quartz is pretty strong, all the

rest of the mounting components might not be.

Indeed, breaking a quartz blank via thermal stress would be very hard to
do.
The “rest of the parts” actually are pretty durable as well. Most of it is
metal and
it is quite able to handle thermal issues.

The big issue in a fast warm up AT turned out to be designing the heater
and the
mount to get the energy to the blank quickly….. If you use a small enough
package
and blank, the amount of power turns out to be surprisingly small.

If you want to go bonkers, you mount the heaters *inside* the crystal
package. This
does indeed create some issues in various areas.

Bob


At the other extreme,  would there be any advantage in actually heating

the crystal very slowly, over 

[time-nuts] Re: Is SC the most stable cut for lowest phase noise?

2022-06-10 Thread Magnus Danielson via time-nuts

Hi,

It is in this context that one should look at the CSAC, as the low power 
consumption can enable it to be powered on much earlier. It's not that 
its performance is stellar, but when you consider power consumption and 
what that can enable, it becomes impressive and another approach to 
solving things.


Cheers,
Magnus

On 2022-06-11 00:00, Bob kb8tq via time-nuts wrote:

Hi

Well …. folks have made AT based OCXO’s that heat up in “seconds” ( as in under
a minute ). Back in the 1980’s they stabilized to < 1x10^-7 at least as fast as 
the
then typical SC based OCXO’s did ….  ( < 6 minutes ). Collins bought quite a 
few of
them over the years.

Bob


On Jun 10, 2022, at 1:25 PM, Dr. David Kirkby via time-nuts 
 wrote:

On Fri, 10 Jun 2022 at 17:39, Lux, Jim via time-nuts <
time-nuts@lists.febo.com> wrote:


On the subject of rapid warm up. I suppose if you had a need, one could
dump as much power as you need into the heater. Turn on oscillator,
lights in room dim for a few moments.


Is that not likely to damage a crystal? Different parts of the crystal are
likely to be at significantly different temperatures at the same time,
putting a lot of stress on the crystal due to a thermal gradient. It's
probably a bit academic, as nobody is going to make an oven that heats up
in fractions of a second, but if one did, I suspect it might not do the
crystal a lot of good. This is only an educated guess - I don't have
anything to back it up.

At the other extreme,  would there be any advantage in actually heating the
crystal very slowly, over the course of an hour/day/week, so the
temperature gradient across the crystal is very small? Of course, if an
oven took ages to reach the correct temperature, it would be inconvenient
for most applications, but for some applications, the advantages might
outweigh the disadvantages. Of course, if one does this, I suspect one
would have to cool the crystal slowly too to prevent a significant thermal
gradient across the crystal.

I know it's a bit different, but I have a 600 mm f4 Nikon camera lens. I
was told that Nikon cools the front element over a period of 6 months to
reduce stresses in the glass.

Dave
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Is SC the most stable cut for lowest phase noise?

2022-06-09 Thread Magnus Danielson via time-nuts

Hi,

The ability to stabilize at that temperature is indeed an issue. While 
the resonator's Q shifts, the loaded Q I would not expect to shift as 
much. Further, the noise of the supporting amplifier also needs to be 
reduced. Naturally, the design could be adjusted to the different conditions.


I've seen people take "the best" oscillator (SC-cut) and go cryogenic, 
and then have severe temperature issues, because they did not understand 
that the benefit of the cut really manifests itself at some specific 
temperature. Luckily, they were able to get that feedback as a benefit 
of presenting to their peers at a conference.


Cryogenics provides an opportunity, but as with any such condition, it 
takes a number of considerations to be able to harvest the benefits.


Cheers,
Magnus

On 2022-06-09 02:12, Bob kb8tq via time-nuts wrote:

Hi

Lower turning point has been done, both with AT’s (back in ~ the 1950’s) and
with SC’s. Neither one showed any significant benefit.

Taking a crystal down to sub 20K sort of temps does ramp up the Q. The gotcha
is that the frequency vs temp curve is so steep that very minor temperature 
variations
utterly trash the stability of the device.

Bob


On Jun 8, 2022, at 1:49 PM, Gerhard Hoffmann via time-nuts 
 wrote:

On 2022-06-08 21:53, Tom Van Baak wrote:

Would it be advantageous, then, to run a high-performance laboratory
oscillator at its lower turnover point? Or at -78 C (CO2) or 77 K
(liquid Nitrogen)?

I have no idea about the crystal itself. Maybe Bernd or the SC (SantaClara)
veterans can help?

When I measured the Q of the recovered SC crystal from that Morion MV89A,
there was not much of a difference in the wanted resonance between room
temperature and +89°C. I think I have published the data here a year ago.
My deep freezer in the basement can do -36°C, but the VNA is so heavy...

Infineon boasted that their SIGET transistors work nicely at a few Kelvin,
so it would probably not fail for semiconductor availability (BFP640 & friends).
OTOH, Ulrich Rohde wrote that the noise figure of the sustaining amplifier would
take a hit under large signal conditions, but I don't know hard numbers.
That would not disappear.

But then, in a Driscoll for example, you can give the 2 transistors enough
current so they run class A and do the little bit of limiting on the output side
with Schottkys. For the amplifier, that is not large signal.

That might be different for an amplifier in Lee-Hajimiri style.
This is Dirac pulse excitation at the peak of the cycle to avoid phase 
modulation,
that is optimized for mixing up 1/f noise.  :-)

Anyway, with a noise figure of the sustaining amplifier of a dB or even a few,
there is no game changer to be expected from cooling.

Whispering gallery sapphire, anyone? I was at the precious stones museum
in Idar-Oberstein here in the 'hood and saw all these huge sapphires.
I left with my head full of ideas...

Cheers, Gerhard
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Is SC the most stable cut for lowest phase noise?

2022-06-08 Thread Magnus Danielson via time-nuts

Hi Gerhard,

On 2022-06-08 20:42, g...@hoffmann-hochfrequenz.de wrote:

On 2022-06-08 13:27, Magnus Danielson via time-nuts wrote:


As far as I remember and know, you can achieve about the same
phase-noise properties as you hit about the same bandwidth from the Q,
and the noise contribution is about the same. So, it boils down to doing
the supporting amplifier well.


But SC can tolerate more power, so you may get more distance to the
thermal noise floor.


Good point. It shifts the drive-level issue compared to AT-cut.

Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Is SC the most stable cut for lowest phase noise?

2022-06-08 Thread Magnus Danielson via time-nuts

Hi,

I agree in general. However, I do see that other work to get good 
results has been done when SC-cut is considered, so rather than the SC-cut 
as a cut being better, it becomes somewhat of a tell-tale that the other 
work has been done properly. I.e., it is meaningless to take the step to 
SC-cut when other defects dominate, so the SC-cut properties only make 
things more expensive than the AT-cut.


As far as I remember and know, you can achieve about the same 
phase-noise properties as you hit about the same bandwidth from the Q, 
and the noise contribution is about the same. So, it boils down to doing 
the supporting amplifier well.


Cheers,
Magnus

On 2022-06-08 06:27, Bob kb8tq via time-nuts wrote:

Hi

Simple answer is: no.

More complete answer is: no

There is a lot more to stability than just the crystal cut. Having this or that 
cut is
in no way a guarantee that the result is “better” than some other cut. Indeed 
there
are more exotic cuts than the SC that improve on this or that. There are also 
mounting
/ fabrication techniques that improve on this or that, regardless of cut.

All that said, the “typical” SC cut based OCXO is likely newer than an AT or BT 
cut
alternative. Various improvements here or there are likely to make it a bit 
better than
the other examples …. ( but not always )

Bob


On Jun 7, 2022, at 6:04 PM, Ross P via time-nuts  
wrote:

Hello. My first post. I have created a 64-bit frequency counter, 15.9 digits 
after converting to floating point.
Oscillator random walk is +/- 0.01 ppm with an SC-cut crystal at 10 Hz 
filtered, and 0.1 ppm with an AT cut. Is it the crystal or the oscillator 
electronics (inside a can) that determines the noise? The oscillators I am 
using are 1 double-oven SC 10 MHz vs 1 single-oven AT-cut 10 MHz in one 
test, and 2 generic crystal oscillators (on a Terasic DE1 Cyclone II FPGA 
board) for the other test. I assume the single-oven oscillator will have 
better stability than commodity oscillators. I am able to chart random walk 
at up to a few thousand samples per second at full double-precision 
resolution, and FFT shows some alien tones in the walk pattern that come 
and go suddenly, I think due to oscillating-mode changes in the oscillator 
itself, mostly showing in the commodity crystals. My question is: is the SC 
quartz the most stable for random walk?
I would like to know if such a frequency counter / alien-tone detector is 
useful enough to be produced for sale? It would require at least 3 separate 
frequencies of reference time standards and > 50K logic elements in the 
FPGA for 3 cross-coupled monitors to cover a range of 0 to 50 MHz.
Quite a risk if no one needs it. 3 separate high-stability reference 
oscillators are expensive. rp

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Identifying GPSDO phase disturbers

2022-06-07 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-06-06 19:05, Erik Kaashoek via time-nuts wrote:
During measurement of a GPSDO there was some concern about very 
short-term phase stability, e.g. for tau between 0.001 and 1 second. 
It proved to be possible to measure the stability for tau larger than 
0.1 s using a frequency counter, but neither the counter (limited 
accuracy for very short tau) nor TimeLab (shortest tau was 0.02 s) 
were able to reach a tau of 0.01 s.
This is where you transition over to phase-noise measurements rather 
than counter measurements.
Looking at the old HP/Agilent application notes a phase detector 
approach was selected.
The output of the GPSDO was sent to the RF port of a mixer. The LO 
port was connected to the output of a VC-TCXO and the IF port output 
was low-pass filtered (to remove the 10 MHz and higher) and added to 
the Vtune of the VC-TCXO. Coarse tuning of the VC-TCXO was done using 
a 10-turn potmeter supplied from a very stable linear supply.
It proved to be possible to set the Vtune with the potmeter such that 
the GPSDO and VC-TCXO frequencies were in phase and the loop locked.
Using a dual-input frequency counter the ratio of the GPSDO and 
VC-TCXO was measured to confirm they were in lock.
An oscilloscope with FFT was also connected to the LPF output to 
monitor short-term phase disturbances. No high-frequency (above 10 Hz) 
components were observed in the FFT, suggesting the initial concern 
was not justified.


This is the tight PLL phase noise measurement technique. Care should be 
taken to calibrate its response, which can be done with an auxiliary RF 
generator that injects offset signals with a known amplitude relation and 
offset to the carrier frequency. The PLL bandwidth will filter the 
response, but you are recommended to look at the output of the mixer for 
the high frequency part, as this will be suppressed by the low-pass filter.


If you use a PI-loop rather than a low-pass filter, it will always lock 
up. Trimming of the oscillator EFC offset only controls how fast it locks. 
Considering that a PI-loop is an op-amp, two resistors and a capacitor, 
it can usually be justified.


Choose your PLL bandwidth to not obstruct your frequency range.


Three main phase disturbances were observed.
1: The GPSDO was in phase lock with the GPS PPS, and every time the 
tuning DAC was updated a change in frequency, resulting in a change in 
angle of the mixer output on the scope, was observed. These changes 
were also visible on the frequency counter.
2: Any temperature change caused strong phase fluctuations. Even 
though the GPSDO uses a TCXO there is still a large temperature 
sensitivity. Thermal isolation (adding some towels) helped to remove 
fast temperature fluctuations.


If you just use a low-pass filter and do not have an integrator, the 
limited DC gain requires there to be a phase offset to compensate for the 
frequency offset change. Using a PI-loop, where you have a full 
integrator, the high gain of the integrator will work to nullify the DC 
offset of the phase detector and move the needed frequency offset 
into the integrator state. This will also eat up any changes that 
occur. Sure, there will be slight deviations, but long-term they will 
be compensated. The phase response will be a high-pass filter of the 
steered oscillator, and the higher the frequency of the PLL, the better 
the suppression of any local effects. Also, as mentioned before, the 
PI-loop locks up on its own. You can aid it with a trimmer only to reduce 
the initial frequency offset, which significantly reduces the lock-in 
time. A high bandwidth on the PLL also has a very good effect on lock-in 
time.


For other purposes, you want a narrower PLL, which puts more 
requirements on the locked oscillator. For the measurement it's mainly 
the wide-bandwidth noise which is the limiting factor.


3: Mechanical shock caused clearly visible phase variations. The 
VC-TCXO acted as a sensitive microphone and, to a lesser extent, also 
the TCXO of the GPSDO. Tapping on the workbench with one finger was 
visible. The net effect of the mechanical shock was about zero phase 
change, which made it difficult to see on the frequency counter with 
0.1 s gate time, but the higher BW of the phase detector allowed it to 
be observed. It is yet unclear how to isolate the TCXO in the GPSDO 
from mechanical shock.


Sure, it's the piezoelectric nature of quartz crystals. A crystal 
converts mechanical stress to voltage and back, so it is expected that 
it would also be sensitive in its oscillator setup, and the tension 
vector is important for its acoustical properties. It will be sensitive 
to gravitational forces as well as shock and vibration.


Providing a shock mount as well as a vibration-reducing mount may be 
needed for some environments.


For environmental effects, IEEE Std 1193 is the relevant document; 
it is going through its balloting process after revision and I 
expect it to be published in the fall.


Cheers,

[time-nuts] Re: Build a 3 hat timestanp counter (hans-ge...@lehnard.de)

2022-06-07 Thread Magnus Danielson via time-nuts

Hi,

It looks as if you have a higher noise floor with the TLV3501. I see two 
effects: both a higher slope (usually, but not always, due to Gaussian 
noise) and then also a higher systematic noise. The latter could be from 
the power supply for instance, but also any form of RF and LF pickup.


On a quick look I fail to spot the actual noise of the TLV3501 in the 
datasheet, and also the bandwidth.


I assume that the trigger point is good, that is, that you trigger on the 
highest slew-rate point of the curve. For a sine that would be at the 
through-zero point, but for a square wave it is actually closer to the 
previous level for actual signals. A DC blocker and keeping the trigger 
close to zero usually suffice. Then it is just a matter of not losing 
amplitude going in, as amplitude converts to slew-rate.


You might benefit from doing spectrum analysis on the data to locate RF 
frequencies and track them down in the analog domain.


Consider using amplification stages to increase slew-rate before hitting 
an input.


I remember once a design where the hardware guys had an ECL "comparator" 
set up so that in one state it gave a solid signal but in the other it 
acted as a linear amplifier of all the noise on the board. While it may 
seem like adding hysteresis would cure it, it will only cure it for the 
non-timing parts (which will be the amplitude part) of the signal, 
whereas the timing part would still be affected. Also, hysteresis shifts 
the trigger point to one which has a somewhat less ideal slew-rate for 
timing purposes. As always, the timing/phase and amplitude parts of the 
signal are orthogonal.


Cheers,
Magnus

On 2022-06-06 15:19, Hans-Georg Lehnard via time-nuts wrote:

Hi,
I tested the TLV3501 with the HP E1740A TIA and there is a visible
difference. First test: an OCXO on the reference and directly on the input.
Second test: the OCXO via the TLV3501 on the input.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com



[time-nuts] Re: Realtime comparing PPS of 3 GPS

2022-05-30 Thread Magnus Danielson via time-nuts

Erik,

The NEO-7M may have a sawtooth correction output; have you checked that 
and applied compensation?


Since the oscillator is not steered and is free-floating, the 
cycle assignment of the PPS may be less than optimal, so just measuring 
that without the compensation can show a wider range of PPS than the 
actual receiver time stability represents.


In particular, check chapter 12 and the TIM-TP message of [1].

[1] 
https://content.u-blox.com/sites/default/files/products/documents/u-blox7-V14_ReceiverDescriptionProtocolSpec_%28GPS.G7-SW-12001%29_Public.pdf


Do notice that the TIM-TP message is documented to be issued before the 
(PPS) pulse it reports on.
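
A hedged sketch of what the compensation amounts to (the qErr field of
TIM-TP is given in picoseconds; the sign convention and the pairing of
each message with the following pulse should be checked against the
protocol description above):

def apply_sawtooth(pps_offsets_s, qerr_ps):
    # Subtract the reported quantization error from each measured PPS offset.
    return [t - q * 1e-12 for t, q in zip(pps_offsets_s, qerr_ps)]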


The variations you report are consistent with what the datasheet reports 
for the pulse assignment, which may not be representative of the 
receiver's performance.


Cheers,
Magnus

On 2022-05-30 13:00, Erik Kaashoek via time-nuts wrote:
Further evaluation showed that the time differences between the 3 GPS 
modules were due to differences in the trigger level setting of the 
timer/counter and differences in the length of the GPS antenna cables.
After removal of the phase drift due to the Rb frequency offset, the 
attached image shows the phase differences of the 3 modules versus a 
Rb reference.
The two ATGM modules are very consistent over a 2.8 hour period. The 
NEO-7M varies wildly, with phase errors above 100 ns, possibly due to 
a somewhat less optimal antenna position.
It seems phase variations over time on the order of 10-20 ns are 
indeed unavoidable, even with a good antenna.

Erik.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Build a 3 hat timestanp counter

2022-05-30 Thread Magnus Danielson via time-nuts

Hi Hans-Georg,

On 2022-05-28 17:04, Hans-Georg Lehnard via time-nuts wrote:

Hello Magnus,

I understood that simply sampling 3 channels fast and averaging does not
solve all the problems ;-).
Sure, I just want to illustrate how various approaches could allow you 
to get the most out of the hardware you have.

I have an HP E1740A time interval analyzer that I might use for my
oscillators.
The HP TIA has 50 ps resolution and can sample 10 MHz directly, but has
only 512K sample memory. For longer recordings I can use the histogram
function. I still need to do some work on my software for that, but that
might be the fastest way to get results.
Does the E1740A have a high-speed time-stamping port? The 5371A and 
5372A have that as an option, so if one could have some suitable 
hardware process those time-stamps, it would be something.


Attached is a picture of my HP10811 and another one that someone made
for me as a comparison to other TIAs.

The third pic shows another meter I have, with a TLV3501 and TDC7200
without averaging, compared to an FA2.

I will keep trying with the TDC7200 and maybe better with the LTC6957
and only one channel.


Yes, do try different approaches. The LTC6957 is a very cool little chip.

Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Optimizing GPSDO for phase stability

2022-05-30 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-05-28 10:29, Erik Kaashoek via time-nuts wrote:

Hi Magnus,

I have insufficient understanding of PLLs to grasp the full meaning of 
your remark on the "shift of the resonance".

OK, so a PI-controlled PLL has two basic characteristics: its resonance 
frequency and its damping factor (reciprocal of Q-factor).


You will get a frequency where there is a positive gain, giving 
jitter peaking, as the phase noise (aka jitter) from the reference port 
gets increased gain over to the output. The I factor of the PI-looped 
PLL is proportional to the square of the resonance frequency. The P factor 
is then proportional to the resonance frequency times the damping factor.


Now, this peak of noise energy will have a tell-tale in the ADEV plot, 
being similar to the wavy pattern you get from a pure sine of the same 
frequency as the mid-point of the jitter peaking. What I was observing 
was how that peak moved in the ADEV plot, and I suggested that a better 
view could be given in the phase-noise domain.


For jitter-peaking, see for instance Wolaver "Phase-locked loop circuit 
design".


Attached are the 3 phase PSD plots from Stable32. Is that what you 
were looking for?

Tick_01 is for Kp=0.1, Tick_004 is for Kp=0.04, etc...
With Kp=0.01 there seems to be a peak at 3e-3 Hz; for the other Kp values 
it seems less evident whether there is a resonance peak in the phase.
Also attached are the Frequency PSD plots (Freq_001, Freq_004, 
etc...) and these show a clear shift of the peak.

Indeed, as I suspected

Does this shift imply the loop is not yet tuned optimal?


I wonder how your model and parameters work.

I tend to label the phase-detector to EFC gain factor as P and the 
phase-detector into the integrator (whose output is added to the EFC) 
gain factor as I.


VD = PhaseDetector output
VI = VI + VD*I
VF = VI + VD*P
EFC = VF

I tend to model it in analog continuous time, but similar enough 
properties occur in digital discrete time.


In such a model, the steering parameters are the resonance frequency f0 
and the damping factor d.


I = KI * f0^2
P = KP * f0 * d

The fixed constants KI and KP can be derived from loop and scaling 
parameters.


Notice that there is no single gain point which will dial in only f0; 
both I and P need appropriate scaling.


To keep jitter peaking reasonable, the damping factor d should be 3 or 
higher. However, for test purposes it can be set lower to make the jitter 
peaking, and thus the resonance frequency, easier to observe.
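
A minimal discrete-time sketch of this loop model (the KI/KP scaling
constants and the 1 s update interval are assumptions, picked so that
f0 and d come out roughly as in the continuous-time approximation):

import numpy as np

dt = 1.0                      # loop update interval, s
f0 = 0.01                     # target resonance frequency, Hz
d = 3.0                       # damping factor

KI = (2 * np.pi) ** 2 * dt    # assumed scaling so I = KI*f0^2 gives ~f0
KP = 4 * np.pi                # assumed scaling so P = KP*f0*d gives ~d
I = KI * f0 ** 2
P = KP * f0 * d

vi = 0.0                      # integrator state
phase = 0.5                   # initial phase error, arbitrary units
freq_offset = 1e-3            # frequency offset the integrator must absorb

for _ in range(3000):
    vd = -phase               # phase detector output; sign gives negative feedback
    vi += vd * I              # VI = VI + VD*I
    efc = vi + vd * P         # VF = VI + VD*P, EFC = VF
    phase += (freq_offset + efc) * dt

print(phase, vi)              # phase error -> ~0, integrator -> ~ -freq_offset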


Cheers,
Magnus


Erik.



[time-nuts] Re: Optimizing GPSDO for phase stability

2022-05-27 Thread Magnus Danielson via time-nuts

Dear Erik,

On 2022-05-27 18:02, Erik Kaashoek via time-nuts wrote:
The GPSDO/Timer/Counter I'm building also is intended to have a 
stabilized PPS output (so with GPS jitter removed).
The output PPS is created by multiplying/dividing the 10MHz of a 
disciplined TCXO up and down to 1 Hz using a PLL and a divide by 2e8. 
No SW or re-timing involved.
The 1 PPS output is phase synchronized with the PPS using a SW control 
loop and thus should be a good basis for experiments that require a 
time pulse that is stable and GPS time correct.
As I have no clue how to specify or evaluate the performance of such a 
PPS output I've done some experiments.
In the first attached graph you can see the ADEV of the GPS PPS (PPS - 
Rb) and the 1 PPS output with three different control parameters (Tick 
- RB)
As I found it difficult to understand what the ADEV plot in practice 
means for the output phase stability I also added the Time Deviation 
plot as I'm assuming this gives information on the phase error versus 
the time scale of observation.


The ADEV plot is the frequency stability plot, so it can be a bit 
challenging to use it for phase stability.


The TDEV plot is the phase stability plot, so it is more useful for that 
purpose.


There is a technical difference between these beyond the difference of 
frequency vs phase stability, and that is that ADEV is the frequency 
stability for a Pi-counter, whereas TDEV is the phase stability for a 
Lambda-counter, while MDEV is the frequency stability for the 
Lambda-counter. There is no standardized phase stability for a 
Pi-counter. For a nit-picker like me it is significant, but for others 
it may be merely a little confusing.


Lastly, a plot is added showing the phase difference. All plots were 
created using the linear residue, as the Rb used as reference is a bit 
out of tune.

Also the TIM files are attached.
The "PPS - RB" and "Tick - RB Kp=0.04" were measured simultaneously 
and should show the extent to which the GPS PPS is actually drifting 
in phase versus the Rb and how this impacts the output phase of the 
stabilized output PPS.
My conclusion is that a higher than expected Kp of 0.1 gives the most 
stable output phase performance, whereas the best frequency performance 
is realized with Kp = 0.04.
I welcome feedback on the interpretation of these measurements and the 
application of output phase stabilization.


Since Kp is proportional to the damping factor, this is a completely 
expected result for me. As the damping factor increases, the jitter 
peaking decreases, and thus the positive gain at the loop resonance 
frequency.


What I seem to notice is that the resonance seems to move with Kp 
shifts, rather than having a peak of fixed frequency/tau. Doing 
phase-noise plots of the data in Stable32 should be a way to see if this 
is an actual shift or just an apparent shift.


The details of the PI-loop control may be relevant to correct for if 
f_0 shifts as a consequence of changing Kp rather than changing Ki.


The trouble one faces with a PLL is that optimum phase stability and 
optimum frequency stability come at different PLL bandwidth settings. 
Keeping the damping factor high to keep jitter peaking low is however a 
common optimization.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Build a 3 hat timestanp counter

2022-05-25 Thread Magnus Danielson via time-nuts
e the data to arbitrary 
narrow bandwidth, which is similar to the tricks I've described, but you 
avoid how noise eats you before you can filter it, since you look at 
more classical signal-to-noise rather than the more dubious slew-rate. 
However, it can be good to know that similar effects can be achieved 
using a bit of knowledge.


Also, at some point the white and flicker noise limitations of the 
measurement will cease to be a limitation and one can focus more on the 
stability between one's sources, which is what you want to do.


Hope it has been readable and illustrative.

Now, get some measurements done so we see where you are, and then we can 
see your progress as you approach the various methods that I described 
and see how they pay off, or not.


Cheers,
Magnus

On 2022-05-25 16:37, Hans-Georg Lehnard via time-nuts wrote:

Thanks for your answer and the many suggestions on what can be improved.

The first picture shows my concept for a prototype.

The input shaper consists of a 4:1 transformer with differential output
and a TLV3501 comparator. The digital part with divider and start/stop
logic fits for all 3 channels into an XCR3064XL CPLD. Maybe it is better
to separate the channels later. The MCU is an STM32H743 and runs at 450
MHz (PLL), clocked from the reference frequency. A TDC7200 is used as
the TDC. The measurement with SPI readout takes about 5 µs, so I decided
on a 10 µs (100 kHz) sample time.

The second picture shows the measuring timing inside CPLD.

The TDC7200 runs in mode 1 and supplies only the fine time. The MCU runs
a 10 MHz (reference) counter with 3 capture channels as coarse time. So
I only have to read the fine timer and the calibration register from the
TDC.
The TDC cannot measure from 0, so a reference cycle is added (t = x
+ 100 ns).

For the averaging I had thought of a linear regression.

Hans-Georg


[time-nuts] Re: Build a 3 hat timestanp counter

2022-05-24 Thread Magnus Danielson via time-nuts

Hi,

The first limit you run into is the 1/tau slope of the measurement 
setup. This is often claimed to be white phase modulation noise, but it 
is also the effect of the single-shot resolution of the counter, and the 
actual slope level depends on the interaction of these two.


So, you might want to try a simple approach first, just to get started. 
Nothing wrong with that. You will end up wanting to get better, so I will 
try to provide a few guiding comments for things to think of and improve.


So, in general, try to use as high a frequency as you can, so that as you 
average down, your sqrt(f/f0) gets as high as possible; the benefit on 
the noise will be a factor 1/sqrt(f/f0), where f is the oscillator 
frequency and f0 is the rate after averaging.


As you do ADEV, the f0 frequency will control your bandwidth.

The filter effect of the averaging as you reduce and sub-sample will 
help to some degree with anti-aliasing, but rather than doing averaging, 
consider doing proper anti-aliasing filtering, as the effect of aliasing 
on these measures is established and improvements in the upcoming 
IEEE Std 1139 reflect this. In short, aliasing folds the white noise and 
straight averaging tends to be a poor suppressor of aliasing noise.


For white phase modulation (WPM) the expected ADEV response depends 
linearly on the bandwidth of the measurement filter. It's often 
modelled as a brick-wall filter, which it never is. For classical 
counters, the input bandwidth is high, then the sampling rate forms a 
Nyquist sampling frequency, but wide-band noise just aliases around that. 
An anti-aliasing filter helps to reduce or even remove the effect, and 
then the bandwidth of the anti-aliasing filter replaces the physical 
channel bandwidth. If the anti-aliasing is done digitally after the 
counter front-end, you have already got some aliasing wrapping, but 
keeping that rate as high as possible keeps the number of overlays low, 
and then reducing it filter-wise will get you a better result.
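
A sketch of that difference using scipy (the record is just stand-in
white noise; the point is the filter, not the data):

import numpy as np
from scipy import signal

q = 10                                                    # decimation factor (illustrative)
x = np.random.default_rng(0).standard_normal(1_000_000)   # stand-in phase record

# Plain block averaging: a sinc-shaped filter with high sidelobes, so
# out-of-band white noise folds back into the decimated record.
x_avg = x[: len(x) // q * q].reshape(-1, q).mean(axis=1)

# FIR low-pass followed by sub-sampling: a much better anti-alias filter.
x_dec = signal.decimate(x, q, ftype='fir', zero_phase=True)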


For aliasing effects, see Claudio Calosso of INRIM. Great guy.

This is where the sub-sampling filter approach is nice, since a filter 
followed by sub-sampling removes the need to produce all the outputs of 
the original sample rate, so filter processing can operate on the 
sub-sampled rate.


As your measures go to higher taus in ADEV, the significant amount of 
the ADEV power will be well within the pass-band of the filter, so just 
making sure you have a flat top avoids surprises. For shorter taus, the 
anti-aliasing filter will be dominant, so assume the first decade of tau 
to be waste.


I say this to guide you to get the best result with the proposed setup.

The classical three-cornered hat calculation has a limitation in that it 
becomes limited by noise and can sometimes produce non-stable results. 
The Groslambert analysis is more robust, since it is essentially the 
same as doing the cross-correlation measurement. The key is that you 
average down before squaring, whereas the three-cornered hat squares 
early and is unable to suppress the noise of the other sources with as 
good quality. For Groslambert analysis, see François Vernotte's series 
of papers and presentations. François is another great guy. I spent some 
time discussing the Groslambert analysis with Demetrios the other week. 
I think I need to also say that Demetrios is a great guy too, not to 
single him out, but he really is.
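
For reference, the classical three-cornered hat separation at a single
tau is just the pairwise (A)variances recombined, which is where the
instability shows up (an estimate can come out negative when the noise
dominates):

def three_cornered_hat(var_ab, var_ac, var_bc):
    # Pairwise variances in, per-source variance estimates out.
    var_a = 0.5 * (var_ab + var_ac - var_bc)
    var_b = 0.5 * (var_ab + var_bc - var_ac)
    var_c = 0.5 * (var_ac + var_bc - var_ab)
    return var_a, var_b, var_c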


There is another trick up the sleeve, though. If you do the modified 
Allan deviation (MDEV) processing, it actually integrates the sqrt() 
trick with the measurement, achieving a 1/tau^1.5 slope for the WPM. This 
will push it down quicker if you let it use a high enough rate of samples, 
so that you hit the flicker phase modulation slope (1/tau), the white 
frequency modulation slope (1/tau^0.5) and finally flicker frequency 
modulation (flat) quicker. The reference levels will be different from 
ADEV for the various noise types, but that you can look up in tables and 
correct for.
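
A compact sketch of the two estimators, so the slope difference can be
checked on simulated data (x is a numpy array of phase in seconds
sampled at interval tau0; overlapping estimates, not optimized):

import numpy as np

def adev(x, tau0, m):
    # Overlapping Allan deviation at tau = m*tau0 from phase data.
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d2 ** 2) / (2 * (m * tau0) ** 2))

def mdev(x, tau0, m):
    # Modified Allan deviation: average the phase over m samples first,
    # which is what buys the 1/tau^1.5 slope for white PM.
    c = np.cumsum(np.concatenate(([0.0], x)))
    xbar = (c[m:] - c[:-m]) / m
    d2 = xbar[2 * m:] - 2 * xbar[m:-m] + xbar[:-2 * m]
    return np.sqrt(np.mean(d2 ** 2) / (2 * (m * tau0) ** 2))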


Cheers,
Magnus

On 2022-05-24 18:37, Hans-Georg Lehnard via time-nuts wrote:

Hi,

my name is Hans-Georg Lehnard from Germany and I'm new here. I worked as
a developer for hardware, then for software, and lastly as a system
developer. Now I'm retired and I can play with hardware again ;-).

I have:

4 x 20 MHz Rubidium (TEMEX MCFRS-1),
2 x 10MHz HP10811-60111
1 x Samsung UCCM GPSDO
1 x FA2 counter.
lots of OCXO

and am trying to build a house standard that I can trust and use to
qualify my oscillators.
Reproducible measurements with the FA2 in 10 s precision mode I trust to
10E-11.
The short-term stability of the HP oscillators cannot be measured with
it, or both are defective.
The FA2 is not suitable for short-term measurements of 0.01 ... 1 s.

For measurements against a reference frequency, the stability of the
reference must be 5 to 10 times better than the measured frequency, and
I don't have that. Now there are 2 options: DMTD mixer or 3-hat
measurements.
Because I'm a digital person I chose the 

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-15 Thread Magnus Danielson via time-nuts

Hi Matthias,

On 2022-05-14 12:30, Matthias Welwarsky wrote:

On Samstag, 14. Mai 2022 18:43:13 CEST Carsten Andrich wrote:

However, even for the 2^16 samples used by the CCRMA snippet, the filter
slope rolls off too quickly. I've attached its frequency response. It
exhibits a little wobbly 1/f power slope over 3 orders of magnitude, but
it's essentially flat over the remaining two orders of mag. The used IIR
filter is too short to affect the lower frequencies.

Ah. That explains why the ADEV "degrades" for longer tau. It bends "down". For
very low frequencies, i.e. long tau in ADEV terms, the filter is invisible,
i.e. it passes on white noise. That makes it indeed unusable, for my purposes.


I agree. Good that we come to the same conclusion.

I just have not had time to run the simulation and check, and I would check 
both spectrum and ADEV, but there are other tests to do, such as the 
autocorrelation function. A more unusual one is the increase of the 
deviation of an ensemble of simulations, and thus the spread it can take. 
It depends clearly on the noise-type and length of the sequence.


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-15 Thread Magnus Danielson via time-nuts

Hi Carsten,

On 2022-05-14 11:38, Carsten Andrich wrote:

Hi Magnus,

On 14.05.22 08:59, Magnus Danielson via time-nuts wrote:
Do note that the model of no correlation is not a correct model of 
reality. There are several effects which make "white noise" slightly 
correlated, even if for most practical uses this is a very small 
correlation. Not that it significantly changes your conclusions, but 
you should remember that the model only goes so far. To avoid aliasing, 
you need an anti-aliasing filter that causes correlation between 
samples. Also, the noise has inherent bandwidth limitations and 
further, thermal noise is convergent because of the power distribution 
of thermal noise as established by Max Planck, and is really the 
existence of photons. The physics of it cannot be fully ignored as 
one goes into the math field, but rather, one should be aware that 
the simplified models may fool you in the mathematical exercise.


Thank you for that insight. Duly noted. I'll opt to ignore the 
residual correlation. As was pointed out here before, the 5 component 
power law noise model is an oversimplification of oscillators, so the 
remaining error due to residual correlation is hopefully negligible 
compared to the general model error.


Indeed. My comment is more to point out details which become relevant 
for those attempting to do math exercises, and to prevent unnecessary insanity.


Yes, I keep reminding people that the 5 component power law noise model is just 
that, only a model, and it does not really respect the "Leeson effect" 
(actually older) of resonator folding of noise, which becomes a 
systematic connection between noise of different slopes.





Here you skipped a few steps compared to your other derivation. You 
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

Given the variance of X[k] and E{X[k]} = 0 \forall k, it follows that

X[k] = Var(Re{X[k]})^0.5 * N(0, 1) + 1j * Var(Im{X[k]})^0.5 * N(0, 1)

because the scaling of a standard Gaussian N(0, 1) distribution is the 
square root of its variance.

Reasonable. I just wanted it to be complete in the thread.



This is a result of using real-only values in the complex Fourier 
transform. It creates mirror images. Greenhall uses one method to 
circumvent the issue.
Can't quite follow on that one. What do you mean by "mirror images"? 
Do you mean that my formula for X[k] is missing the complex conjugates 
for k = N/2+1 ... N-1? Used with a regular, complex IFFT the 
previously posted formula for X[k] would obviously generate complex 
output, which is wrong. I missed that one, because my implementation 
uses a complex-to-real IFFT, which has the complex conjugate implied. 
However, for the regular, complex (I)FFT given by my derivation, the 
correct formula for X[k] should be the following:


       { N^0.5 * \sigma * N(0, 1)                      , k = 0, N/2
X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)) , k = 1 ... N/2 - 1
       { conj(X[N-k])                                  , k = N/2 + 1 ... N - 1


If you process a real-value-only sample list with the complex FFT, as you 
did, you will have mirror Fourier frequencies of opposite sign. This 
comes about because e^(i*2*pi*f*t)+e^(-i*2*pi*f*t) is purely real. Rather 
than using the optimization that removes the unused half of the inputs 
(imaginary parts) and the unused half of the outputs (negative frequencies) 
with an N/2-size transform, you can use the N-size transform more 
straightforwardly and accept the losses for the sake of clarity. This is 
why Greenhall only uses the upper half of the frequencies.
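
A minimal Octave sketch of that idea, assuming N is even: the constants 
follow the X[k] statistics derived above, and the conjugate mirror makes 
the IFFT real up to numerical residue:

N = 2^14;             % number of time-domain samples (assumed even)
sigma = 1;            % desired time-domain standard deviation
X = zeros(1, N);
X(1)       = sqrt(N)   * sigma * randn();   % k = 0 (DC), purely real
X(N/2 + 1) = sqrt(N)   * sigma * randn();   % k = N/2 (Nyquist), purely real
X(2:N/2)   = sqrt(N/2) * sigma * (randn(1, N/2-1) + 1j*randn(1, N/2-1));
X(N:-1:N/2 + 2) = conj(X(2:N/2));           % mirror image: X[k] = conj(X[N-k])
x = real(ifft(X));    % ifft includes the 1/N factor; x is ~ N(0, sigma^2)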


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-15 Thread Magnus Danielson via time-nuts

Hi Matthias,

On 2022-05-14 08:58, Matthias Welwarsky wrote:

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

Notice that 1/f is a power-spectrum density; a straight filter will give you
1/f^2 in the power spectrum, just like an integration slope.

One approach to a flicker filter is an IIR-like filter with tap weights of
1/sqrt(n+1), where n is the tap index, fed with normal noise. You need to
"flush out" the state before you use it so you have a long history to help
the shaping. For a 1024 sample series, I do 2048 samples and only use the
last 1024. Efficient? No. Quick-and-dirty? Yes.

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002   2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);% Apply 1/F roll-off to PSD
x = x(nT60+1:end);% Skip transient response

It looks quite simple and there is no explanation where the filter
coefficients come from, but I checked the PSD and it looks quite reasonable.


This is a variant of the James "Jim" Barnes filter that uses lead-lag 
filters to approximate a 1/f slope. You achieve it within a certain range 
of frequency. The first article with this is available as a technical 
note from NBS 1965 (available at the NIST T archive - check for Barnes and 
Allan), but there is a more modern PTTI article by Barnes and Greenhall 
(also in the NIST archive) that uses a more flexible approach where the 
spread of pole/zero pairs is parametric rather than fixed. The later 
paper is important as it also contains the code to initialize the state 
of the filter as if it had been running forever, so the state has 
stabilized. A particularly interesting thing in that article is the plot 
of the filter property aligned to the 1/f slope; it illustrates very 
well the useful range of the produced filter. This plot is achieved by 
scaling the amplitude response with sqrt(f).


I recommend using the Barnes & Greenhall variant rather than what you 
found. It needs to be adapted to the simulation at hand, so the fixed 
setup you have will fit only some needs. One needs to have the filter 
covering the full frequency range where the flicker noise being used is 
dominant or near dominant. As one uses flicker shaping for both flicker 
phase modulation as well as flicker frequency modulation, there are two 
different frequency ranges where they are dominant or near dominant.
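
For concreteness, a rough Octave sketch of a Barnes-style lead-lag cascade; 
the section count, frequency range and pole/zero spacing are illustrative 
guesses, not the values from the Barnes or Barnes & Greenhall papers, and 
the state is not pre-initialized the way Greenhall describes:

fs = 1;                                   % sample rate (normalized)
f_lo = 1e-4;  f_hi = 1e-1;                % range where the 1/f slope should hold
n_sec = 6;                                % number of lead-lag sections
fp = logspace(log10(f_lo), log10(f_hi), n_sec);   % pole frequencies
fz = fp * sqrt(fp(2)/fp(1));              % zeros half a log-step above the poles
b = 1;  a = 1;
for k = 1:n_sec
  b = conv(b, [1 -exp(-2*pi*fz(k)/fs)]);  % discrete zero of section k
  a = conv(a, [1 -exp(-2*pi*fp(k)/fs)]);  % discrete pole of section k
end
y = filter(b, a, randn(1, 2^16));         % approximately 1/f-shaped noise
% The PSD of y follows roughly 1/f between f_lo and f_hi, with ripple set by n_sec.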


There are many other approaches; see Attila's splendid review the other day.

Let me also say that I keep getting this question even from the most 
senior researchers, as recently as last week: "What is your favorite 
flicker noise simulation model?". They keep acknowledging my basic message 
that "flicker noise simulation is hard". No model fits all applications, 
so no one solution solves it all. One needs to validate that it fits the 
application at hand.


Cheers,
Magnus



The ADEV of a synthesized oscillator, using the above generator to generate 1/
f FM noise is interesting: it's an almost completely flat curve that moves
"sideways" until the drift becomes dominant.

Regards,
Matthias


___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-14 Thread Magnus Danielson via time-nuts

Hi Carsten,

On 2022-05-13 09:25, Carsten Andrich wrote:

On 11.05.22 08:15, Carsten Andrich wrote:


Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]
I had the same question when I first saw this. Unfortunately I don't 
have a good answer, besides that forward + inverse ensures that the noise 
looks like it is supposed to, while I'm not 100% sure whether there is an 
easy way to generate time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.


Disclaimer: I'm an electrical engineer, not a mathematician, so 
someone trained in the latter craft should verify my train of thought. 
Feedback is much appreciated.


Turns out the derivation of the DFT of time-domain white noise is 
straightforward. The DFT formula


X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of 
scaled time-domain samples. Now let x[n] be N normally distributed 
samples with zero mean and variance \sigma^2, thus each X[k] is a sum 
of scaled i.i.d. random variables. According to the central limit 
theorem, the sum of these random variables is normally distributed.


Do note that the model of no correlation is not a correct model of 
reality. There are several effects which make "white noise" slightly 
correlated, even if for most practical uses this is a very small 
correlation. Not that it significantly changes your conclusions, but you 
should remember that the model only goes so far. To avoid aliasing, you 
need an anti-aliasing filter that causes correlation between samples. 
Also, the noise has inherent bandwidth limitations and further, thermal 
noise is convergent because of the power distribution of thermal noise 
as established by Max Planck, and is really the existence of photons. 
The physics of it cannot be fully ignored as one goes into the math 
field, but rather, one should be aware that the simplified models may 
fool you in the mathematical exercise.


To ascertain the variance of X[k], rely on the linearity of variance 
[1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2ab*Cov(X,Y), and 
the fact that the covariance of uncorrelated variables is zero, so, 
separating into real and imaginary components, one gets:


Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N})}^2
  = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
  = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N})}^2
  = ...
  = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and 
k=N/2, where cos(…) is N and sin(…) is 0, resulting in X[k] with real 
DC and Nyquist components as is to be expected for a real x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the 
following variance:


    { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
    { N/2 * \sigma^2, else


    { 0 , k = 0
Var(Im{X[k]}) = { 0 , k = N/2
    { N/2 * \sigma^2, else

Therefore, a normally distributed time domain-sequence x[n] ~ N(0, 
\sigma^2) with N samples has the following DFT (note: N is the number 
of samples and N(0, 1) is a normally distributed random variable with 
mean 0 and variance 1):


   { N^0.5 * \sigma *  N(0, 1)    , k = 0
X[k] = { N^0.5 * \sigma *  N(0, 1)    , k = N/2
   { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else

Here you skipped a few steps compared to your other derivation. You 
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).
Greenhall has the same results, with two noteworthy differences [2]. 
First, normalization with the sample count occurs after the IFFT. 
Second, his FFT is of size 2N, resulting in a N^0.5 factor between his 
results and the above. Finally, Greenhall discards half (minus one) of 
the samples returned by the IFFT to realize linear convolution instead 
of circular convolution, fundamentally implementing a single iteration 
of overlap-save fast convolution [3]. If I didn't miss anything 
skimming over the bruiteur source code, it seems to skip that very 
step and therefore generates periodic output [4].


This is a result of using real-only values in the complex Fourier 
transform. It creates mirror images. Greenhall uses one method to 
circumvent the issue.


Cheers,
Magnus



Best regards,
Carsten

[1] https://en.wikipedia.org/wiki/Variance#Basic_properties
[2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
[4] 
https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to 

[time-nuts] Re: Time-nuts at WSTS conference

2022-05-13 Thread Magnus Danielson via time-nuts

Tom,

We set new standards all the time!

(Misspelling was unintentional)

Cheers,
Magnus

On 2022-05-12 16:29, Tom Holmes wrote:

"scotsh whiskey"

Was this intentional to indicate that more than a little bit of imbibing was 
involved in arriving at the new standards?


Tom Holmes, N8ZM

-Original Message-
From: Lux, Jim 
Sent: Thursday, May 12, 2022 5:54 PM
To: time-nuts@lists.febo.com
Subject: [time-nuts] Re: Time-nuts at WSTS conference

On 5/12/22 2:48 PM, Gary Woods wrote:

On Thu, 12 May 2022 12:01:48 -0600, you wrote:


I've found that corridor discussions have included redefinition of
SI-second, quantum computers, optical clocks, security on PTP clocks,
time-scale algorithms, uncertainty of different measures. Oh, and I just
won a bottle of scotsh whiskey.

Excellent, Magnus!  I remember a tech seminar years ago, where the
instructor avowed that 80% of the education took place during the
coffee breaks!


The best international standards start as a discussion the hallway or
bar, with scribbles on a paper napkin.

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Time-nuts at WSTS conference

2022-05-12 Thread Magnus Danielson via time-nuts

Fellow time-nuts,

This week I spent my time at the WSTS conference in Denver. While I am here 
in my commercial capacity, there are a lot of professionals lurking on the 
list who show up to shake hands and say hi. Turns out that many 
lurk out here and read and learn a lot of useful stuff.


While they may work on and focus on some particular things, you learn about a 
broader spectrum of issues just by lurking here.


One thing I want to say to those that may not be as vocal here, asking 
questions here is good, as many others learn from the questions and answers.


What I hear from these lurkers is also how much they appreciate reading the 
discussions and the answers from the big dragons and magicians sweeping 
in with their wisdom to share. It is good to keep this in mind as one 
provides answers; I try to, and sometimes achieve it.


I've found that corridor discussions have included redefinition of 
SI-second, quantum computers, optical clocks, security on PTP clocks, 
time-scale algorithms, uncertainty of different measures. Oh, and I just 
won a bottle of scotsh whiskey.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com


[time-nuts] Re: Can the ADEV of a GPSDO output ever be lower than the minimum of the ADEV of the internal oscilator and the ADEV of the GPS PPS?

2022-05-11 Thread Magnus Danielson via time-nuts

Hi,

On 2022-05-09 15:05, Attila Kinali wrote:

On Fri, 29 Apr 2022 16:53:58 +0200
André Balsa  wrote:


Mathematically, no, a GPSDO cannot have a lower uncertainty (ADEV) than the
minimum observable uncertainty (ADEV) of the combined oscillator
(disciplined clock) and PPS (disciplining clock) from the GPS receiver.
Unless there is some magic trick to remove the uncertainty in a clock that
I am not aware of. ;)

This is not quite true.

Keep in mind that the *DEV metrics all implicitly assume that the noise
is Gauss distributed and has a PSD of the form of 1/f^a, a ∊ [0,4]
and a high-frequency cut-off. The moment you leave this relatively
restrictive class of functions you have to validate that the *DEV metric
you are using is still producing what you think it does. One common function
for which we have done this are quadratic functions (with noise), also
known as "linear frequency drift". But we have done so for a scant few
other functions.

If you have read my mail a few days ago, then you might have noticed that
few oscillators we have actually fit into this class. And the "worse" they
are, the less they fit. An OCXO can have sudden phase and frequency jumps.
Not to mention its temperature dependency which will lead to some phase
function which looks noise like, even slightly self-similar (another
characteristic of 1/f^a noise), but actually isn't. There is some periodic
behaviour in it, at different repetition rates, together with linear,
quadratic and cubic components. Go to a TCXO or even a simple XO and
things get even worse.

I can't go into the mathematical details as I don't have nearly enough
knowledge about the nitty gritty stuff of *DEV. But we have people here
who know way more than I do, who could chip in.


OK, so I'll give it a shot. This works better on a whiteboard.

ADEV and friends are not the most direct approach when discussing locked 
oscillators; you need to understand it in terms of phase noise and then 
you can map that to ADEV and friends.


As you build a PLL, you will low-pass filter the reference with the loop 
bandwidth, and you will high-pass filter the noise of the steered 
oscillator. A PLL has the unfortunate property of jitter peaking, so you 
will have gain in excess of 1 at the PLL resonance frequency. This 
jitter peaking will occur for both the reference noise and the 
oscillator noise, and they will then add up together. You can approximate 
what this will do, but ADEV and friends will see the energy added 
from both reference and oscillator, as well as the colouring of the 
jitter peaking. The disturbance of the peak at the PLL natural/resonance 
frequency will, for the ADEV, be quite similar to adding a sine of 
the same frequency as the PLL natural frequency, thus causing ADEV 
and friends to see an additional peak on top of the underlying 
noise slopes.


Trouble is, at the cut-over frequency you will get a slight peaking 
however you go, and your ADEV will suffer accordingly. What you can do 
is to keep the damping factor high, and thus jitter peaking low. That helps.
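
A small Octave sketch of that trade-off, assuming an ideal second-order 
PI PLL with natural frequency f0 and damping d (values illustrative): the 
reference sees the low-pass G, the steered oscillator the complementary 
high-pass 1-G, and |G| rises above 1 near f0, less so as d increases:

f0 = 1;  d = 3;                       % loop natural frequency (Hz) and damping
wn = 2*pi*f0;
f  = logspace(-2, 2, 400);            % offset frequencies to evaluate
s  = 2j*pi*f;
G  = (2*d*wn*s + wn^2) ./ (s.^2 + 2*d*wn*s + wn^2);  % reference -> output
H  = 1 - G;                           % steered oscillator -> output
peak_dB = 20*log10(max(abs(G)));      % jitter peaking of the low-pass path
% Compare d = 0.7 against d = 3: higher damping gives a lower peak_dB.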


You never "win" this game, you only limit your losses.



As for the case at hand. There has been a plot of the TCXO's free running
behaviour earlier. In which one could see that the TCXO had some quite
distinct frequency steps, presumably from the temperature compensation.
Between these the phase was pretty stable. Which means the ADEV gets
deteriorated by the frequency steps and doesn't see these "flat" portions
in between, not to mention it breaks the assumption which ADEV is
built upon. Now, if the control loop hits a sweet spot where the loop
compensates these frequency steps quickly but without degrading the
"flat" portions inbetween, then the ADEV of the combined TCXO + PPS + control 
loop
could indeed be lower than the individual components. But without a closer
look at what happens to the phase, it is hard to tell whether this is a
genuine effect of the control loop, an artifact of the simulation or simply
a bug somewhere.
A step in phase or frequency will "kill" your ADEV plot. You learn to 
set things up to avoid outliers when using TimeLab. The raised floor 
from such a step will take a long time to average out.


Attila Kinali


PS: Please, for the sake of all that is ticking, whenever you post an *DEV plot,
add error bars. *DEV are statistical figures. And like all statistical figures
they have an uncertainty. Without the error bars it is hard to judge whether the
values are statistically significant or just some randomly thrown dice because
of not enough data.


+1: This is in line with the IEEE Std 1139 recommendations. Actually, 
there are more things to include: bandwidth, number of samples, 
any removal of linear drift, etc.


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-le...@lists.febo.com

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-05 Thread Magnus Danielson via time-nuts

Hi,

Could not agree more on this point.

It's even to the point we have two standards for it, the IEEE Std 1139 
for the basic measures and noises, and then IEEE Std 1193 for the 
"environmentals", or rather, the rest.


Both are being revised; 1139 just went out for the re-balloting process 
after receiving balloting comments, and 1193 is just about to be approved 
to be sent to balloting. The work has been led by the NIST T department 
chief Elizabeth Donley, who also got an award at EFTF-IFCS 2022 for her 
contribution to the field, and these standards in particular. Very well 
deserved, I might add.


While simple models help to test and analyze specific things in 
isolation, as you bring things together, going towards real-life 
application, reality is always somewhat different from the models. It 
takes some time to learn just how much you can pick up from models and 
what to use them for when adapting to the expected real-life 
situation.


The two IEEE standards have tons of references, and it is well worth 
following those. The IEEE UFFC also has quite a bit of educational 
reference material at hand which can also be worthwhile reading.


Cheers,
Magnus

On 2022-05-03 23:23, Bob kb8tq wrote:

Hi

The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models. Coping with these issues is at least
as important as working with the stuff that is covered by any of the standard statistical
models ….

Bob


On May 3, 2022, at 3:57 AM, Matthias Welwarsky  wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise an how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);
end

osc = synth_osc(40, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com To unsubscribe, go to and follow
the instructions there.

___
time-nuts mailing list -- 

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-03 Thread Magnus Danielson via time-nuts

Dear Matthias,

On 2022-05-03 10:57, Matthias Welwarsky wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.
Environmental effects tend to be recognizable by their periodic 
behavior, i.e. the period of the day and the period of the heating/AC. Real 
oscillator data tends to be quite relevant as you can simulate what it 
would mean to lock that oscillator up. TvB made a simulator on those 
grounds. Good exercise.


Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.


Happy to get you up to speed on that.

One particular name to check out articles for is Charles "Chuck" 
Greenhall, JPL.


For early work, also look at James "Jim" Barnes, NBS (later renamed NIST).

Almost anything these two fine gentlemen have written on the topic is 
recommended reading, actually.



I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);


Notice that 1/f is a power-spectrum density; a straight filter will give you 
1/f^2 in the power spectrum, just like an integration slope.


One approach to a flicker filter is an IIR-like filter with tap weights of 
1/sqrt(n+1), where n is the tap index, fed with normal noise. You need to 
"flush out" the state before you use it so you have a long history to help 
the shaping. For a 1024 sample series, I do 2048 samples and only use the 
last 1024. Efficient? No. Quick-and-dirty? Yes.
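
A literal Octave sketch of that quick-and-dirty recipe, using the lengths 
mentioned above:

N = 1024;
h = 1 ./ sqrt(1:2*N);            % tap weights 1/sqrt(n+1), n = 0 .. 2N-1
w = randn(1, 2*N);               % normally distributed input noise
y = filter(h, 1, w);             % long convolution approximating the 1/f shaping
y = y(N+1:end);                  % "flush out" the state: keep only the last N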


The pole/zero type of filters of Barnes lets you synthesize a 1/f slope 
by balancing the properties. How dense, and thus how small the ripples 
you get, you decide. Greenhall made the point of recording the state, and 
provides BASIC code that calculates the state rather than running an infinite 
sequence to let the initial state converge to the 1/f state.


Greenhall published an article illustrating a whole range of methods to 
do it. He wrote the simulation code to be used in JPL for their clock 
development.


Flicker noise is indeed picky.

Cheers,
Magnus


# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);
end

osc = synth_osc(40, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com To unsubscribe, go 

[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-02 Thread Magnus Danielson via time-nuts

Hi Jim,

Thanks for the corrections. Was way too tired to get the uniform and 
normal distributions right.


rand() is then, by classical UNIX tradition, generated as an unsigned 
integer divided by the suitable (32nd) power of two, so the maximum 
value will never appear, and this is why a small bias is introduced, 
since 0 can be reached but not 1.


In practice the bias is small, but care is taken nevertheless.

Cheers,
Magnus

On 2022-05-03 03:43, Lux, Jim wrote:

On 5/2/22 6:09 PM, Magnus Danielson via time-nuts wrote:

Matthias,

On 2022-05-02 17:12, Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.






Another thing. I think the rand function you use will give you a 
normal distribution rather than one being Gaussian or at least 
pseudo-Gaussian.


rand() gives uniform distribution from [0,1). (Matlab's doc says 
(0,1), but I've seen zero, but never seen 1.) What you want is 
randn(), which gives a zero mean, unity variance Gaussian distribution.


https://www.mathworks.com/help/matlab/ref/randn.html


A very quick-and-dirty trick to get pseudo-Gaussian noise is to take 
12 normal distribution random numbers, subtract them pair-wise and 
then add the six pairs. 


That would be for uniform distribution. A time-honored approach from 
the IBM Scientific Subroutine Package.



The subtraction removes any bias. The 12 samples will create a 
normalized deviation of 1.0, but the output is hard-limited to within 
+/-6 (a peak-to-peak range of 12), so it may not be relevant for all 
noise simulations. Another approach is that of Box-Muller, which creates 
a much better shape, but comes at some cost in basic processing. 

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe 
send an email to time-nuts-le...@lists.febo.com

To unsubscribe, go to and follow the instructions there.

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Simple simulation model for an OCXO?

2022-05-02 Thread Magnus Danielson via time-nuts

Matthias,

On 2022-05-02 17:12, Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the following
matlab function that "somewhat" does what I think is reasonable, but I would
like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker noise,
with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.


What you have shown is 2 out of the standard 5 noise-types. The Leeson 
model describes how 4 noise-types can be generated, but in addition to 
those the Random Walk Frequency Modulation is included. For the standard, see 
IEEE Std 1139. For the Leeson model, see David Leeson's article from the special 
issue of Feb 1966, which I believe you can find on the IEEE UFFC site. The 
Wikipedia Allan Deviation article may also be of some guidance.


Noise-types:

White Phase Modulation - your thermal noise on phase

Flicker Phase Modulation - your 1/f flicker noise on phase

Because we have oscillators that integrate inside the resonator's 
bandwidth, we also get their integrated equivalents


White Frequency Modulation - your thermal noise on frequency - forms a 
Random Walk Phase Modulation


Flicker Frequency Modulation - Your 1/f flicker noise on frequency

Random Walk Frequency Modulation - Observed on disturbed oscillators, a 
random walk in frequency, mostly analyzed for finding rare faults.


You have only modelled the first two. This may or may not be relevant 
depending on the bandwidth of your control loop and the dominant noise 
there. The phase-noise graph will illustrate 
well where the powers of the various noise-types intercept and thus 
provide a range over frequency for which one or the other is dominant. 
Your control loop bandwidth will high-pass filter this for your locked 
oscillator, and low-pass filter it for your reference oscillator/source. 
The end result will be a composite of both. The Q-value / damping factor 
will control just how much jitter peaking occurs at the cut-over 
frequency, and the basic recommendation is to keep it well damped at all 
times.


The ADEV variant of this is similar.

Now, to simulate flicker you have a basic problem: whatever processing 
you do will end up needing the full length of your simulation data to 
maintain the slope. You can naturally cheat and do a much reduced setup 
that only provides the 1/f slope of the PSD at and about the area where it 
dominates. Notice that this can occur both for the FPM and FFM cases. 
Also notice that if we respect the model, these should be completely 
independent.


Simulation-wise, you can turn WPM into WFM by integration, and FPM into 
FFM by integration. Similarly the WFM becomes RWFM through a second 
integration. Just source each of these independently to respect the model.
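
A compact Octave sketch of that bookkeeping, with an independent source for 
each noise type; the 1e-1x scale factors are placeholders and the flicker 
shaping uses the quick-and-dirty 1/sqrt(n+1) filter mentioned elsewhere in 
the thread:

n  = 2^14;                            % number of 1 Hz samples (illustrative)
h  = 1 ./ sqrt(1:2*n);                % crude 1/f shaping taps
fp = filter(h, 1, randn(1, 2*n));  fp = fp(n+1:end);  % flicker source for PM
ff = filter(h, 1, randn(1, 2*n));  ff = ff(n+1:end);  % independent flicker for FM
x = 1e-11 * randn(1, n) ...           % WPM: white noise directly on phase
  + 1e-11 * cumsum(randn(1, n)) ...   % WFM: integrates to a random walk in phase
  + 1e-13 * cumsum(cumsum(randn(1, n))) ...  % RWFM: white noise integrated twice
  + 1e-11 * fp ...                    % FPM: flicker noise on phase
  + 1e-12 * cumsum(ff);               % FFM: integrated flicker noise
% x is a composite phase record in seconds, one sample per second.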


The Leeson model for an oscillator does not have these fully 
independent, but for most uses, simulate them as fully independent, and you 
will not make more of a fool of yourself than anyone else usually does, 
present company included.


As you see, flicker and random walk are not the same thing. Often "random 
walk" implies Random Walk Frequency Modulation.


Another thing. I think the rand function you use will give you a normal 
distribution rather than one being Gaussian or at least pseudo-Gaussian. 
A very quick-and-dirty trick to get pseudo-Gaussian noise is to take 12 
normal distribution random numbers, subtract them pair-wise and then add 
the six pairs. The subtraction removes any bias. The 12 samples will 
create a normalized deviation of 1.0, but the output is hard-limited to 
within +/-6 (a peak-to-peak range of 12), so it may not be relevant for 
all noise simulations. Another approach is that of Box-Muller, which 
creates a much better shape, but comes at some cost in basic processing.
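
A short Octave sketch of that trick; as Jim points out in his reply, the 
inputs are uniform rand() samples, and the pairwise subtract-and-add is the 
sign pattern used below:

u = rand(12, 1e5);                          % 12 uniform samples per output value
g = sum(u(1:6,:), 1) - sum(u(7:12,:), 1);   % subtract pair-wise, add the pairs
% g has zero mean and variance ~1 (12 * 1/12), but is hard-limited to +/-6,
% so the tails beyond 6 sigma of a true Gaussian are missing.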



As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far off
of reality to be useful? What could be changed or improved?


As I've stated above, I think it misses key noise-types. I really wonder 
if the flicker noise model you have is 

[time-nuts] Re: What time difference to expect from two clocks using internal GPS receivers?

2022-05-01 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-04-30 12:32, Erik Kaashoek wrote:
The PPS jitter of a cheap Chinese GPS module was measured at about +/- 
10 ns.

But the phase of the PPS compared to a Rb varied substantial more.
To verify if this was possibly due to ionospheric or atmospheric 
conditions the time difference between the PPS of two identical 
modules using two identical rooftop antenna was measured. Both only 
used the GPS constellation.
This showed differences of up to 100 ns. Switching to GPS+GLN did not 
make a visible difference.
It was tried to set both GPS modules into fixed position mode but the 
reported position still kept moving a bit (within 3 m) and the fixed 
mode did not have a visible impact on the time difference variations.
Is a time difference of up to 100 ns to be expected when using two GPS 
receivers or is this difference possibly due to bad application or 
performance of the cheap Chinese GPS modules?


Well, there are many sources of bias both in hardware and firmware.

As mentioned already, delays of antennas and cables remain 
uncompensated. Seeing a difference of 100 ns is equivalent to 20 m of 
cable. If you know you have about the same length of cable, then that is 
not your culprit.


There is a peculiar effect in that the experienced delay in the receiver 
becomes different depending on the PN code used, so per satellite. This 
should be lower. At the same time, considering that a single chip of the 
PN code is just shorter than 1 us, maybe there is a narrow-band 
effect there. Still, that should not give such a huge difference, so it is 
really curious.


There could be some peculiar issue on how state is set up as it locks 
up. Try restarting one of them a couple of times and see if the offset 
varies or is consistent.


Try swapping antenna cables to see if the offset follows the receiver or 
antenna/coax.


Try using another receiver in parallel.

Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] A little report from EFTF-IFCS 2022

2022-04-30 Thread Magnus Danielson via time-nuts

Fellow time-nuts,

I thought I would drop a few notes of high-light from the EFTF-IFCS 2022 
conference in Paris that ended yesterday.


It has been an intense week for those of us attending. Turns out there 
were more fellow time-nuts there than had made themselves known when 
asked beforehand on the list. We have a few lurkers out there. Which is 
nice anyway.


Several of the tutorials were really good, such as the "Allan deviation 
and friends" one by François Vernotte, "Satellite Time-Transfer" by Pascale 
Defraigne and also the one on optical frequency transfer.


From NIST there was a presentation on a 30 GHz divide-by-3 
regenerative divider. That turned out to be a really interesting talk as 
Archita Hati told the story of the many steps and iterations that went 
into it and showed many phase noise plots along the way. From the 
commercial test-board (for a 0.5-40 GHz divider) to the many steps of 
making improvements. She ended up building new amplifiers that brought 4 
amplifiers in parallel to lower the flicker noise. A flip of the initial 
mixer (that takes the 30 GHz input and the 10 GHz output and produces 20 GHz) 
allowed the limited bandwidth of the LO port to remove the sum frequency 
(40 GHz) rather than taking a loss in a separate filter. Really 
impressive performance. It failed to start despite the Barkhausen conditions 
being there, so the digital divider solution was used to inject a 
boot-strap 10 GHz for a short while and then remove it, and it would go 
into its low-noise mode.


Another interesting presentation was from JHU APL on an early development 
of their Ultra Stable Oscillator, where they wanted to reduce the size 
and weight by removing thermal isolation material. What they did was 
they put temperature sensors around the crystal and then added the basic 
ovenizing drive. They then used a DDS to correct the output and trained 
a shallow AI system to provide the correction steering. It showed some 
impressive early results.


Both of these two talks I was fortunate enough to have in the session I 
chaired.


One session and also a poster which was interesting was on the 
redefinition of the SI second. Essentially three options have been 
considered: 1) Select one species 2) Use an ensemble of species or 3) 
Alter some natural constant. It has now been concluded that the latter is 
impractical, as there is no single constant that fits the bill. So, 
therefore, either selecting one species or doing an ensemble of them 
remains. I think the ensemble approach is best in the long run, as it 
allows a richness of measurements to improve the precision as we go.


There were talks about MEMS that showed that it has started to mature 
more and more. Many lessons learned.


There were talks about miniaturized atomic clocks, and things progress 
nicely there. A low-noise version of the CSAC, for instance.


One poster by Pascale Defraigne was very interesting, as it was on 
improved measurement of BeiDou results in R2CGGTTS, and it turns out that 
there are offsets between the new and old system. Turns out that there are 
also offsets between different GPS satellites that need to be 
calibrated. Calibration of receivers becomes harder and harder as you 
aim towards low offsets.


While for sure not complete, these are some of the things that at least 
I got an impression of. I am sure others have their own impressions to share.


Lots of friendly faces, some new faces.

Next year's conference will be in Japan. We were told that the 
Shinkansen train from the Tokyo airport takes 2 hours and 8 minutes. I will 
have to protect that part of the calendar already now.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Underlying math of cross-correlation PN Test Sets

2022-04-25 Thread Magnus Danielson via time-nuts

Hi,

On 2022-04-25 01:57, Joseph Gwinn wrote:

I'm digging into the basic math of how cross-correlation phase noise
test sets (like the 53100A) work, so I'm looking for good articles on
the complex-exponentials math.

One item of interest is the effects of imperfections in the power
dividers that make a pair of perfect copies of the reference signal
in a residual-PN setup.  I gather from Walls 1992 that the limit on
cancellation of signal-source PN is the imperfect isolation of the
power divider.

I'm more interested in clarity of exposition and the physics than in
solid walls of pure math.


I recommend you to look at the NIST T archive on the topic. Also their 
AM and PM calibration document.


The field has developed over the years, covering a range of aspects, 
including a 2014 article describing the noise-cancelling errors and a 2016 
proposal for how to address them.


The 2014 paper (David Howe, Craig Nelson and Archita Hati) is a good 
paper as it sums things up. The 2016 paper provides new hints in addition.


Quick explanation: you cross-correlate with two independent channels, so 
only the DUT noise correlates. Averaging the complex outputs of the 
FFT-based cross-correlation removes the channel noises. Your 
result will be on the real axis and your imaginary axis should be a 
small vector of noise. When the real axis goes negative, as it does when 
reaching the thermal noise level, do not believe the measurement.
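
In code form, a bare-bones Octave sketch of that averaging; the simulated 
channel data, segment length and number of averages are illustrative, and a 
real instrument adds calibrated scaling, windowing and decimation:

M = 200;  N = 4096;                    % number of averages and FFT length
Sxy = zeros(1, N);
for m = 1:M
  dut = randn(1, N);                   % common DUT noise seen by both channels
  a = dut + 3*randn(1, N);             % channel 1 adds its own, larger noise
  b = dut + 3*randn(1, N);             % channel 2 adds independent noise
  Sxy = Sxy + conj(fft(a)) .* fft(b);  % accumulate the complex cross-spectrum
end
Sxy = Sxy / M;
% real(Sxy) converges to the DUT spectrum; the uncorrelated channel noise and
% imag(Sxy) average down roughly as 1/sqrt(M).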


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: GPSDO Control loop autotuning

2022-04-22 Thread Magnus Danielson via time-nuts

Hi Matthias,

On 2022-04-21 23:20, Matthias Welwarsky wrote:

On Dienstag, 12. April 2022 00:52:42 CEST Magnus Danielson via time-nuts
wrote:


A trivial state-space Kalman would have phase and frequency. Assuming
you can estimate the phase and frequency noise of both the incoming
signal and the steered oscillator, it's a trivial exercise. It's
recommended to befriend oneself with Kalman filters as a student
exercise. It can do fairly well in various applications.

I'm wondering, can the frequency really be part of the state? It is not a
property we can measure independently, we can only derive it from the TIC
measurements. There is the EFC value of course, but that doesn't directly
correspond with the output frequency...


You confuse state with observability. It is true that you make the 
observation in phase, but the system matrix includes that the estimated 
frequency updates the phase


phase = phase + frequency * delta_t

frequency = frequency

So matrix becomes

[1 delta_t]

[0 1]

for a [phase frequency]^T state vector in and out.

With such a system matrix, the Kalman filter, once it stabilizes, degenerates 
to a PI-loop system, which is what was concluded again in a mail just 
recently; they get the same performance.


It works.
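
A bare-bones Octave sketch of that two-state filter; the Q, R values and 
the stand-in measurement vector are placeholders you would derive from the 
reference and oscillator noise:

dt = 1;                                % update interval, s
F  = [1 dt; 0 1];                      % system matrix: phase += frequency*dt
H  = [1 0];                            % we only observe phase
Q  = diag([1e-20 1e-24]);              % process noise covariance (placeholder)
R  = 1e-17;                            % measurement noise variance (placeholder)
z  = 1e-9 * randn(1, 1000);            % stand-in phase error readings, s
x  = [0; 0];  P = eye(2);              % state [phase; frequency] and covariance
for k = 1:length(z)
  x = F*x;            P = F*P*F' + Q;  % predict
  K = P*H' / (H*P*H' + R);             % Kalman gain
  x = x + K*(z(k) - H*x);              % update with the new phase reading
  P = (eye(2) - K*H) * P;
end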

Note, you should make models for the noise of the reference and for the 
local oscillator. This is used to update the estimated uncertainty of 
the states, but this is where flicker noise-types do not work well, 
and all you can expect is approximations. This was also concluded again 
in a separate email.


Kalman is a nice tool, but does not bite very well on flicker noise. You 
could potentially do better by attempting to filter the noise to be 
whiter, but it does not really work well. Kalman is not well adapted to 
this problem, but forms a nice alternative to PI when it comes to 
avoiding heuristics for stepping the PI loop bandwidth.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: GPSDO Control loop autotuning

2022-04-12 Thread Magnus Danielson via time-nuts

Tobias,

On 2022-04-09 18:13, Pluess, Tobias via time-nuts wrote:

Hi all,

My reply to this topic is a bit late because I have been busy with other
topics in the meantime.

To Erik Kaashoek,
You mentioned my prefilter. You are absolutely right, I looked again at my
prefilter code and decided it was garbage. I have removed the prefilter.
Thanks to your hint I also found another mistake in my PI controller code,
which led to wrong integration times being used. I corrected that and the
controller is now even more stable. I made some tests and the DAC output
changes less than before, so I guess the stability is even better!

To all others.
I discussed the topic of improving my GPSDO control loop with a colleague
off-list and he pointed out that, on this list, there was a while ago a
post about Kalman filters.
I totally forgot about this, I have looked at them at university a couple
years ago, but never used it and therefore forgot most of it. But I think
it would be interesting to try to implement the Kalman filter and compare
its performance with the PI controller I currently have.

I guess the first step before thinking about any Kalman filters is to find
the state space model of the system. I am familiar with the state space,
but one topic I was always a bit struggling with is the question which
variables I should take into account for my state. In the case of the
GPSDO, all I can observe is the phase difference between the locally
generated 1PPS and the 1PPS that comes from the GPS module. On the other
hand, I am not only interested in the phase, but also in the frequency, or,
probably, in the frequency error, so I think my state vector needs to use
the phase difference (in seconds) and the frequency error (?). So my
attempt for a GPSDO state space model is:

phi[k+1] = phi[k] - T * Delta_f[k]
Delta_f[k+1] = Delta_f[k] + K_VCO * u

y[k] = phi[k]


I see you already got some help, but hopefully this adds some different 
angles.


A trivial state-space Kalman would have phase and frequency. Assuming 
you can estimate the phase and frequency noise of both the incoming 
signal and the steered oscillator, it's a trivial exercise. It's 
recommended to befriend oneself with Kalman filters as a student 
exercise. It can do fairly well in various applications.


This simple system ends up being a PI-loop which is self-steered in 
bandwidth. Care needs to be taken to ensure that the damping constant is 
right. The longer one measures, the narrower bandwidth the filter has as 
it weighs in the next sample, and the next.


However, we know from other sources that we not only have white phase 
and white frequency noise in sources, both reference and steered 
oscillator. We also have flicker noise. That won't fit very well into 
the model world of a normal Kalman filter. If we know the tau of a 
filter, we can approximate the values from TDEV and ADEV plots, but the 
Kalman filter will self-tune its time-constant to optimize its 
knowledge of the state, thus sweeping over the tau plots. Now, we can 
make approximations of where we expect them to end up and take our 
suffering in a somewhat suboptimal path to that balance point.


So, you can go the Kalman path, or choose pseudo-Kalman and have 
separate heuristics that do coarser tuning of bandwidth on a straight 
PI loop. Pick and choose what fits you best. It can vary from 
application to application.


Keep damping high or else the Q will cause an ADEV bump.

Increasing the degree to include linear drift has the benefit of 
tracking frequency drift. Regardless of whether this is done as a PII^2 or 
3-state Kalman, care should be taken to ensure stability, as it is not 
guaranteed that the root loci stay on the stable path. Worth doing though.


I seem to recall a few papers in the NIST T archive as well as in PTTI 
archive on Kalman filter models for oscillator locking. The HP 
"SmartClock" paper should also be a good read.


A downside to Kalman filters is the numerical precision issue, as core 
parameters are rescaled. There is a certain amount of precision loss 
there. I see the same problem occurring in more or less all least-squares 
and linear regression approaches I've seen. As I have been working on a 
different approach to least-squares estimation, it turns out that it is 
beneficial also on the numerical precision issue if you want, since the 
accumulation part can be made loss-less (by tossing bits at it) and only 
the estimation phase has issues, but since the estimation equations do 
not alter the core accumulation it does not pollute that state. Turns out 
there are multiple ways of doing the same thing, and the classical 
school-book approach is not always the wisest approach.


Some time back there was some reference to papers relating to verifying 
performance by monitoring the noise out of the phase detector. It is 
applicable to a number of lock-up PLL/FLLs. I found it an interesting 
read. I've also looked at that state stabilize many hours in the lab. Be 

[time-nuts] Re: Catching range of GPSDO

2022-03-03 Thread Magnus Danielson via time-nuts

Erik,

On 2022-03-03 21:36, Erik Kaashoek wrote:

The GPSDO I'm building started with frequency locking but now I'm adding
phase locking so the time stamping counter can be on GPS time.
A first version works with a PI controller setting the vc-tcxo Vtune DAC
based on the phase difference of the 10 MHz with the PPS phase. Due to
tolerances the tcxo frequency range is big and is set by a 16 bit DAC where
1 bit is about 2e-11 frequency change.
Once the DAC value is close to the correct frequency the loop catches
nicely but if the setting is far off catching takes a long time.
A possible solution is to use the frequency error to set the DAC close to
the optimal frequency for catching.
Speed of catching is important as the design is intended to only be
switched on when needed.
Does anyone have pointers to info on how to do quick catching in such a
control loop?


The track-in time scales proportionally with the square of the frequency 
error and inversely with the cube of the loop bandwidth. The formula becomes 
garbled in ascii-math and I am too tired to convey it in readable form.
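
For reference, the commonly quoted textbook approximation (found e.g. in 
Gardner's Phaselock Techniques) for the pull-in time of a high-gain 
second-order loop is

T_pull ≈ (delta_w)^2 / (2 * d * wn^3)

where delta_w is the initial angular frequency error, wn = 2*pi*f0 is the 
loop natural frequency and d is the damping factor; this is where the 
square and inverse-cube scaling comes from.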


The lesson is: Make bandwidth high for quick lock-in so frequency 
acquisition is achieved, then switch over to a different bandwidth of 
the loop from that point.


The second lesson is: Pre-setting the frequency from previous learning 
can speed the process up.


Notice that the PI-loop will lock if the frequency error is within the 
capture range, which in effect means that the oscillator can be steered 
to align.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Validating GPSDO control loop with simulation and measurement. Is this amount of difference between measurement and simulation to be expected?

2022-03-02 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-02-26 17:36, Erik Kaashoek wrote:

Magnus, Bob,
Thanks for the replies.
@Magnus, I need to study your input a bit longer. What I currently 
fail to understand is : Should there be a low pass filter with a 1/100 
s corner frequency in series with the Vtune when the ADEV of the 
VC-TCXO and the PPS intersect at 100 s? Why not have a Kp of 0.01 
instead of an additional low pass filter? I understand how the low 
pass filter can help reduce noise from the DAC if needed or is there 
another reason?


The reason I did so was that the modulation, which always occurs as you 
step back and forth to interpolate, was intentionally moved to as high a 
frequency as possible, to make the variations as easy to filter out as 
possible. This way the filter that removes most of that noise can have a 
bandwidth higher than the loop bandwidth, which is needed for loop 
stability. When the two come near each other, this third pole may cause 
unwanted pole-splitting, moving a pole pair into the forbidden/unstable 
region (the right half of the Laplace plane, or outside the unit circle 
in the z-transform domain).
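
A minimal sketch of checking this, assuming the loop is approximated as 
an analog type-2 PI loop with one extra low-pass pole at wf; the 
closed-loop characteristic polynomial is then s^3/wf + s^2 + K*P*s + K*I, 
and the roots show when the extra pole starts to hurt:

import numpy as np

def closed_loop_poles(P, I, wf, K=1.0):
    # Poles of a type-2 PI PLL with an extra low-pass pole at wf (rad/s);
    # K lumps the phase-detector and oscillator gains. Letting wf go to
    # infinity recovers the ordinary second-order loop.
    return np.roots([1.0 / wf, 1.0, K * P, K * I])

# Loop with f0 = 0.01 Hz and damping 3: P = 2*d*wn, I = wn^2 (one common form).
wn = 2 * np.pi * 0.01
P, I = 2 * 3 * wn, wn**2
for filt_hz in (1.0, 0.02):   # filter corner well above, then near, the loop
    print(filt_hz, np.round(closed_loop_poles(P, I, 2 * np.pi * filt_hz), 4))
# With the corner at 1 Hz all three poles stay real (overdamped, as designed);
# with it at 0.02 Hz a lightly damped complex pair appears - the pole-splitting.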


Hope it makes sense.

Cheers,
Magnus

@Bob, the reason for not converging may be some infrequent 
disturbances of the control loop caused by something not yet 
identified. Need to do some investigation into the cause.

Erik.


[time-nuts] Re: Validating GPSDO control loop with simulation and measurement. Is this amount of difference between measurement and simulation to be expected?

2022-02-25 Thread Magnus Danielson via time-nuts

Hi Erik,

A few comments and suggestions on the way.

Please note that for a PLL, a PID is equivalent to a PI regulator: the P 
and D factors of the PID sum to form the P of the PI regulator, while the 
I term transfers right over. I have seen no benefit in real 
implementations of adding the D factor; it did not add any magic to 
convergence.


Also note that P is proportional to f0*d and I is proportional to f0^2, 
where f0 is the PLL resonance/cut-over frequency and d is the damping 
factor. This is trivial to derive and I usually do it on a whiteboard or 
a single sheet of A4 paper. I strongly advise you to use f0 and d as your 
steering parameters rather than P and I; it will make the work easier. 
Also, I recommend keeping d at 3 or higher to avoid jitter-peaking issues.


The actual formulas contain a little more detail, with a few constants 
here and there, but those can be lumped into a single aggregate constant 
for each of P and I, and knowing the basic relationship allows for quick 
test and tweak. I've taught this to several colleagues and they ended up 
testing their way to production values without too much effort; they 
simply tweaked until they were satisfied with all the test readings.


The gpsdo simulator tool is very fun and educational to learn from. I've 
written several similar simulation tools and they have proven very 
useful. I recommend doing tests to validate that the model and the actual 
implementation match up.


One technique for deciding f0 (or its reciprocal, the time constant) is 
to plot the phase noise of the reference and of the controlled oscillator 
and simply choose f0 at the intercept point of the two graphs. A similar 
approach has been used on ADEV plots, in which case it is called the 
Allan intercept point. The Allan intercept is maybe a little less well 
founded than the phase-noise variant, but try both and see where you end 
up; it will be a learning experience.
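
A minimal sketch of picking that intercept from two ADEV traces sampled 
on a common tau grid; the two traces below are hypothetical placeholders 
for your measured data:

import numpy as np

def intercept_tau(taus, adev_ref, adev_osc):
    # Return the tau where the reference trace crosses the oscillator
    # trace, interpolated in log-log space. Below this tau the oscillator
    # is the quieter of the two, above it the reference is, so the loop
    # time constant is chosen around this point.
    d = np.log10(np.asarray(adev_ref)) - np.log10(np.asarray(adev_osc))
    i = np.where(np.diff(np.sign(d)))[0][0]    # first sign change
    t = np.log10(taus)
    frac = d[i] / (d[i] - d[i + 1])            # linear interpolation in logs
    return 10 ** (t[i] + frac * (t[i + 1] - t[i]))

# Hypothetical traces: GPS PPS improving as 1/tau, TCXO floor around 1e-10.
taus = np.logspace(0, 4, 41)
print(intercept_tau(taus, 1e-8 / taus, 1e-10 * np.ones_like(taus)))  # ~100 s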


For a PI PLL, you can usually test the overshoot response to a phase or 
frequency step. The amount of overshoot is directly related to the actual 
damping factor, so that is easy to validate. Further, if you force the 
loop into a low damping factor you get a high Q, so if you stress it with 
a phase or frequency step, the period of the resulting ringing will 
disclose the actual f0 of the loop. Using these two measurements, you can 
fairly trivially establish the actual constants of this equation system:


P = Kp * f0 * d
I = Ki * f0^2

Anyway, knowing the actual f0 and d for your implementation and being 
able to control those through the above equations will be of immense 
help as you then aim to optimize performance.


Notice that the scaling factor includes the EFC input sensitivity, so if 
you have different oscillators, break that out as the Ko factor and set 
it properly for each oscillator. It is actually fairly simple to measure 
the sensitivity: intentionally step the EFC, see how much the frequency 
changes, and you can trivially calculate the steering factor Ko. With a 
little bit of work, one can make this self-calibrating for the oscillator.
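
A minimal sketch tying these together; the forms below assume the common 
P*Ko*Kd = 2*d*wn and I*Ko*Kd = wn^2 convention, and Kd (phase-detector 
gain) and the EFC step used for calibration are placeholders for whatever 
your hardware provides, with the units kept consistent on your side:

import math

def measure_Ko(efc_step, freq_before, freq_after):
    # Steering sensitivity from an intentional EFC step:
    # fractional frequency change per DAC count (or per volt).
    return (freq_after - freq_before) / efc_step

def loop_gains(f0_hz, damping, Ko, Kd=1.0):
    # Convert the design targets (f0, d) into P and I gains, dividing out
    # the measured oscillator sensitivity Ko and phase-detector gain Kd.
    wn = 2 * math.pi * f0_hz
    return 2 * damping * wn / (Ko * Kd), wn**2 / (Ko * Kd)

Ko = measure_Ko(1000, 0.0, 2e-8)   # e.g. 1000 LSB gave 2e-8, i.e. 2e-11/LSB
print(loop_gains(0.01, 3.0, Ko))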


As for your question: for your simulation and your implementation to 
match up, you need to validate that your model of the implementation is 
correct and has known scale factors. Spending time making sure things 
work as you expect saves you tons of work later; you can then simulate 
freely and only verify in the actual implementation (which tends to be 
slow) a few times.


Cheers,
Magnus

On 2022-02-25 16:18, Erik Kaashoek wrote:
Inspired by the gpsdo simulator written by Tom Van Baak  I am trying 
to use simulation to validate the PID loop parameters for a cheap and 
simple GPS referenced timer/counter I'm building.
The following one-hour measurements were done using a Rbd reference as 
input to the timer/counter (see attached Timelab.gif plot)

1: Open loop (P=0, I=0) VC-TCXO versus the Rbd reference (green trace)
2: PPS of the internal GPS versus the Rbd reference. (PPS trace)
3: Rbd reference versus the closed loop (P=0.02, I=0) VC-TCXO (dark 
blue trace)
4: Rbd reference versus the closed loop (P=0.01, I=0,5) VC-TCXO 
(red trace)

All .tim files also attached.
The performance of the TCXO was remarkably good for a sub-$1 device; 
great care went into a stable, low-noise Vtune. The noisy supply did not 
have a big influence.
The open-loop TCXO frequencies were divided by 1e7 (SW PICdiv) and, 
together with the raw PPS frequencies, loaded into an Excel spreadsheet 
(see attached).
An identical PID controller (P=0.01, I=0.0005) was implemented in Excel 
that took as input the frequency difference between the PPS and the TCXO 
and calculated a correction for the TCXO frequency. The corrected TCXO 
frequencies were exported from Excel (PID output) and imported into 
Timelab (PID, light blue trace).
As can be seen in the Timelab plot, the controller implemented in Excel 
did a better job than the

[time-nuts] Re: Types of noise (was: Phase Station 53100A Questions)

2022-02-21 Thread Magnus Danielson via time-nuts

Hi Joe,

On 2022-02-21 20:52, Joseph Gwinn wrote:

time-nuts Digest, Vol 214, Issue 22
On Sun, 20 Feb 2022 01:13:50 +0100, Magnus Danielson wrote:

Hi,

On 2022-02-20 00:08, Joseph Gwinn wrote:

On Sat, 19 Feb 2022 01:12:05 +0100, Magnus Danielson wrote:

Dear Joe,

On 2022-02-13 23:31, Joseph Gwinn wrote:

On Sun, 13 Feb 2022 03:30:30 -0500, time-nuts-requ...@lists.febo.com
wrote:
time-nuts Digest, Vol 214, Issue 15

Attila,



Amplitude and phase noise are looking at noise from two different
perspective. One is how large the variation of the peak of a sine
wave is, the other is how much the zero crossing varies in time.
Note that all natural noise sources will be both amplitude and
phase noise.

Hmm.  One case I'm interested in is where the path attenuation varies
according to a random telegraph waveform, due to for instance a loose
connector or cracked center conductor rattling under heavy
vibration.  In this, the electrical length does not change.  While
the source of the carrier whose PN is being measured will have some
mixture of AM and PM characteristic of that source, the residual
(added) PN will be characteristic of the transit damage encountered
between source and PN test set.  So wouldn't this randomly varying
attenuation yield mostly residual AM PN and little residual PM PN?

Actually, measuring vibration impact like this has a long tradition and
is indeed possible.

I thought as much.  Can you cite any articles on this?

Well, in the audio industry, wow and flutter measurements have been
taken a step further to use such a recording and then analyze the
side-band and use that to identify which wheel etc. I have not seen an
article about it, but I've been told about it being used in the late
1980s to diagnose professional quarter inch tape machines.

I had not heard of this approach, but it certainly makes sense.  No
gear teeth in such systems, but an eccentric rubber wheel would leave
a signature for sure. I would guess that a modulation time series
would show the wobble quite clearly, allowing its period to be read
directly.


The experience was very clearly that it gave a very direct indication of 
what needed to be fixed. The story was that they had some 50+ machines to 
work through in a few weeks at the customer. The Japanese engineer 
brought some extra tools with him, and they used these to diagnose, fix 
and then verify all the machines; it was a very efficient approach.


The take-away is that it makes sense: it is essentially doing the phase 
measurement and spectrum analysis the way we do, it was able to detect 
small variations, and being trained on them it pin-points the issue.



You will also find that FFT spectrum analyzers have similarly been used
for quite some time for mechanical analysis to diagnose large machinery
and pin-point which cog-wheel or whatever is having an issue, often with
vibration sensors. I know that HP featured it in a few of their catalogs
etc.

I have read many articles in IEEE Instrumentation and Measurement
Society publications on diagnosing geared machinery for (impending)
bearing and/or gear failure by looking for tones whose frequencies
are in the same rational-number angular-speed ratios as the various
parts of that gear train.

Which makes sense.
  

Regardless, it's fundamentally the same principle involved.

Yes.

  

It may or may not be an effective method, though. As suggested by
others, TDR may very well be a more effective method for locating
impedance errors. It could be that they add good information for
different errors.

TDR units may have some difficulties with an unstable contact under
vibration.  When one has determined that there is a problem somewhere
using the 53100A is when the TDR equipment comes out, if the root
cause isn't obvious on inspection.

Impedance variations, when they exist, will be measurable, and smoothed
out by the TDR for sure, but the location of the bump should be clear
enough when detectable.

Also, recall that erroneous connectors can create passive
intermodulation distortion (PIM), which is readily measured using the
two-tone method.

The signal levels are pretty low for PIM to be important.  And the
connectors are generally gold plated.  A cracked copper conductor
could in theory do PIM, but I have not seen this.  Even if it is
happening, so long as the AM component jumps, it will serve to warn
the experimenter.

I was suggesting it as an active diagnosis approach if your signals do
not provide suitable PIM

[time-nuts] Re: Testing GPSDOs

2022-02-20 Thread Magnus Danielson via time-nuts

Hi Hal,

On 2022-02-20 09:41, Hal Murray wrote:

kb...@n1k.org said:

Can you build this or that from scratch? Sure you can. Being sure that it
does indeed work correctly .. not so easy.

Let's change the discussion a bit.  Assuming I have a GPSDO, home built or
eBay, how can I test it with a limited budget?

There is another possible tangle in here.  What if I don't have a good antenna 
location?  Is there a simple way to measure/plot the goodness of an antenna?  
How does the goodness of a GPSDO depend on the goodness of its antenna?

Well, it is usually hard to measure the absolute offset errors yourself, 
but you can get started with stability.


So, let's assume you have a rubidium clock, which is usually available 
for reasonable money for most hobbyists.


The phase and frequency will be off naturally, and we can assume that 
there is a linear drift in there too. There will be both environmental 
effects and random noise effects, but let's assume that we can live with 
those limits to start with.


In this context, we can assume that the GPSDO is nominally tied in 
frequency and drift to UTC over the GPS link, assuming we do not have a 
major design-flaw which would become apparent anyway, so we can then 
assign the detected frequency error and linear drift terms to the 
rubidium. Similarly, we assume the phase error comes from there. By doing 
our measurement, then removing the quadratic trend from it, we end up 
with the variations of the GPSDO and the variations of the 
rubidium. Having a reference trace of the rubidium alone will help to 
see what is reasonably additional instability from the GPSDO. You can 
view this as phase and frequency variations as well as the many ADEV 
variants of your liking. Essentially, this is exactly what we do with 
TimeLab in a straight setup.
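
A minimal sketch of the detrending step, assuming you have the 
phase-difference samples x (in seconds) at times t, e.g. from a TimeLab 
export; the synthetic numbers are only there to show the shape of it:

import numpy as np

def remove_quadratic(t, x):
    # Fit and remove offset, frequency error and linear drift (a quadratic
    # in phase) from the phase data; what remains is the combined random
    # instability of the GPSDO and the rubidium.
    return x - np.polyval(np.polyfit(t, x, 2), t)

# Synthetic example: 1e-9 s offset, 1e-10 frequency error, 1e-15/s drift,
# plus 100 ps of white phase noise over one day of 10 s samples.
t = np.arange(0.0, 86400.0, 10.0)
x = 1e-9 + 1e-10 * t + 0.5e-15 * t**2 + 1e-10 * np.random.randn(t.size)
print(remove_quadratic(t, x).std())   # ~1e-10, the noise that is left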


Once you hit the floor of your rubidium, getting a better reference, or 
visiting a friend with a better reference, becomes worth the effort. I'd 
say you can get fairly far with this approach. I strongly suggest logging 
all the state of the GPSDO into an InfluxDB database and visualizing it 
in Grafana. If you can include the rubidium phase measurements in that, 
you have a lot of useful in-loop and out-of-loop data to ponder over. 
Toss in additional environmental sensor readings to help with 
characterizing environmental effects.


Pulling the GPS/GNSS receiver state into Grafana as well can help tie 
events happening there to observed deviations.


First you will find a number of bugs, some harder than others. You are 
bound to learn a number of practicalities of implementing real-time 
control systems. For most of this, a rubidium will be just fine as a 
reference for a long time.


So, you can learn a lot this way, for reasonable money. Once you've 
covered enough of those corners, improved performance and corner cases, 
that's when you need to step up testing further.


Cheers,
Magnus


[time-nuts] Re: Types of noise (was: Phase Station 53100A Questions)

2022-02-19 Thread Magnus Danielson via time-nuts

Hi,

On 2022-02-20 00:08, Joseph Gwinn wrote:



On Sat, 19 Feb 2022 01:12:05 +0100, Magnus Danielson wrote:

Dear Joe,

On 2022-02-13 23:31, Joseph Gwinn wrote:

On Sun, 13 Feb 2022 03:30:30 -0500, time-nuts-requ...@lists.febo.com
wrote:
time-nuts Digest, Vol 214, Issue 15

Attila,



Amplitude and phase noise are looking at noise from two different
perspective. One is how large the variation of the peak of a sine
wave is, the other is how much the zero crossing varies in time.
Note that all natural noise sources will be both amplitude and
phase noise.

Hmm.  One case I'm interested in is where the path attenuation varies
according to a random telegraph waveform, due to for instance a loose
connector or cracked center conductor rattling under heavy
vibration.  In this, the electrical length does not change.  While
the source of the carrier whose PN is being measured will have some
mixture of AM and PM characteristic of that source, the residual
(added) PN will be characteristic of the transit damage encountered
between source and PN test set.  So wouldn't this randomly varying
attenuation yield mostly residual AM PN and little residual PM PN?

Actually, measure vibration impact like this have a long tradition and
is indeed possible.

I thought as much.  Can you cite any articles on this?


Well, in the audio industry, wow and flutter measurements have been 
taken a step further to use such a recording and then analyze the 
side-bands and use that to identify which wheel etc. I have not seen an 
article about it, but I've been told about it being used in the late 
1980s to diagnose professional quarter inch tape machines.


You will also find that FFT spectrum analyzers have similarly been used 
for quite some time for mechanical analysis to diagnose large machinery 
and pin-point which cog-wheel or whatever is having an issue, often with 
vibration sensors. I know that HP featured it in a few of their catalogs 
etc.


Regardless, it's fundamentally the same principle involved.

  

It may or may not be an effective method thought. As suggested by
others, TDR may very well be more effective method to locate impedance
errors. Could be that they add good information for different errors.

TDR units may have some difficulties with an unstable contact under
vibration.  When one has determined that there is a problem somewhere
using the 53100A is when the TDR equipment comes out, if the root
cause isn't obvious on inspection.
Impedance variations, when they exist, will be measurable, and smoothed 
out by the TDR for sure, but the location of the bump should be clear 
enough when detectable.

Also, recall that erroneous connectors can create passive
intermodulation distortion (PIM), which is readily measured using the
two-tone method.

The signal levels are pretty low for PIM to be important.  And the
connectors are generally gold plated.  A cracked copper conductor
could in theory do PIM, but I have not seen this.  Even if it is
happening, so long as the AM component jumps, it will serve to warn
the experimenter.
I was suggesting it as an active diagnosis approach if your signals do 
not provide suitable PIM products.




I would use a wealth of methods to attempt different techniques and see
what they excel at and not.

Yes.



I would not assume the random telegraph waveform variation. I would
rather learn from reality the types of variations you see.

Random telegraph keying is likely when a loose contact is driven with
random vibration.  If the vibration is instead a sine wave, some kind
of square-wave keying is more likely.  And so on.  Random telegraph
keyed waveform seemed representative.


Rather than random noise, you mean noise that is yet to be correlated. 
Not all of that is random in occurrence.


  

I think you should consider two different phases: detection of the
problem and location of the problem. When it comes to location finding,
TDR excels at that. AM measurements as well as PIM are relevant for
detection of the problem as well as verification.

Yes, but with a pre-scan before phase-noise tests are run.


The dynamic range of modern phase-measurement kit is amazing, so yes.

BTW, when you do indeed have PIM, AM-to-PM and PM-to-AM conversion is not 
unheard of.


  

I would recommend you to look at the updated IEEE Std 1193 when it comes
out. There is improved examples and references in it that may be of
interest to you.

Will do.  The prior version is well-thumbed now.

  

It may be beneficial to stick accelerometers here and there to pick up
the vibrations, so they can be correlated with the measured noise, as
that could help locate the source of the noise and thus help with
locating where, more or less which engine, it was coming from.

We do usually have nearby accelerometers, but no direct way to
correlate 

[time-nuts] Re: 10 MHz TCXO periodically jumping 20 mHz up and down, solved

2022-02-19 Thread Magnus Danielson via time-nuts

Erik,

What you describe is a classic problem. Especially oven controlled 
oscillators will have GND and VCC issues.


I recommend that you look at both frequency- and phase-deviation plots. 
Systematics like these get mangled up in an ADEV plot.


Regardless of what issue you really had, I hope you learned a bunch from 
all the different comments. A failure you learn from is not a failure, 
it's an experience. A failure you do not learn from is the real failure.


Keep going!

Cheers,
Magnus

On 2022-02-19 21:18, Erik Kaashoek wrote:

Magnus, others
My previous mail was not very clear but the jumping problem was solved.
It was not caused by the TCXO itself but by small current fluctuations in
the TCXO causing small VCC fluctuations, which fed back into the Vtune
input because the Vtune was derived from the VCC.
I just did not realize how sensitive the Vtune input was.

Thanks again for the feedback. Now I realize I need to check whether
there is a synthesizer inside that is being adjusted when changing Vtune
or temperature, causing unwanted clicks or steps. Can this be tested by
using a slow, small sweep on Vtune and checking with Timelab that there
are no jumps in the sweep?
Or is it better to analyze a high harmonic of the 10 MHz on an SA?


[time-nuts] Re: 10 MHz TCXO periodically jumping 20 mHz up and down, cause identified

2022-02-19 Thread Magnus Danielson via time-nuts

Erik,

So, your pick of VC-TCXO is one that obviously uses fractional synthesis 
both to set the output frequency and to compensate for temperature. The 
modulation you see will be intrinsic to that pick.


As you lock your VC-TCXO to a GPS, the average frequency will be locked 
to the GPS, but the variations you have from steering will remain. The 
control theory of a PLL lock says that you low-pass filter the signal of 
the reference and high-pass filter the noise of the oscillator. The 
cut-over frequency is the same, the bandwidth of the PLL. The actual 
responses will also be coloured by the damping factor, which should be 
kept high to avoid bumps. A rule of thumb is to keep the damping factor 
at 3 or higher. Now, what remains of your shifts will be the spikes at 
the transitions, and you do not really get rid of them unless you have a 
second clean-up PLL that can low-pass filter them out. This naturally 
assumes that you have a quiet oscillator in the clean-up, and the lesson 
is that you should usually be using that oscillator instead. The special 
case is when you have a somewhat dirty but stable oscillator, so you can 
do hold-over in unlocked conditions, and then use a clean but less stable 
oscillator as clean-up. However, many times you can just get a clean 
oscillator and avoid the issue, resulting in a simpler and cleaner 
design, which may be beneficial as you get a more compact and less 
power-hungry setup.


These disturbances can eat your precision if you do frequency counting 
or spectrum analysis. Actual frequency precision is not the only measure; 
phase noise and ADEV stability matter too (as prescribed in IEEE Std 
1139).


It sounds like you have yourself a fine little oscillator there, but it 
may be unfit for your application. I've seen that so many times. Being 
"in spec" in terms of the datasheet does not necessarily make it "in 
spec" for the application. At best, datasheets help you coarse-select 
candidates, but the candidates then need to be tested to show that their 
basic behaviour is compliant with all the performance needs of the actual 
application. Learning what is and is not relevant for an application is a 
learning experience, and one set of experience may or may not be relevant 
for another user's needs. If one only uses high-quality oscillators, the 
types of tricks used in high-volume, low-cost oscillators to provide some 
set of performance can cause surprises. Either one learns to live with 
them, or finds ways to compensate for them.


For instance, the low-frequency PWM pattern you measured on the 
frequency creates a phase deviation that looks like a triangular wave. It 
will sweep from an equal-slope triangle over to a sawtooth shape. The 
acceleration spikes, i.e. the frequency shifts at the ends, will stress a 
follow-up loop. The time deviation of the oscillator will limit what you 
can do with it for other measurements. The time deviation can be 
characterized with the standard MTIE curve, which measures the maximum 
max-to-min time deviation. This is important in telecom applications, as 
it translates to buffer size. It will also be important in time-interval 
measurements.


OK, so let's calculate the peak-to-peak phase. You measured a step 
difference of about 20 mHz and a period of 107 s. Now let's assume that 
we want to synthesize a frequency right in between the two steps, 10 mHz 
from either one. This means half the time it is set to the lower 
frequency and the other half to the higher frequency, and it becomes 
trivial to see that the frequency-synthesis goal is reached on average. 
However, it means that it spends about 50 s at +10 mHz and then 50 s at 
-10 mHz of that average. This produces a phase ramp going down 0.5 cycles 
and then back up 0.5 cycles, since 0.01 Hz * 50 s = 0.5 cycles, which at 
10 MHz is 50 ns. The peak-to-peak time deviation is thus about 50 ns. 
That is really bad. This is the problem with long PWM periods on 
frequency.
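
The same arithmetic in a couple of lines, so the units are easy to check; 
the 20 mHz step and roughly 107 s period are your measured numbers:

step_hz  = 0.020        # measured frequency step on the 10 MHz output
carrier  = 10e6
half_per = 107.0 / 2    # roughly 50 s spent at each of the two frequencies

phase_cycles = (step_hz / 2) * half_per   # 0.01 Hz * 53.5 s ~ 0.5 cycles
phase_time   = phase_cycles / carrier     # ~54 ns peak-to-peak
print(phase_cycles, phase_time)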


For one design I did, I built a reversed PWM spectrum modulation, which 
actually had about the same logic complexity as PWM. It forced the most 
significant energy to the highest possible frequency, where it becomes 
trivial to filter. For that system I had both an analog filter and the 
filtering of the VC-OCXO. The side-consequence is also that the ramp of 
phase error keeps being moved up and down at a high rate and only lower 
components would be seen, but much damped. Any remaining phase issue is 
then controlled by the loop, as it is a phase lock and those errors are 
at a low enough frequency to be suppressed by the high-pass function of 
the PLL. Also, for my case, the issue was small, since it was a way to 
cram 19 bits out of a 16-bit DAC, so the amplitude was scaled down to 
1/65536 of full-scale and then by the sensitivity of the EFC input.
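
The exact modulator is beside the point, but a first-order sigma-delta 
style accumulator is a minimal sketch of the same idea (not necessarily 
the exact scheme I used): the sub-LSB residue is carried over so the 
toggling happens at the update rate rather than over a long PWM period, 
pushing the modulation energy up to where the analog filter and the EFC 
bandwidth remove it:

def dither_stream(target, n_updates):
    # First-order sigma-delta style dither of a fractional DAC value.
    # target is the ideal (non-integer) code; the returned integer codes
    # average to target, with the error energy near the update rate
    # instead of at a slow PWM period.
    acc, out = 0.0, []
    base, frac = int(target), target - int(target)
    for _ in range(n_updates):
        acc += frac                    # accumulate the sub-LSB residue
        if acc >= 1.0:
            out.append(base + 1)
            acc -= 1.0
        else:
            out.append(base)
    return out

codes = dither_stream(32768.375, 16)   # 3 extra bits below the 16-bit LSB
print(codes, sum(codes) / len(codes))  # average comes out at 32768.375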


We have to do these ugly tricks in real-life engineering, but the trick 
is learning to cheat where it doesn't hurt too much. I hope telling about 
these things is illustrative enough to be good reading.


Cheers,
Magnus

On 2022-02-19 

[time-nuts] Re: 10 MHz TCXO periodically jumping 20 mHz up and down

2022-02-18 Thread Magnus Danielson via time-nuts

Hi Erik,

I only saw that thread later, and I will have to return to that as I 
have a little more energy.


I'm trying to get you up to speed with the many variants there are, and 
there is plenty of experience here to feed from. What may be true for one 
device will not make any sense for another. Please feel free to ask 
questions on and off list. I've had to measure the behaviour of many 
devices over the years. Eventually they reduced the range of devices I 
had to measure for qualification, since they learned what probably would 
not work.


For this price range, manufacturers have been very inventive in how they 
do compensation and also provide a wide range of frequencies. Today we 
have frequency synthesis, so the distributor can program whatever 
frequencies the customer wants. Now, combine that with a TCXO and you can 
let that synthesis also adjust the output frequency for compensation. The 
trouble is that it can cause jitter we do not want, but for many 
applications that's just fine; some even want more of it, for spread 
spectrum, to make EMC compliance easier.


It might be that you are looking in the wrong range of oscillators for 
your type of application.


Also, beware that different vendors have their different tweaks. They 
may not even be the same over time. From bitter experience, check your 
second sources before putting them onto the second source list.


Spending a bit more can get you out of certain troubles.

I do not recall what your application of choice was, sorry if I missed 
that in the process.


Cheers,
Magnus

On 2022-02-18 12:52, Erik Kaashoek wrote:

Hi Magnus,
Tom also replied to my question and suggested a 107.34 seconds 
interval related to dithering with a 1e7/2^30 interval
Unfortunately the datasheet is rather short (sub $1 device) and does 
not provide any hints to being a digital implementation.

Thanks to all for helping!
Erik.


[time-nuts] Re: Types of noise (was: Phase Station 53100A Questions)

2022-02-18 Thread Magnus Danielson via time-nuts

Dear Joe,

On 2022-02-13 23:31, Joseph Gwinn wrote:

On Sun, 13 Feb 2022 03:30:30 -0500, time-nuts-requ...@lists.febo.com
wrote:
time-nuts Digest, Vol 214, Issue 15

Attila,


On Sat, 12 Feb 2022 20:38:48 +0100, Attila Kinali wrote:

On Fri, 11 Feb 2022 18:25:05 -0500
Joseph Gwinn  wrote:


May not realize that thermal noise (additive) and phase
noise (multiplicative) are not the same, and do not behave the same.

It seems like you are mixing up here quite a few different concepts:
Phase noise vs amplitude noise, additive vs multiplicative noise,
thermal vs other noise sources, white noise vs 1/f^a-noise.

You are right of course.  I was using shorthand.

A better word than multiplicative is parametric, the varying
parameters being path loss and path group delay.  This is as seen at
the phase noise test set.



All these are orthogonal to each other and you can pick and match them.
I.e. Phase noise can be additive, 1/f^2-noise and thermal.

At the generator, certainly.  But the downstream PN test set may not
be able to tell.  More later.



Amplitude and phase noise are looking at noise from two different
perspective. One is how large the variation of the peak of a sine
wave is, the other is how much the zero crossing varies in time.
Note that all natural noise sources will be both amplitude and
phase noise.

Hmm.  One case I'm interested in is where the path attenuation varies
according to a random telegraph waveform, due to for instance a loose
connector or cracked center conductor rattling under heavy
vibration.  In this, the electrical length does not change.  While
the source of the carrier whose PN is being measured will have some
mixture of AM and PM characteristic of that source, the residual
(added) PN will be characteristic of the transit damage encountered
between source and PN test set.  So wouldn't this randomly varying
attenuation yield mostly residual AM PN and little residual PM PN?


Actually, measuring vibration impact like this has a long tradition and 
is indeed possible.


It may or may not be an effective method, though. As suggested by 
others, TDR may very well be a more effective method for locating 
impedance errors. It could be that they add good information for 
different errors.


Also, recall that faulty connectors can create passive intermodulation 
distortion (PIM), which is readily measured using the two-tone method.


I would use a wealth of methods, attempt different techniques, and see 
what they excel at and what not.


I would not assume the random telegraph waveform variation. I would 
rather learn from reality the types of variations you see.


I think you should consider two different phases: detection of the 
problem and location of the problem. When it comes to location finding, 
TDR excels at that. AM measurements as well as PIM are relevant for 
detection of the problem as well as for verification.


I recommend that you look at the updated IEEE Std 1193 when it comes 
out. There are improved examples and references in it that may be of 
interest to you.


It may be beneficial to stick accelerometers here and there to pick up 
the vibrations, so they can be correlated with the measured noise, as 
that could help locate the source of the noise and thus help with 
locating where, more or less which engine, it was coming from.


Cheers,
Magnus


[time-nuts] Re: 10 MHz TCXO periodically jumping 20 mHz up and down

2022-02-18 Thread Magnus Danielson via time-nuts

Hi Erik,

I think you have yourself a digital TCXO controller. Those use a 
temperature sensor, use the reading to calculate the compensation, and 
then use a normal varactor control to steer the frequency. Older TCXOs 
use a resistor/thermistor network to do the same work. You can probably 
read up on the vendor's material to see whether the keywords are there to 
support the suspicion.


So, what you see is really the resolution limit of the reading/control. 
As long as it's within spec, and the transitions do not upset your system 
downstream, I guess it's just fine. If you have issues with the steps, 
then another product is what you should look for.


Cheers,
Magnus

On 2022-02-18 11:11, Erik Kaashoek wrote:
During long-term testing of a 10 MHz TCXO, the output frequency seems to 
jump up by 20 mHz (millihertz) within one second every 110 seconds, and 
after about 25 seconds it jumps the same amount down, again within one 
second. The noise in the frequency measurement was well below 5 mHz.
In an ASCII drawing of frequency versus time this looked like: 
||___||
Sometimes the high-frequency period was very short (a few seconds) or 
absent, but the overall period was constant to within 5 seconds.
This was tested with 4 different power supplies (although all were mains 
connected; not yet tested with battery only), 2 different counters and 
two different reference frequency standards.
The TCXO was thermally shielded, and testing with some cold air showed a 
different behavior for external temperature changes (a fast jump away 
and a slow return to a stable frequency).
Also, with the thermal shielding removed, touching the TCXO showed the 
same fast jump away and slow return to stable.
Measuring the supply voltage did not show clear changes, but the 
voltmeter used only had 4 digits of resolution.
The official spec of the TCXO is much worse so the device is well 
within spec but I'm trying to understand why this could happen.

Does anyone know a possible cause for this behavior?
Could this be a small mains supply variation in a 110 seconds long cycle?
Or what else?

[time-nuts] Re: Is this bad OCXO behavior?

2022-02-05 Thread Magnus Danielson via time-nuts

Hi,

As you run the OCXO without the lock to the maser, what does it do then?

Can you probe the input before the loop?

The EFC voltage is a consequence of the loop action, so sorting out what 
is an oscillator thing and what is sourced by other things becomes much 
easier if you open up the loop and measure things separately.


Cheers,
Magnus

On 2022-02-06 03:28, Skip Withrow wrote:

Hello Time-Nuts,

Well, hopefully many of you have read the saga of the Sigma Tau MHM-A1
restoration by now.

Seems like the next chapter has already begun.  Attached are two
graphs, one of the VCO EFC voltage, and one of the VCO supply voltage.
They cover about a six day period (logged once per hour).  As you can
see the EFC plot has several blips in it.  After two of the three
episodes there is an offset in the EFC voltage.

The second plot is the VCO supply voltage.  There does not appear to
be any spikes there that correlate to the EFC events.  You can see the
daily diurnal variations in the supply voltage as there is 2 ohms in
series with the supply before this point (and the bus supply has a
slight diurnal variation as well).

I know that OCXO's can have these types of jumps.  But I have to
believe that this is not a desired thing for a maser.

The two big questions I have are:

1. Should I consider replacing the oscillator?  It is an Austron 1120L
which is probably unobtainium.  However, there are probably lots of
SC-cut low phase noise units out there today that would beat the pants
off this unit.

2. Could it be a component other than the crystal?  Not that I want to
go tearing into the oscillator again, but if it was the varactor diode
or a capacitor I might be up to the challenge.  The problem is trying
to identify the faulty component.

Just wondering what the hive mind thinks.
Thanks,
Skip Withrow



[time-nuts] Re: Timestamping counter techniques : dead zone quantification

2022-02-05 Thread Magnus Danielson via time-nuts

Erik,

You can also test the issue by varying the slew rate of the input 
signal. The trigger circuit will convert voltage noise into time noise, 
and any such leakage will become a larger time error for a slower slew 
rate.
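
A minimal sketch of the relation for a sine input triggered at the zero 
crossing, where vn is the input-referred voltage noise of the trigger:

import math

def trigger_jitter(vn_rms, amplitude, freq_hz):
    # RMS time jitter from voltage noise at the trigger point:
    # sigma_t = vn / slew rate, with slew rate = 2*pi*f*A at the zero cross.
    return vn_rms / (2 * math.pi * freq_hz * amplitude)

# Example: 100 uV of trigger noise on a 1 V amplitude input,
# 10 MHz versus 1 MHz: the slower slew rate gives ten times the jitter.
print(trigger_jitter(100e-6, 1.0, 10e6), trigger_jitter(100e-6, 1.0, 1e6))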


You can look up Collins' paper, amongst others, on zero-crossing detectors.

Combining that with the knowledge of leakage can be fruitful to 
understand the analog side of it.


The cross-talk comes in two flavours: one is just straight signal 
leakage, typically capacitive coupling; the other is indirect, through 
common ground bounce, which is really inductive.


High resolution counters have been used to analyze signal integrity 
issues like these. An alternative approach is TDR/TDT, which I tend to 
fancy.


A good way to characterize an input is to measure the RMS error for a 
sweep over all phase relationships between the incoming signal and the 
coarse clock. One needs to make sure those phase relationships are 
covered evenly enough.
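
A minimal sketch of that characterization, assuming a hypothetical 
measure_timestamp_error(offset) hook that returns the timestamp error for 
a given phase offset of the input edge relative to the coarse clock:

import numpy as np

def rms_vs_phase(measure_timestamp_error, coarse_period, steps=100, reps=50):
    # Sweep the input edge across one full period of the coarse clock and
    # record the RMS error at each phase relationship; a flat curve means
    # the interpolator is clean, bumps reveal non-linearity or pulling.
    offsets = np.linspace(0.0, coarse_period, steps, endpoint=False)
    rms = [np.sqrt(np.mean([measure_timestamp_error(o) ** 2
                            for _ in range(reps)]))
           for o in offsets]
    return offsets, np.array(rms)

# Usage against your own hardware hook (hypothetical):
#   offsets, rms = rms_vs_phase(counter.measure_error, 1 / 200e6)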


Cheers,
Magnus

On 2022-02-04 11:49, Erik Kaashoek wrote:

Magnus,
Thanks, good input. To check if there is "pulling" between the two 
counter inputs I used two signals generated by two PLL's from the same 
OCXO. First measurement is both at 10MHz. The ratio of these two 
signals was measured in the two counters using a shared 10MHz 
reference with a 0.1 s gate time. The ADEV behaves well and starts 
below 1e-9.
When one of the signals is shifted with 0.1 Hz (ratio change 1e-8) or 
0.2 Hz (ratio change 2e-8) the ADEV starts to show oscillations and 
the frequency difference shows a pulling pattern that repeats every 10 
s for 0.1 Hz difference and 5 s for 0.2 Hz difference.

Both ADEV and frequency difference plots are attached
The difference between the 10MHz from the signal generator and the 
10MHz reference in the counters was large enough to not create any 
visible pulling using a 0.1 s gate time but when I brought the 10MHz 
from the signal generator close (within 0.2 Hz) to the 10MHz reference 
in the counter the interaction became very visible and the repetition 
rate nicely varied with the measured frequency difference.
This clearly demonstrates the cross-talk you mentioned, both between 
the two counter inputs and between the inputs and the counter 
reference OCXO
As my goal is to create a dual input timestamping counter that can 
reliably measure with 1e-9 accuracy (both short and long term) there 
is clearly some work to do.

Erik.

On 3-2-2022 17:14, Magnus Danielson via time-nuts wrote:

Erik,

You should be aware that cross-talk of transitions is a factor here. 
It "pulls" the transition to the time-base clock.


It can be worth evaluating this by delaying the time-base clock in a 
controlled manner and measuring the non-linearity of the time-stamps.


A similar test is done between two inputs, as the trigger inputs can 
cause cross-talk from one another. This is known to be an issue in 
several vendors' counters.


As you push the limit of the resolution, these effects tend to 
increase in relative size, but for other work they can be fairly safely 
ignored.


For some reason I have built a collection of pulse-generators and 
delay mechanisms to increase the ability to test this. :)


Cheers,
Magnus




[time-nuts] Re: Timestamping counter techniques : dead zone quantification

2022-02-03 Thread Magnus Danielson via time-nuts

Erik,

You should be aware that cross-talk of transitions is a factor here. It 
"pulls" the transition to the time-base clock.


It can be worth evaluating this by delaying the time-base clock in a 
controlled manner and measuring the non-linearity of the time-stamps.


A similar test is done between two inputs, as the trigger inputs can 
cause cross-talk from one another. This is known to be an issue in 
several vendors' counters.


As you push the limit of the resolution, these effects tend to 
increase in relative size, but for other work they can be fairly safely 
ignored.


For some reason I have built a collection of pulse-generators and delay 
mechanisms to increase the ability to test this. :)


Cheers,
Magnus

On 2022-02-03 09:36, Erik Kaashoek wrote:
To prepare for the implementation of dead zone countermeasures I did 
some measuring of the dead zone band width versus frequency of the 
subharmonic
The test setup used a generator with two outputs, one fixed at 10 MHz 
and one variable to test the dead zone. The fixed 10 MHz was sent to 
one input of the timestamping counter. The variable frequency output 
was sent to the other input.
The reference clock used for the timestamping was set to 200 MHz and, 
through its VC-TCXO,  locked to the fixed 10MHz using a SW control 
loop updating the voltage to the VC-TCXO once every 10 seconds.
As the generator used was only able to set frequency with 0.1 Hz 
resolution, there were some limitations in this assessment.
The dead zone was observed on the sub harmonics of 200MHz and its 
harmonics. The size of the dead zone was very much dependent on the 
used frequency
Below 1MHz the width of the dead zone was below 0.1Hz and thus not 
observable

At 10 MHz the width was about 1 Hz
At 40 MHz the width was about 2 Hz
At 80 MHz the width was about 10Hz
This makes implementing dead-zone countermeasures doable: with 
lower-frequency subharmonics the width of the dead zone decreases, which 
limits the number of subharmonics to include in the calculations and 
makes it unlikely that there is a scenario where the two input 
frequencies make it impossible to find a reference frequency that avoids 
the subharmonics.


[time-nuts] Re: Timestamping counter techniques : phase computation question

2022-01-31 Thread Magnus Danielson via time-nuts

Erik,

On 2022-01-31 22:17, Erik Kaashoek wrote:

@Magnus
The time interval of the capturing of the counters is not always exactly
the same. There could be even substantial variation if the capture interval
is close to the event interval. Is this a problem for the calculation
method you propose?


Observant!

Yes, there is an assumption that the time between samples is robust. In 
fact, the approximation assumes that there is a regular \tau_0 between 
samples, and with that you can collapse the big linear-algebra part.

At the same time, if you slip one cycle of the signal and measure the 
edge of a later cycle (thus getting an event count higher than ideal), 
the next measurement will be one cycle shorter, as it is likely to occur 
on the ideal event. These tend to balance out. If you do plain averaging, 
it turns out they balance out perfectly, because as you add up the time 
and event counts you have just expanded the base length. This is why 
plain averaging does not give you more precision than just measuring the 
end points. Now, if you make separate frequency estimates and average 
those, it will not balance perfectly, but the effect is fairly small.


So, if your time-base generator is not stable, it can pollute your 
observations somewhat. To some degree this can be remedied by running a 
separate least squares on the event data to produce a \tau_0 estimate and 
plugging that into the final estimator forms. Another approach is to make 
sure that the time base is always a fixed number of cycles of the event 
or time counter.


An approach to slipped cycles is to back-annotate them, simply by 
removing their phase advance over that time. If your frequency is a very 
near perfect match, the impact will be very low anyway, so it can be 
ignored. Actually, it usually is ignored.


So, you just stumbled on one of the more peculiar things that make 
counter frequency estimation less perfect than you would think: part of 
it is systematic noise, a systematic noise often overlooked and ignored. 
Funnily enough, some of it washes out with white noise and averaging.


Cheers,
Magnus


[time-nuts] Re: Timestamping counter techniques : phase computation question

2022-01-31 Thread Magnus Danielson via time-nuts

Erik,

On 2022-01-31 20:32, Erik Kaashoek wrote:

Thanks all for the good input.

@Magnus, I need some time to understand the math as it has been over 30
years since when I used to do this kind of math.
There is no intention to store the collected captures, only to present a
measurement at the measurement interval, so currently I'm calculating
the 5 running sums from the captures and at the end of the interval I do
the regression calculation using these running sums like described in
the Wikipedia article on linear regression.
This is what I am storing now (Sum means running sum from start of
measurement interval till capture number n):  Sum(X), Sum(Y), Sum(X*X),
Sum (Y*Y) and Sum (X*Y) and n.


You end up not needing to use the linear algebra part, as it is removed 
and reduced.


You form two sums, then process them through one of two rules, depending 
on whether phase or frequency is estimated, and you are done.


This performs a very cheap least-squares estimation. The decimation 
rules allow hierarchical decimation, so you end up in Fast Least Squares 
territory just as you do with the Fast Fourier Transform.
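
For completeness, the plain school-book closed forms from the running 
sums you listed look like this; the two-sum decimating forms I mention 
above reorganize this, but the plain version is enough to get going and 
also shows where the precision goes:

def slope_intercept(n, Sx, Sy, Sxx, Sxy):
    # School-book least-squares line fit from running sums. The slope is
    # the clock/event frequency ratio, the intercept the clock count at
    # event count zero (the phase). This form loses precision for large
    # raw counts, which is why subtracting the first capture, or using a
    # reorganized accumulation, matters.
    denom = n * Sxx - Sx * Sx
    slope = (n * Sxy - Sx * Sy) / denom
    intercept = (Sy - slope * Sx) / n
    return slope, intercept

# Example with small offsets-from-first-capture values:
xs = [0, 1000, 2000, 3000]
ys = [0, 21333, 42667, 64000]
n, Sx, Sy = len(xs), sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
print(slope_intercept(n, Sx, Sy, Sxx, Sxy))   # slope ~21.333, intercept ~0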


Cheers,
Magnus


[time-nuts] Re: Timestamping counter techniques : phase computation question

2022-01-31 Thread Magnus Danielson via time-nuts

Hi Erik,

On 2022-01-30 12:46, Erik Kaashoek wrote:
In a timestamping counter I'm trying to calculate phase and frequency 
using statistical techniques.
The counter has two counters, one for the input events and one for an 
internal clock.

The capturing of these counters happens synchronized with an event.
The counter takes the timestamps at more or less regular, but not 
identical, intervals

An example of 20 captures is:

Events   Clock
280707207    1693452332
280708207    1693473665
280709207    1693494999
280710207    1693516332
280711207    1693537665
280712207    1693558999
280713207    1693580332
280714206    1693601644
280715207    1693622999
280716206    1693644310
280717206    1693665644
280718206    1693686977
280719206    1693708311
280720206    1693729644
280721206    1693750977
280722206    1693772311
280723206    1693793644
280724206    1693814977
280725206    1693836310
280726206    1693857644

By subtracting the counts at the first capture one gets:

Events  Clock
0       0
1000    21333
2000    42667
3000    64000
4000    85333
5000    106667
6000    128000
6999    149312
8000    170667
8999    191978
9999    213312
10999    234645
11999    255979
12999    277312
13999    298645
14999    319979
15999    341312
16999    362645
17999    383978
18999    405312

It is visible the clock count interval is not completely constant and 
the mathematical method used should be able to deal with these 
interval variations.
To calculate the ratio between the frequency of the events and the 
frequency of the clock a linear regression is calculated where the 
event capture is the X and the clock capture is the Y.
The slope of the linear regression is the ratio between the frequency 
of the events and the frequency of the clock.
To calculate the phase between the clock and the events, it is assumed 
that the regression also provides the Y intercept at X = 0, i.e. the 
fractional correction of the clock count at event count zero, since the 
capture happened synchronized with the event (and NOT with the clock).

I think you got off to a great start there.


This leads me to my questions:
1: Is using linear regression as described above a good method to 
calculate the phase relation between events and clock? If not, what 
method to use?


Modern counters use linear regression. However, the usual formulas can 
run into numerical precision issues. An alternative approach is to do a 
least-squares analysis that gives the same benefit as linear regression, 
but where the calculation can be arranged to avoid or control the 
numerical precision issues.


To get started, classic linear regression is doing just fine.
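
With the offsets-from-the-first-capture numbers above, the whole fit is a 
few lines; the slope is the clock/event ratio and the intercept is the 
fractional clock count at event zero:

import numpy as np

events = np.array([0, 1000, 2000, 3000, 4000, 5000, 6000, 6999, 8000, 8999,
                   9999, 10999, 11999, 12999, 13999, 14999, 15999, 16999,
                   17999, 18999])
clocks = np.array([0, 21333, 42667, 64000, 85333, 106667, 128000, 149312,
                   170667, 191978, 213312, 234645, 255979, 277312, 298645,
                   319979, 341312, 362645, 383978, 405312])

slope, intercept = np.polyfit(events, clocks, 1)
print(slope)       # ~21.3333 clock counts per event, the frequency ratio
print(intercept)   # clock count at event zero, i.e. the phase in clock units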

You should be aware that the effective bandwidth you have will depend on 
the number of samples, so when you later compute ADEV it will be affected 
by the filtering of the linear regression/least squares; the same 
filtering that helps you make good measurements, as it suppresses white 
noise.


It is often claimed that linear regression gives optimum frequency 
estimation; this is not true.


2: For highest accuracy of the calculation output, is it best the 
captures are at (almost) regular intervals (as above) or is some form 
of dithering of the interval better? And what form of dithering is best?


That will not help you in a good way. Keep the interval as close to 
constant as you can, and then let the linear regression work for you. The 
suppression slope will be the optimum 1/N^1.5 until you hit the floor of 
the other noises.


There is a systematic noise you want to fight, and the best way to fight 
it is with a dither signal on the quantization; dithering the sample 
points on a large scale is not the best method. Typically a "white noise" 
dither is better. However, the filtering does most of this anyway, so it 
is rarely done.


3: Assuming it is possible to have a large amount (1e+5) of captures 
per measurement interval, are there other or additional methods to 
further improve the accuracy?


Linear regression / least square at a high rate can let you decimate 
data for good precision and high read-out rate.


Maybe this [1] article of mine can be of inspiration.

Cheers,
Magnus

[1] 
https://www.academia.edu/63957662/Memory_efficient_high_speed_algorithm_for_multi_%CF%84_PDEV_analysis



[time-nuts] Re: HP105B

2022-01-17 Thread Magnus Danielson via time-nuts

Hi,

On 2022-01-17 08:42, Poul-Henning Kamp wrote:


Magnus Danielson via time-nuts writes:


No, not for HP.

Somewhere on one of the HP memory webpages there is a presentation
where somebody claims the longest lived HP product is a particular
microwave gadget, I seem to recall it being a directional coupler.

There are several such gadgets that could very well be the longest-lived 
HP (/Agilent/Keysight) products, but I raised the one they highlighted 
themselves, and it has a lifetime longer than mentioned for the 105. I 
think one should be careful to separate measurement instruments from 
gadgets; nothing wrong with either, but the former tend to have a shorter 
life-cycle than the latter can have.


I have a pair of HP 200CDs; I really enjoy them for what they are, but 
they rarely come to use.


Cheers,
Magnus


[time-nuts] Re: HP105B

2022-01-16 Thread Magnus Danielson via time-nuts

Hi,

No, not for HP. The HP 200A through D products were separate products 
for 8 years, then the 200A and 200B were merged into the 200AB and the 
200C and 200D into the 200CD, which ran for another 37 years, totaling 45 
years of continuous production. Not bad for a tube-based oscillator. The 
numbers are from the top of my head, but this was featured in an HP PR 
article, where they were proud of the fact even after breaking out 
Agilent. Someone can probably do the research and correct the details, 
but I distinctly recall the 45 total years.


Cheers,
Magnus

On 2022-01-16 22:18, Louis Taber wrote:

Hi All,

The HP 105B is in the HP catalogs for a 33 year period, 1968 through
2000.  That is a 33 year run.  Is this an HP/Agilent/Keysight record?
  Does anyone know how the newer units were fabricated?

It is just barely mentioned in the year 2000 catalog.  The price went up
from $1800 to $9700 in 26 years.  About 5.5 times the original price.  I
wonder what the price was in 2000.

I was looking at the old HP catalogs at http://hparchive.com/hp_catalogs
for the HP105B Quartz Oscillator.  The 1967 catalog has the 106A and
107AR/BR on page 538, but no 105A/B

1968 $1800 p594-597
1969 $1800 p648-651
1970 $1800 p624
1972 $1950 p237
1973 $2145 p284
1974  No catalog or supplement published
1975 $2470 p287
1976 $2725 p276
1977 $2950 p274
1978 $3250 p300
1979 $3500 p282
1980 $3750 p284
1981 Not listed
1982 $5750 p307
1983 No price p281
1984 No price p275
1986 $5800 p257
1987 $6400 p340
1988 $6800 p467
1989 $7500 p487
1990 $8600 p491
1991 $9000 p510
1992 $9500 p556
1993 $9700 p498
1994/5/6 No catalog on the site.
1997 No price p493
1998 No price p503
1999 No price p508
2000 "The HP 105B quartz frequency standard uses the HP 10811D and is
available as a complete standalone instrument."  p 491


[time-nuts] Re: Clock specs for audio (was: High precision OCXO supplier for end costomers)

2022-01-10 Thread Magnus Danielson via time-nuts

Hi,

On 2022-01-10 15:41, Attila Kinali wrote:

On Mon, 10 Jan 2022 12:35:17 +0100
Attila Kinali  wrote:


That said, are you sure you need such stringent phase-noise requirements?
It's audio. Nobody is going to hear whether the noise is -60dBc or -80dBc @ 1Hz,
much less -120dBc.

To give a bit more background here: psychoacoustic masking, which is the 
relevant metric here, means that we cannot discern sounds that are close 
to each other when one of them is louder than the other. Depending on who 
you listen to, it's
usually a sound masking another sound at a distance of 100Hz up to 20dB to 40dB
lower. Even if we account for someone with golden ears and use 60dB, that would
translate to a noise spec of -60dBc @ 100Hz offset. That's a spec that almost 
all
XO do fulfill. A good VCXO (40-100MHz) is somewhere around 90-100dBc @ 100Hz.
Any OCXO will fulfill that spec too, even the tiny DIL-14 ones (most are at
-110-140dBc @100Hz @10MHz).


A classic paper on jitter for audio was written by the late Julian Dunn:

http://www.nanophon.com/audio/jitter92.pdf

He wrote a range of good papers which are worth digging up.

AES-3 (also known as AES/EBU) is the professional version of the digital 
audio interface whose consumer version is called S/P-DIF. Much of 
the reasoning for AES-3 applies directly to S/P-DIF.




And this doesn't take into account that we are arguing about audio frequency
specs at HF frequencies. I.e. if we use the 10MHz clock and use it to derive
a sampling clock for an ADC to sample a 20kHz signal, the noise performance
improves by another ~25dB... at least (if the design is done right, it can be
up to 50dB)

What is more important than close-in noise, though, is broadband noise
performance and spurs. For someone with good ears, it's not unheard of to be
able to discern far-away noise and spurs down at -100 dB to -120 dB. Especially
the spurs can be quite hard to control, depending on what clock synthesis
system is used.

Another important spec, especially for recording, is accuracy of frequency. An
offset of just 1ppm becomes 3.6ms if you record for an hour. That's something
most people can hear already. But whether this is actually relevant or not depends
on how the recording is done. The usual way is to have a central master clock
that feeds all clocked devices, such that all of them have the same notion of
time/frequency. In that case, quite high frequency deviations can be tolerated,
way beyond what a simple XO would deliver.


In professional audio we tolerate +/- 10 ppm for larger productions, and in 
all my experience the exact frequency has not been as much trouble as 
differing frequencies and the degradation that comes from slips, resyncs or 
Sample Rate Conversion (SRC). It has been interesting to teach the TV and 
radio techs about basic synchronization and how doing it well once saves 
money, time and quality. There are a few war stories to be told.
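
As a quick illustration of the arithmetic above, a minimal Python sketch
(function name and numbers are illustrative only): the accumulated time error
is simply the fractional frequency offset times the recording length.

    # Accumulated time error from a constant fractional frequency offset.
    def time_error_s(offset_ppm, duration_s):
        return offset_ppm * 1e-6 * duration_s

    # 1 ppm over a one-hour recording:
    print("%.1f ms" % (time_error_s(1.0, 3600) * 1e3))    # -> 3.6 ms
    # The +/- 10 ppm tolerance mentioned above, over the same hour:
    print("%.1f ms" % (time_error_s(10.0, 3600) * 1e3))   # -> 36.0 ms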


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: PICDIV stability (was: Crystal oscillator for a begginer)

2022-01-09 Thread Magnus Danielson via time-nuts

Hi,

The traditional way is to lock an oscillator and look at the phase 
detector output.


You get a high-pass filter from the locking, but for many purposes 
that's just fine.


In some cases it is called "the golden PLL method".
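
A minimal Python sketch of that idea (assuming a simple first-order loop;
names and numbers are illustrative): the phase detector sees the oscillator
phase noise through a high-pass, so readings well above the loop bandwidth
need no correction, while readings below it can be scaled back up if the loop
response is known.

    import numpy as np

    def undo_pll_highpass(f_hz, L_meas_dB, f_loop_hz):
        """Remove the first-order PLL error response |H(f)|^2 = f^2/(f^2 + f_loop^2)
        from a measured phase-noise plot taken at the phase-detector output."""
        h2 = f_hz**2 / (f_hz**2 + f_loop_hz**2)   # power response of the high-pass
        return L_meas_dB - 10.0 * np.log10(h2)    # estimated free-running noise

    f = np.array([1.0, 10.0, 100.0, 1e3, 1e4])    # offset frequencies, Hz
    measured = np.array([-70.0, -100.0, -120.0, -130.0, -140.0])
    print(undo_pll_highpass(f, measured, f_loop_hz=10.0))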

Cheers,
Magnus

On 2022-01-09 18:04, Marek Doršic wrote:

Is there any method to measure random jitter without TimePod or scopes costing 
a small fortune?

 .md


On 9 Jan 2022, at 01:21, Bruce Griffiths  wrote:

Yes, that post is full of misleading information.
The TI document is irrelevant as the PIC based divider doesn't have non 
harmonically related signals using the same chip.
All internal signals within the PIC are harmonics of the divided output signal.
The post did not distinguish between random jitter and data dependent jitter 
etc. Either the poster doesn't understand the finer details of frequency 
division or the post is intended to mislead.

Bruce

On 09/01/2022 12:55 Angus via time-nuts  wrote:


Maybe it got mashed up, but I only linked to one post, and that
addressed the specific question that had been asked. There is also, as
far as I know, no 'misinformation' in it. However, if anything does
need correcting, I can easily do that.

  One of the main reasons that I did the test was all the actual
(IMHO...) misinformation that was in the thread about the PIC
dividers. I find them very useful and have not had any problems with
them, but since they are mostly used on 53131As which do not have a
very high resolution, I also wanted to see if I was missing anything.

As far as I can see, it showed just what is going on as well as I
could have expected with that scope, so I don't quite agree that
*everything* should be ignored :)

Angus.


On Sat, 8 Jan 2022 14:00:28 +1300 (NZDT), you wrote:


That entire thread is full of misinformation and should be ignored unless one 
understands the difference between random and data dependent jitter.

For a well designed divider with a single output frequency only the random 
jitter spec is significant.

One doesn't need a bunch of expensive LeCroy gear to measure RJ of such 
dividers as its PN manifestations are readily apparent and measurable.

Using one of the supposedly super-low-jitter flipflops isn't a panacea. In 
practice, unless an appropriately designed ZCD is used, the wideband input noise 
of the very fast FF will dominate and produce much more jitter than expected 
due to the relatively slow slew rate of the outputs of most 10MHz sources.
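
As a back-of-envelope illustration of that slew-rate point, a minimal Python
sketch (the noise and amplitude numbers are assumptions, not from the post):
input-referred jitter at a fast slicer is roughly the RMS noise voltage divided
by the slew rate at the threshold.

    import math

    def jitter_rms_s(vn_rms_v, slew_v_per_s):
        # sigma_t ~= v_noise / (dv/dt) at the zero crossing
        return vn_rms_v / slew_v_per_s

    # 10 MHz sine, 1 Vpp (0.5 V amplitude): peak slew = 2*pi*f*A ~= 31 V/us
    slew = 2 * math.pi * 10e6 * 0.5
    # 0.5 mV of wideband noise referred to the FF input:
    print("%.1f ps rms" % (jitter_rms_s(500e-6, slew) * 1e12))   # ~16 ps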

Bruce


On 08/01/2022 12:40 Angus via time-nuts  wrote:


On Fri, 07 Jan 2022 12:40:49 -0800, you wrote:


The two biggest outside influences on the PICDIV are supply voltage and 
temperature.

Another interesting influence is the number of outputs that are switching and
the load on them.  In particular, if you have several outputs running at
different frequencies, the clock-out delay should be slightly longer when 2
outputs switch when compared to when only one is switching.

Has anybody measured that on a PIC? (or similar chip)

I think one of tvb's picDEVs has several outputs.

To some extent:
https://www.eevblog.com/forum/projects/easiest-way-to-divide-10mhz-to-1mhz/msg3257018/#msg3257018
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: PICDIV stability (was: Crystal oscillator for a begginer)

2022-01-08 Thread Magnus Danielson via time-nuts

Hi Hal,

On 2022-01-07 21:40, Hal Murray wrote:

The two biggest outside influences on the PICDIV are supply voltage and 
temperature.

Another interesting influence is the number of outputs that are switching and
the load on them.  In particular, if you have several outputs running at
different frequencies, the clock-out delay should be slightly longer when 2
outputs switch when compared to when only one is switching.

Has anybody measured that on a PIC? (or similar chip)

I think one of tvb's picDEVs has several outputs.


I have measured it on the TADD-2. It's not as quiet as I could wish, but 
there is still good use for the TADD-2s.


It's very clearly noise from other outputs.

So, one needs to check how quiet it has to be. Using a single-frequency 
output setup can be expected to be much quieter. For some counter 
setups, the PICDIV is better than the counter anyway.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: OCXO Oven design (was: E1938A phase noise improvement)

2021-12-28 Thread Magnus Danielson via time-nuts

Hi,

On 2021-12-26 23:38, Attila Kinali wrote:

On Sun, 26 Dec 2021 15:54:06 -0500
Bob kb8tq  wrote:


The market is what dictates how fancy an OCXO gets made. Bottom
line is that there really isn’t that big a market (and willingness to pay
for super duper TC). If indeed you could do all the fancy stuff and
still keep the sell price below $10 then who knows ….

I wonder about that. I was told by an ex-manager of
Oscilloquartz, that their biggest problem with the 8607
was that the Option 08 sold too well. So much, that
they had a huge over-supply of the other, lesser
versions of the 8607, to the point that even raising
the price of the Option 08 beyond what a car cost
didn't recover its cost.

Sure, such a high stability oscillator doesn't have a
mass market. And it's definitely not a commodity item.
But there seems to be a decent market even if it's very
expensive.


Well, within some very small market segment it may feel unlimited. Trust 
me when I say I wish I could fit 8607s into my boxes, but they would not 
fit the price range, size and, to some degree, power options. There are 
actually quite a few oscillators made today that also do not fit the bill.


There isn't one market. There are actually several parallel market 
segments, each with its own quirks and logic. The oscillators 
are put in different environments for various reasons.


I see an increased market for wider temperature ranges, as more devices 
need to operate in cars, so outside the 0 to 70 C range and into -40 to +85 
C. That only benefits a small class of oscillators, market- and 
feature-wise.


Another thing we see is that synthesizer chips are taking over. We use 
fewer odd-frequency oscillators today in our designs. That also changes the 
market.


So what may be true for Oscilloquartz and their very narrow 
customer range does not apply to other uses, and I am in telecom 
timing just as they are.


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-16 Thread Magnus Danielson via time-nuts
In modern IP/MPLS routing, forward and backward paths are individually 
routed and become re-routed to spread traffic load or overcome loss of 
links. Analyzing the data in this form makes much more sense. You can draw the 
same conclusion from the wedge plot of TE vs RTT, but it may be less 
clear, so shift the plot form to match the problem.


At ITSF 2021 GMV had a presentation where they used NTP based on Chrony 
between two RPis with good GNSS timing on both ends, across two FTTH 
accesses. They showed significant noise and asymmetry. In parallel they 
were using different equipment which was able to chew its way into the 
noise and see other variations.


Cheers,
Magnus

On 2021-12-15 17:43, Adam Space wrote:

Good idea. Doing so reveals the expected outcome from the wedge plot:
variable forward path delay, shifted in the positive direction, and a
pretty stable negative path delay. Is this the norm for consumer grade
connection? It seems to be for me.

On Wed, Dec 15, 2021 at 10:53 AM Magnus Danielson via time-nuts <
time-nuts@lists.febo.com> wrote:


Hi,

Expect network routes to be more dispersed these days, as it is needed.

While the wedge plot is a classic for NTP, it may be interesting to plot
forward and backward path histograms independently.

Cheers,
Magnus

On 2021-12-15 16:25, Adam Space wrote:

Yeah I think it is localized. Network paths have been quite variable for
me. Every once in a while I start getting massive delays from the NIST
servers to my system, resulting in results like yours.

Interestingly though, time-e-g was one of the only servers that didn't have
this problem for me. This is a recent wedge plot for it. It seems to be
working fine for me now, just with a variable outgoing delay causing
positive offsets, which seems to be more of a problem with my connection
than anything else.

On Tue, Dec 14, 2021 at 9:04 PM K5ROE Mike  wrote:


On 12/14/21 5:23 PM, Hal Murray wrote:

Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
over the offsets for a whole month, what kind of value would you get? Surely
it is close to zero but I am curious how close. Within 1ms?

It depends.  Mostly on the routing between you and NIST.  If you are closer,
the routing is more likely to be symmetric.

From my experience, routing is generally stable on the scale of months.  There
are short (hours) changes when a fiber gets cut or a router gets busted.
There are long term changes as people add fibers and/or change business deals.

There are some cases where a stable routing will produce 2 answers: x% of the
packets will be slightly faster/slower than most of them.  I think what's
going on is that the routers are doing load sharing on multiple paths, hashing
on the address+port.  Or something like that.  So it's a roll of the dice
which path you get.


I'm in California.

NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ
of Colorado is a few miles from NIST.)

From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST is
31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time offsets
are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.

From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms for
NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.


Might be a localized routing phenomenon.  Using my verizon connection from
Northern Virginia the results are awful for time-e-g.nist.gov:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-192.168.1.219   68.69.221.61     2 u   56   64  377    0.400   -0.290   0.035
*192.168.1.224   .PPS.            1 u    1   16  377    0.184    0.087   0.017
-129.6.15.26     .NIST.           1 u   32   64  377   93.087  -37.940   7.867

However from my AWS machine in Oregon:

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 152.63.13.177                 3   6   377    63  -2011us[-2011us] +/-  128ms
^+ 209.182.153.6                 2   7   377    65   -959us[ -959us] +/-   86ms
^- 64.139.66.105                 3   6   377   128  -5838us[-5838us] +/-  134ms
^+ 129.6.15.26                   1   6   377    64  -2075us[-2075us] +/-   37ms
^* 173.66.105.50                 1   8   377   438   -448us[ -870us] +/-   38ms


-mike
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


___
time-nuts mailing list -- tim

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Magnus Danielson via time-nuts

Hi,

Expect network routes to be more dispersed these days, as it is needed.

While the wedge plot is a classic for NTP, it may be interesting to plot 
forward and backward path histograms independently.
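
For anyone who wants to try that, a minimal Python sketch (variable names are
illustrative) of how the two one-way "apparent" delays fall out of the four
NTP timestamps, ready to be histogrammed separately:

    # t1 = client transmit, t2 = server receive,
    # t3 = server transmit, t4 = client receive (each in its own clock).
    def apparent_delays(t1, t2, t3, t4):
        forward  = t2 - t1                        # true forward delay plus clock offset
        backward = t4 - t3                        # true backward delay minus clock offset
        rtt      = (t4 - t1) - (t3 - t2)          # the offset cancels here
        offset   = ((t2 - t1) + (t3 - t4)) / 2.0  # standard NTP offset estimate
        return forward, backward, rtt, offset

    # Made-up example: 40 ms out, 20 ms back, server clock 5 ms ahead:
    t1 = 0.000
    t2 = t1 + 0.040 + 0.005
    t3 = t2 + 0.001
    t4 = t3 + 0.020 - 0.005
    print(apparent_delays(t1, t2, t3, t4))
    # Collect forward/backward over many polls and histogram them separately.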


Cheers,
Magnus

On 2021-12-15 16:25, Adam Space wrote:

Yeah I think it is localized. Network paths have been quite variable for
me. Every once in a while I start getting massive delays from the NIST
servers to my system, resulting in results like yours.

Interestingly though, time-e-g was one of the only servers that didn't have
this problem for me. This is a recent wedge plot for it. It seems to be
working fine for me now, just with a variable outgoing delay causing
positive offsets, which seems to be more of a problem with my connection
than anything else.

On Tue, Dec 14, 2021 at 9:04 PM K5ROE Mike  wrote:


On 12/14/21 5:23 PM, Hal Murray wrote:

Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
over the offsets for a whole month, what kind of value would you get? Surely
it is close to zero but I am curious how close. Within 1ms?

It depends.  Mostly on the routing between you and NIST.  If you are closer,
the routing is more likely to be symmetric.

From my experience, routing is generally stable on the scale of months.  There
are short (hours) changes when a fiber gets cut or a router gets busted.
There are long term changes as people add fibers and/or change business deals.

There are some cases where a stable routing will produce 2 answers: x% of the
packets will be slightly faster/slower than most of them.  I think what's
going on is that the routers are doing load sharing on multiple paths, hashing
on the address+port.  Or something like that.  So it's a roll of the dice
which path you get.



I'm in California.

NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ
of Colorado is a few miles from NIST.)

From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST is
31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time offsets
are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.

From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms for
NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.



Might be a localized routing phenomenon.  Using my verizon connection from
Northern Virginia the results are awful for time-e-g.nist.gov:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-192.168.1.219   68.69.221.61     2 u   56   64  377    0.400   -0.290   0.035
*192.168.1.224   .PPS.            1 u    1   16  377    0.184    0.087   0.017
-129.6.15.26     .NIST.           1 u   32   64  377   93.087  -37.940   7.867

However from my AWS machine in Oregon:

MS Name/IP address Stratum Poll Reach LastRx Last sample

===
^- 152.63.13.177                 3   6   377    63  -2011us[-2011us] +/-  128ms
^+ 209.182.153.6                 2   7   377    65   -959us[ -959us] +/-   86ms
^- 64.139.66.105                 3   6   377   128  -5838us[-5838us] +/-  134ms
^+ 129.6.15.26                   1   6   377    64  -2075us[-2075us] +/-   37ms
^* 173.66.105.50                 1   8   377   438   -448us[ -870us] +/-   38ms


-mike
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send
an email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.




[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Magnus Danielson via time-nuts

Hi,

On 2021-12-14 17:26, Steven Sommars wrote:

The Gaithersburg servers are accurate.  This plot shows Gaithersburg
time-e-g.nist.gov for the current month.
[image: image.png]
My monitoring client is located near Chicago and is Stratum-1 GPS sync'd.
Typical round-trip time to Gaithersburg is ~27 msec.
On 2021-12-07 a few monitoring polls saw RTT of ~100 msec.  This changes
the computed offset from ~0 msec to ~40 msec ((100-27)/2). Such transient
increases are often called "popcorn spikes". Many NTP clients, including
ntpd and chrony, contain logic that identifies and suppresses these outliers.
Further, Gaithersburg is subject to fiber-cut outages and other planned &
unplanned network outages.
If you look carefully at the diagram, you can see a brief outage beginning
at about 2021-12-08 03:00 UTC.

I have other monitoring clients and can produce similar diagrams from other
locations.  A monitoring client
in Japan saw this for the same Gaithersburg server:
[image: image.png]
The delay spikes occur at different times and have different signs.  [The
2021-12-08 outage is still present]
See   http://leapsecond.com/ntp/NTP_Paper_Sommars_PTTI2017.pdf for a
discussion of why there are multiple
offset bands.  In the same paper there are examples of sustained high
delay, something that

Magnus summarized the situation.  Either asymmetric network delay or a
misbehaving NTP server can
cause the computed offset to be non-zero.The former is very common.
NTP servers, even stratum 1's
driven by GPS, are sometimes in error.


For sure. I've seen significant biases and jitter from bad servers. I 
just had to save the company from getting into a worse situation, as the 
IT folks wanted to set up a new server in a virtualized machine. Having 
multiple proper machines in house, it was just a few things to fix to 
set it up.


Also, I assume that NIST has monitoring, and in fact if you look back I 
tossed a link that gave an in-depth report from NIST on their current setup.


If actual offsets traceable to the actual machines of NIST are found, 
please report them to NIST; I think that would be Judah Levine.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Magnus Danielson via time-nuts

Hi,

On 2021-12-14 02:26, Adam Space wrote:

I'm not sure if anyone else uses the NIST's NTP servers, but I've noticed
that the offsets I'm getting from Gaithersburg servers seem to be
really far off, like 40-50 ms off. This is pretty odd since they usually
have a 2 - 3 ms accuracy at worst.

It is interesting to think about what is going on here. NIST has a secondary
time scale at Gaithersburg, maintained by a couple of caesium clocks that are
typically kept within 20ns of UTC(NIST), i.e. their primary time scale in
Boulder. They also host their remote time transfer calibration service and
their Internet Time Service (i.e. NTP servers) out of Gaithersburg.

It seems highly unlikely that their time scale there is that far off. One
thing that immediately comes to mind is asymmetric network delays causing
this. I do think this has to be the reason for the large discrepancy, but
even so, it is an impressive feat of asymmetric path delays. The maximum
error in offset from a client to server due to asymmetric network delays is
one half of the delay. (This corresponds to one path being instantaneous
and the other path taking the entire delay time). When I query their
servers, I am getting about a 45ms offset, and a delay of around 100ms.
This would mean the maximum error due to asymmetric path delays is around
50ms--and less even if we're being realistic (one of the delays can't
literally be zero). Basically, for the offset error to be accounted for
primarily by asymmetric network delays, the delays would have to be *very*
asymmetric.


For the asymmetry to be 45 ms, the difference between forward and 
backward path would need to be 90 ms, since the time error will be the 
difference in delay divided by 2. The round trip time is instead the sum 
of the two delays.


Now, as you observe this between two clocks with a time difference, the 
time difference adds onto the TE, but does not show up in the RTT.


So, a 90 ms difference would fit, but the delays would then be 95 ms and 5 ms 
+/- 5 ms, since we can trust the RTT to be unbiased. Now we come to what is 
physically possible, and 5 ms is about 1000 km of fiber delay. You can 
calculate the minimum distance, and thus delay, from your own location. In 
practice fiber is not pulled as straight as one would wish. I use at least a 
square root of 2 as a multiplier, but many agree that this is still 
optimistic and it can be far worse.
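
A small Python sketch of that bound (the numbers are taken from the figures
quoted in this thread, purely as an illustration): if the whole observed
offset were due to path asymmetry alone, the two one-way delays implied by the
RTT would be rtt/2 +/- offset, and the short leg can be sanity-checked against
the minimum fiber delay for the distance involved.

    import math

    rtt, offset = 0.100, 0.045        # seconds, roughly the numbers discussed above
    d_fwd = rtt / 2 + offset          # 95 ms
    d_bwd = rtt / 2 - offset          #  5 ms

    # Light in fiber is ~2e8 m/s, i.e. ~5 ms per 1000 km one way; real routes
    # are longer than great-circle distance (sqrt(2) is an optimistic factor).
    def min_one_way_delay_s(km):
        return math.sqrt(2) * km * 1e3 / 2e8

    print("implied forward %.0f ms, backward %.0f ms" % (d_fwd * 1e3, d_bwd * 1e3))
    print("optimistic minimum for 1000 km: %.1f ms" % (min_one_way_delay_s(1000) * 1e3))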


What can cause such delay in a network? In IP/MPLS, the routing 
typically does not care about the forward and backward directions being the 
same. Rather, they trim it to shed load, i.e. Traffic Engineering. 
That means that for a pair of nodes in that network, traffic can be sent 
over a shorter path in one direction and a longer one in the other. In 
addition, buffer fill levels can be high on a path, meaning that you end 
up at the end of a buffer for each router hop due to traffic load. Delay 
is a means to throttle TCP down in rate. Random Early Discard (RED) is 
meant to spread that evenly between streams to cause throttling earlier 
than dropping packets due to full buffers, but it still means dropping 
packets. That affects UDP traffic too. MPLS-TE then tries to work on 
that at a secondary level.


With that, depending on your actual distance, which I do not know, it 
becomes fuzzy whether the network or the servers have the asymmetry. If you 
have enough distance, then some of the time error cannot be allocated to 
network asymmetry, as the short path delay needs to be higher than that. 
That part then needs to be allocated to clock errors.


All this is a result of having three unknowns and two measurements; you 
cannot fully resolve that equation system. It needs aiding. Having the 
right time on one end does not help when one attempts to know the time 
error of the other end.


It would help if you could add observations from other locations near 
Gaithersburg, network-wise.



Is anyone else experiencing the same thing?


Which makes this question very relevant. Measuring with less of the 
bias and noise of the network may provide a clearer answer on the 
Gaithersburg servers.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: 5065A - A15 (PSU)

2021-12-03 Thread Magnus Danielson via time-nuts

Paul,

Mine isn't loud, but I can hear it and it annoys me. As I do not have 
many other things causing noise, one becomes aware of that constant 
tone. Then again, the frequency is high enough that it is relatively 
easy to locate it in the room. The thing is, the ear is at about its most 
sensitive to tones and noise around 3 kHz, so it's not strange to become 
"aware" of it. Me having a background in audio can be part of it, who knows.


Cheers,
Magnus

Den 2021-12-03 kl. 16:11, skrev paul swed:

Ulf pretty funny. I knew someone had developed a replacement integrator for
the 5065a. It was indeed you in 2017. Small world.
With respect to the inverter I will need to listen to my unit. I do not
recall anything very loud. But loud is different for every person. Like the
idea of eliminating the -20 inverter but that really means changing A3 and
A7 significantly. I had not realized that the DC bias to the photocell
actually comes from the input opamps output on A7. Something to learn every
day.
Regards
Paul
WB8TSL

On Thu, Dec 2, 2021 at 7:11 PM Shawn  wrote:


Let me second Paul's question.
I too am interested in the purchase of 2 of these boards and am happy
to contribute funds in the advancement of this project.
Shawn

On Thu, 2021-12-02 at 15:09 -0500, paul swed wrote:

Ulf, I would have replied off list. Are you intending to sell blank
PCBs? I was recently thinking about the same thing as I watched the
cfld move a bit. I know there is also an earlier thread on this subject
3-5 years ago.
Regards
Paul
WB8TSL
On Thu, Dec 2, 2021 at 11:15 AM Ulf Kylenfall via time-nuts <
time-nuts@lists.febo.com> wrote:

Since the A15 regulator board is mentioned...
I recently decided to try to improve the Voltage Regulator board A15 in
order to use a better voltage reference than CR5 (9V) and also see if I
could reduce the drift and noise of the Magnetic Field winding. After some
experimenting, I am reasonably satisfied.
The board has been completely redesigned.
The drift and noise of the +20V is reduced. Thermal drift compared to the
original design from the MagField circuitry is also reduced significantly.
Modern thru-hole components are used, facilitating manual assembly. A
total of 10 PCBs have been ordered (double side, solder mask, silkscreen,
plated holes etc.) but I did not opt for over-night delivery, so it will
be another couple of weeks for them to arrive. I will assemble a board for
myself and, unless covid makes it impossible, take the 5065A to my former
employer and evaluate the performance using standards available. If it
performs well, the design will be placed into "public domain".
Best Regards
Ulf Kylenfall
SM6GXV

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: HP 5065A, no 2nd harmonic.

2021-12-02 Thread Magnus Danielson via time-nuts

Hi Jared,

One of my 5065s has exactly this fault.

One approach is to rebuild with a new power inverter. Another is to 
rebuild the amplifier board so it does not need -20 V and can operate 
only on +20 V.


There should be DC-DC inverters, but I failed to grab a suitable 
pre-built board for it.


I would enjoy not having to hear the 3.5 kHz switching frequency in my lab.

Cheers,
Magnus

Den 2021-12-02 kl. 15:15, skrev Jared Cabot via time-nuts:

So it looks like I lost the -20V rail. Seems that A15Q10 (2N4234), A10CR9A (9V 
Zener), and A15CR9B (11V Zener) are all dead for some reason.
Q10 is easy enough to find, but the zeners are a little more tricky; they are 
specified as part number 1902-0247, a 20V, 1%, 0.4W +0.005%TC zener (20V made 
up of the 9v + 11V).
I can find a 5% 9V (1N936A), but the 11V is harder to find.

Maybe I should look into redesigning this board entirely, the transformer on 
mine is rather noisy anyway.

Schematics attached for reference.



Jared

‐‐‐ Original Message ‐‐‐

On Thursday, December 2nd, 2021 at 21:04, Jared Cabot 
 wrote:


Latest update.

Looks like I'm getting 2.76VDC out of A7J1 and I can't adjust it with A7R3. 
According to the manual, it should be less than 50mV. I was able to set this 
to around 0.5mV previously, so it did work at one point.

Q1 and Q2 test ok on my ebay component tester, IC1 is a selected uA741TC, but 
I'm not sure how to effectively test it.

(I attached the schematic, hopefully it shows up ok).

Jared

‐‐‐ Original Message ‐‐‐

On Thursday, December 2nd, 2021 at 18:57, Jared Cabot via time-nuts 
time-nuts@lists.febo.com wrote:


Well, it seems more digging is needed... I have 84uA Photo I coming from the 
RVFR on A7P1, but the front panel is at 14 (Should be 42 for my Photo I level)

Manual says look at Q1, Q2 and IC1 circuits. Anyone have any pointers or 
suggestions on what to check?

Jared.

‐‐‐ Original Message ‐‐‐

On Thursday, December 2nd, 2021 at 17:21, Andy Gardner, ZL3AG via time-nuts 
time-nuts@lists.febo.com wrote:


Don't give up now, Jared! You must be so close to getting that puppy running 
correctly.
On 2/12/21 9:16 pm, Jared Cabot via time-nuts wrote:

Well, yay...
I put the OCXO back in the 5065A, and got a good 2nd Harmonic reading. So I 
started going through the alignment procedures and somehow I now have no 2nd 
harmonic again, and I can't get it back...
I'm starting to lose interest in this thing, seeing as I have the Leo Bodnar 
GPS standard anyway that I don't have to sit for hours twiddling dials and 
poking things just to get it maybe sort of working.
I might give this one more chance before I sell it off to someone with a little 
more patience, or who prefers fixing gear instead of actually using it. I think 
I've had this thing working for all of a couple days in the months I've owned 
it, whereas the GPS standard is the size of a pack of cards and has never 
failed in the same time.
Jared.


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Rb standards (was: Project Great)

2021-12-02 Thread Magnus Danielson via time-nuts

Hi Attila,

Den 2021-12-02 kl. 00:10, skrev Attila Kinali:

On Mon, 29 Nov 2021 13:23:33 +
"Poul-Henning Kamp"  wrote:


The resonance we measure is not the real one, but one affected by
the doppler-shift of the atoms speed, ie: velocity without sign,
in the direction of interrogation.

Modern, laser based Rb standards can measure doppler-free
by selecting only those atoms that are in rest or only move
laterally to the laser beam. But, as far as I know, there
is no commercial product available that does that.

There exist at least two commercial products for that, but availability 
remains scarce so far: SDI and uClock. So, commercial products exist, but 
commercial availability may remain the issue. Hope that resolves itself soon.


With cold rubidium you have a cold, relatively unperturbed small ball 
of rubidium, not very different from that of a cesium fountain, but 
more compact.



I belive the primary source of drift in Rb's are adsorption and
absorption of Rb molecules onto and into the glass itself.  This
causes a drop of gas pressure, which changes the collision dynamics
for the remaining Rb gas, which affects their velocity distribution,
which again moves the "appearant resonance" we measure.

Largest sources of uncertainty for an Rb are:
* Temperature (affects Rb density and buffer gas pressure)
* Atmospheric pressure (affects pressure inside the glass cell)
* Light and microwave intensity variation (shifts the electron energies)
* Helium absorption (changes buffer gas composition)

Wall shift vs. buffer gas shift. Helium dissipation is part of that.

I have not seen any measurement of Rb absorption into glass
and its effect, so I cannot comment on that.


For gas cells, contamination on the glass gives both reduced pump 
light shift and worse S/N. That, at least, is known and understood.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Project Great

2021-12-02 Thread Magnus Danielson via time-nuts

Hi Michael,

Thanks for that plot. I think it summarizes fairly well what typical 
devices of today achieve.


There are rubidiums and quartz oscillators that go into the 1E-14s, but most 
do not. The old plot uses a "span" for the different technologies, and that 
helps to avoid the "but this device/product" type of issues.


The optical clocks are not really on there, but the modern cold Rb etc. 
should be there.


One clearly sees the benefit of CSOs, and it should not be too 
surprising to see that they are used in other clocks to improve 
performance in that range.


I clearly miss out on having CSOs in my arsenal, otherwise that plot 
illustrates much of my experience of measuring these devices in my lab.


Cheers,
Magnus

Den 2021-12-02 kl. 07:13, skrev Michael Wouters:

Hello Jim

Here is a relatively up to date plot that I made for the T course we give.
Labels are a bit misaligned courtesy of PowerPoint,
No CSAC, but there is an SDI cold Rb and a UAdelaide CSO.
Slightly different perspective to the Schmittberger et al plots that
Tom B referenced,

Cheers
Michael

On Mon, Nov 29, 2021 at 9:07 AM Lux, Jim  wrote:

On 11/28/21 1:06 PM, Marek Doršic wrote:

Hi,

  I'm also one of those fascinated by Project Great, and this was the 
project that inspired me to start with time-related stuff.

Can a Cs clock be avoided by a prolonged period at high altitude? Assume you can 
spend a month at the summit. Is e.g. the Rb drift stable enough to compensate 
and obtain viable results?

.marek


Sadly, no...  That's what the Allan Deviation tells you.  It says
"here's the best you can do, at this averaging time".  And a lot of
sources may have a low flat spot in the curve, but it eventually trends
up. Except for primary standards like Cs beam.

So if a Rb plateaus at, say, 1E-12, that's the best you're going to do.


Speaking of which, does anyone have a link to a "current state of the
art" graph.

Kind of like this one from Vig, but with specific new technologies like
CSAC etc

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Seeking advice: Is this the right way to check very short term (below 1s) stability?

2021-11-29 Thread Magnus Danielson via time-nuts

Erik,

Den 2021-11-28 kl. 19:19, skrev Erik Kaashoek:
As the collection of frequency sources and counters in my home lab is 
growing I'd like to understand the performance of the frequency sources.

Two different GPSDO do help to check long term stability.
But the Rubidium frequency standard I have (Accubeat AR60A) is fairly 
unknown and seemingly not of good reputation, more specifically its 
(very) short term stability is doubted.
So how best to check very short term  (below 1s) frequency stability. 
The frequency counters available lose resolution quickly when the 
gate time is reduced below 1 second and high performance phase noise 
measurement equipment is not available so google helped with a search 
for alternative measurement methods.
What I found was a method using two frequency sources, one of the two 
being  a VCO, a mixer and some filters and amplifiers.
By weakly locking (with a large time constant) the VCO source to the other 
source, using the mixer as phase detector, the output of the mixer's IF 
port should carry a voltage proportional to the phase difference in real 
time, and by filtering and amplifying it should be possible to check for 
variations in the 1 ms to 1 s range.

Maybe even a scope can see the variations.
When you know the amplification and the full range voltage you can 
even do an absolute measurement.

Would this method work?
Any specific concerns to take note of when doing the measurement?
Removing the DC component (or locking the VCO such that there is no DC 
component) will be crucial I guess but given the slow speed of the 
loop even an ADC->computer->DAC->VCO setup can work.

Any suggestion is welcome.


So, in that region one typically transitions into measuring phase noise, 
as for shorter taus your performance will be dominated by the wideband 
white noise, and counters aren't the best tool to analyze that.


The weak locking technique can be used up to a limit, but to get good 
results you need to calibrate it. I suggest you set up the loop in the 
analog domain and only digitize the residual noise. Then, using a 
synthesizer, inject side-tones onto your carrier with known relationships 
in amplitude and frequency offset, for which the phase noise is then known, 
and use that to build a calibration scale. This is described 
in the NIST T catalog. You can do that with a variety of sources, but 
eventually you will be limited by the noise of the other oscillator.
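
A minimal Python sketch of the last conversion step (assuming the
phase-detector gain Kd has already been obtained from whatever calibration is
used, e.g. the injected side-tone method described above; names and numbers
are illustrative only):

    import numpy as np

    def voltage_psd_to_L(f_hz, Sv_v2_per_hz, Kd_v_per_rad):
        """Convert the PSD of the mixer IF voltage to SSB phase noise L(f) in dBc/Hz."""
        Sphi = Sv_v2_per_hz / Kd_v_per_rad**2   # rad^2/Hz
        return 10.0 * np.log10(Sphi / 2.0)      # L(f) = S_phi(f)/2

    f = np.logspace(0, 4, 5)                    # 1 Hz ... 10 kHz
    Sv = np.full_like(f, 1e-14)                 # assumed flat 1e-14 V^2/Hz floor
    print(voltage_psd_to_L(f, Sv, Kd_v_per_rad=0.5))   # ~ -137 dBc/Hz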


I use a cross-correlator setup in the form of a TimePod most of the time, 
with quiet references.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Project Great

2021-11-29 Thread Magnus Danielson via time-nuts

Hi,

Den 2021-11-29 kl. 10:57, skrev Hal Murray:

j...@luxfamily.com said:

And a lot ofsources may have a low flat spot in the curve, but it
eventually trends up. Except for primary standards like Cs beam.

What's magic about "primary standard" or "Cs beam" that keeps the ADEV from
trending up?


"primary standard" is an overloaded term, so depending on context a 
particular product may suffice to be a "primary standard" in some 
context but not in others. In general a "primary standard" does not need 
external corrections, and some clocks will have less than perfect 
mechanisms for their systematic variations or drift, which does not 
covers the ADEV, as ADEV is not covering systematic properties but is 
only intended to cover random noise. For metrology contexts, "primary 
standards" is only a handful of cesium foutains to achieve frequency 
accuracy, where as the bulk of atomics clocks contribute stability (i.e. 
optimal ADEV).


In telecom, a "Primary Reference Clock (PRC)" or "Primary Reference 
Source (PRS)" ensures frequency within +/- 1E-11, which used to be what 
analog cesiums could achieve. Requirements have since progressed, 
especially for the phase as time is now an added.


So, in general, it's about the repetitive independent generation of 
phase, frequency and drift. Stability in terms of ADEV and TDEV then 
comes in as an orthogonal requirement.


I think you will find that IEEE Std 1139 and 1193 have further 
refinements as they come out of approval and publishing. The 1139 draft 
is now in the balloting process. We are still working on 1193. I also 
recommend having a look at the VIM and GUM documents as available from BIPM.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Project Great

2021-11-27 Thread Magnus Danielson via time-nuts

Hi Jim,

Den 2021-11-28 kl. 01:26, skrev Lux, Jim:


Not only is the CSAC low power (~ 100 mW) it's physically small, which 
is attractive for some applications (inside a cubesat, for instance).  
It used to be price competitive with a Rb, too ($1000-1500, as I 
recall) but now they're about $5k.  Microsemi also has the NAC (which 
is more a conventional Rb, but small)



Parameter          NAC1                  CSAC
Aging              3E-10/mo              9E-10/mo
                   1E-9/yr               10E-9/yr

ADEV               2E-11 @ 100 sec       2.5E-11
                   8E-11 @ 10 sec        8E-11
                   2E-10 @ 1 sec         2.5E-10

Phase noise        -86  @ 10 Hz          -70
(dBc/Hz)           -120 @ 100 Hz         -113
                   -138 @ 1 kHz          -128
                   -143 @ 10 kHz         -135
                   -148 @ 100 kHz        -140
                   -150                  floor

Max change         1E-9 (-20 to 65 C)    5E-10 (-10 to 35 C)

Power              1.2 W operating       0.12 W
                   1.8 W warm-up         0.14 W

A good comparison.

In addition, for some applications the weight of the CSAC is also very 
attractive.


I'm not saying that the CSAC is bad, but rather that one has to understand 
that it is a product intended for certain uses, and for those contexts 
it is a very interesting option; outside of those contexts, there 
may simply be better-suited options. It being "cesium" is not the magic 
keyword, though.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Project Great

2021-11-27 Thread Magnus Danielson via time-nuts

Hi,

There is an overemphasis on the atom being used, and especially on 
cesium, as that is what is used for the SI definition. However, actual 
implementation means actual physical devices, and the physical devices 
have a physics package, whose details will be important to the actual 
performance. Various atoms have been more or less well adapted to 
different types of physics packages. The beam type of device can be made 
to have very little perturbation, and cesium was well suited for 
that, while rubidium ended up being very well suited for the gas cell 
type. The CSAC is really a cesium-based gas cell, but the original 
benefit of rubidium-filtered optical pumping has been replaced with 
semiconductor lasers for pumping. Today both cesium and rubidium gas 
cells with the same mechanism exist. With a gas cell you get wall shift 
from atoms banging into the walls, but also gas shift as the buffer gas 
makes the atoms hit the buffer gas most of the time. With a bit of 
selection of the gas mixture, these can be made to balance each other.


So, what is the claim to fame for the CSAC? It's actually not its being 
cesium, but the stability it provides for the small amount of power it 
consumes. That's also where it finds actual applications. If you can 
afford more power, there are cheaper alternatives available.


Cheers,
Magnus

Den 2021-11-27 kl. 23:28, skrev Bob kb8tq:

Hi

The CSAC is not a cesium in the conventional sense. It is much closer to
a telecom Rb than anything else. There likely are telecom Rb’s that would
do a better job. Would they do a good enough job? …. likely not ….

Bob


On Nov 27, 2021, at 5:11 PM, Lux, Jim  wrote:

On 11/27/21 12:37 PM, Thomas Valerio wrote:

I think that Tom's GREAT adventure is kind of what sealed the deal making
me a time-nut or at least a time-nuts lurker, a lot of this stuff is still
little over my head, but I keep reading.

If anyone is inclined and has the clocks and the kids ( I don't have
either ), there is always Mount Evans and Pikes Peak, although you may
have to leave the clocks behind overnight.  Mount Evans is still on my
bucket list but without clocks and two or three days of time to monitor
them, I don't think I will be doing the Mount Evans edition of GREAT.  For
anyone that is flush enough to afford or can beg, borrow or steal access
to a Microsemi chip scale atomic clock, I think a Mount Evans edition
would be an awesome addition to Tom's original work.

Thomas Valerio


I don't think a CSAC would be good enough.

Tom's experiment was 22 ns out of 42 hours or about 1.45E-13. That's quite a 
bit smaller than a CSAC adev over that period.

There's a variety of roads that go to ~12,000 ft in Colorado, about ~10,000 in CA 
(Tioga Pass isn't closed yet), so you can get about 3x change, but still you're 
talking <1E-12.

Mammoth Mtn has a gondola to the top, but it's only 11,000. There may be a ski 
resort in CO that's higher.



For newcomers to time-nuts, Andy is asking about my DIY gravitational
time dilation experiment(s).

  > What am I missing?

It looks like you used the wrong value (or wrong units) for "h".

The summit of Mt Rainier is 14411 ft (4400 m), but the highest point on
Mt Rainier that is accessible by road is the Paradise visitors center at
5400 ft. Our house is at 1000 ft elevation so the net difference in
elevation of the clocks was 4400 ft (1340 m).

The clock(s) on the mountain ran fast by gh/c² = 9.8 × 1340 / (3e8)² =
1.5e-13. Fast clocks gain time. We stayed for about 42 hours so the net
time dilation was 42×3600 × gh/c² = 22 ns.
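
The arithmetic checks out; a short verification in Python with the same numbers:

    g, h, c = 9.8, 1340.0, 3.0e8            # m/s^2, m, m/s
    rate = g * h / c**2                     # fractional frequency offset, ~1.46e-13
    print("%.1f ns" % (rate * 42 * 3600 * 1e9))   # ~22 ns gained over the 42 h stay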



For more information see the Project G.R.E.A.T. 2005 page:

http://leapsecond.com/great2005/

Better yet, these two recent talks from 2018 and 2020 cover all 3 GREAT
experiments:





Lots of time nutty photos in both of those!

/tvb


On 11/27/2021 7:33 AM, Andy Talbot wrote:

Just been reading your adventures with 3 Cs clocks, a mountain and 3
kids,
but I can't make the estimate of time dilation work out.
You measured ~ 23ns and say it agrees with calculation

The equation quoted in a related reference, for "low elevations" is
g.h/c²
which if you plug in g = 9.81 m/s²  and h = 4300m for Mt Rainer gives
an
expected value of 4.7 * 10^-16.
Over 2 days, 2 * 86400s, that would be 81 ns in total, four times your
value

What am I missing?

Was just speculating what Ben Nevis at a mere 1340m height might offer

Andy
www.g4jnt.com
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe
send an email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Frequency Standard - Where Can I Get One.

2021-11-21 Thread Magnus Danielson via time-nuts

Hi,

So, on that note: I am surprised that I have not seen the popular telecom 
rubidiums being reverse-engineered. For instance, the LPRO-101 should 
have been reverse-engineered a long time ago. Some of the circuitry is known 
from patents, but those do not build up a complete schematic. I've 
considered doing the job, but apparently I have not been able to sit 
down and do that particular job either.


I think the LPRO-101 should not be too much of a challenge. Beyond the 
schematic, some documentation of the other functions, hints and tips, 
etc. should be written up, so one approaches something 
similar to a service manual.


With enough people contributing, I think it should not be too hard to 
collect things. We should be able to provide useful hints and tricks, 
such as suitable replacement components etc.


Cheers,
Magnus

Den 2021-11-21 kl. 19:45, skrev Bob kb8tq:

Hi

Well, if I could keep a 5065 running without repairs for more than a couple 
years
I might be more willing to agree with you. What makes the 5065 different is 
that you
have schematics and can do repairs. When the telecom gizmos die, there’s not 
much
to fall back on. They were designed to run a finite amount of time and then go 
to the
scrap heap.

Bob


On Nov 21, 2021, at 12:03 PM, Skip Withrow  wrote:

Hello Time-Nuts,

No offense Bob, but I would like to take issue with your statement 'Rb
standards have a finite life'.

There are time-nuts on this list of every skill and knowledge level
and I would like to keep the information as correct as possible.  My
feeling is this is not a true statement.

There is nothing inherent in the design of a rubidium frequency
standard that limits its life (unlike cesium).  However, there are
manufacturing choices that can possibly limit time before failure.

First example, of course, is the HP 5065.  There are many of us that
have units that have been running continuously for close to 50 years.
HP made choices in their bulb design that ensures that it runs for a
very long time.

An opposite example would be the Tracor rubidiums.  The lamps in these
units were either horribly underfilled, or the glass was very reactive
with Rb and almost all suffered from rather early lamp failures.

Then, there is the huge mass of telecom rubidiums.  As you stated,
keeping the base plate at a reasonable temperature goes a LONG way to
extended life.  Excessive temperature obviously leads to higher
component (and sometimes lamp) failure.

There are also units that just did not have enough design margin in
certain areas.  The SRS PRS-10 is one of these where I have seen
things go up in smoke in the lamp area.  BTW, the HP 5065 can have
some issues in this area as well.

I'm obviously a big fan of rubidium frequency standards.  My advice to
newer time-nuts is that you can't go wrong owning one (better long
term stability than OCXO, lots less cost and longer life than cesium).

I'll get off the soap box now.  Thanks for the bandwidth.

Skip Withrow
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] NIST Technical Note 2187 - A Resilient Architecture for the Realization and Distribution of Coordinated Universal Time to Critical Infrastructure Systems in the United States

2021-11-14 Thread Magnus Danielson via time-nuts

Fellow time-nuts,

NIST just published Technical Note 2187, which some of you will probably 
find interesting. It's available for free download here:


https://nvlpubs.nist.gov/nistpubs/TechnicalNotes/NIST.TN.2187.pdf

I think you will find quite a bit of interesting material in there. Just 
recall that the fine print says that mention of vendors and solutions is for 
technical completeness and is not any form of endorsement; other vendors 
and products exist that may be just as good or even better.


This is part of a larger context, but I think you will find a lot of 
interesting things in there.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: Can ADEV of a frequency source be correctly determined using a continuous time-stamping frequency counter?

2021-11-11 Thread Magnus Danielson via time-nuts

Hi,

I strongly recommend the HP AN200 series of application notes. Having 
those alongside Enrico's slides is a good start for those new in the field.


There is one variant of averaging, used in the HP5328A/B 
counters and described there, that is not covered in Enrico Rubiola's 
otherwise excellent set of slides.


A more updated version exists in the Rohde, Rubiola, Whitaker "Microwave 
and Wireless Synthesizers" book. The PDEV was not published in 2012, but 
came in 2014. PDEV is now included in the IEEE 1139 draft, going through 
the voting process.


Many modern counters have used the method of charging a capacitor and 
then reading that charge out, either as a pulse (Nutt interpolator) or 
through A/D conversion, such as in the SR-620, Pendulum/Fluke CNT-80/81/90/91 
and the Wavecrest DTS and SIAs. Modern FPGA-based tapped delay lines were 
used already in the HP5371A, and are now used in FPGAs and ASICs for higher 
resolution, and are for sure coming along strong in commercial and 
academic counters.


Cheers,
Magnus

On 2021-11-11 16:02, Erik Kaashoek wrote:

Lewis
The interesting source you referred to contained a slide deck which 
turned out to be a gold mine for a novice like me.

http://rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf
This should be on leapsecond
Many thanks
Erik.

On 10-11-2021 19:36, Lewis Masters wrote:

Hi,


Anders Wallin has details of a commercially available time stamping 
counter

that will do a proper ADEV measurement.  He also includes links to other
interesting sources.


Search for:

CONT vs RCON mode on the 53230A frequency counter
 - anderswallin.net

1604.05076.pdf (arxiv.org) 


Lew

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Can ADEV of a frequency source be correctly determined using a continuous time-stamping frequency counter?

2021-11-10 Thread Magnus Danielson via time-nuts

Rick,

"continuous count" as in counting/time-stamping each individual cycle 
forms a sample-rate limit. However, this is not what is meant with 
continuous conting today, as that is that you have a continuous 
time-stamping for some time-base. In that some number of counted cycle 
(+/- 1) occurs between each time-stamp. Unless one attempts to use 
time-base very near the maximum sample rate per second, it cease to be a 
practical concern as one does not want to miss samples.


I have a counter that can time-stamp at 10 MSa/s or 13.333 MSa/s 
depending on mode. I extremely rarely use it even close to that extreme, 
as the continuous counting I normally need is maybe up to 100 Sa/s.


Cheers,
Magnus

On 2021-11-10 01:53, Richard (Rick) Karlquist wrote:

Let me just mention that when I worked at the HP Santa Clara
Division counters section, they came out with a "feature"
that they called "continuous count".  However, it was limited
to something like 3 MHz.  So a 100 MHz counter would only
continuously count signals below 3 MHz.

So you need to verify for what bandwidth your specific counter
model is truly doing continuous count.

Rick N6RK

On 11/9/2021 2:29 PM, Magnus Danielson via time-nuts wrote:

Hi Erik,

On 2021-11-09 18:26, Erik Kaashoek wrote:
As far as I understood the ADEV at a Tau of 1 second is a statement 
about the amount of variation to be expected over a one second 
interval.
Rather, it is the variation of readings of a frequency estimate made over 
a span of 1 second.
It would be nice if we would be able to measure a frequency in an 
infinitely short interval, but any frequency measurement takes time.
Turns out that basic white noise and systematic noise will limit our 
frequency resolution to form a 1/tau limit slope, so an infinitely short 
interval will bury it well into that noise whatever we do.
What if the frequency counter does a complete measurement of a 
frequency source every second and all the variation within that 
second is hidden because of the "integration" that happens over the 
second?


That is what happens, but that is not what the ADEV is about; it's 
about the variation of these measures as we look at a bunch of them. So 
if we now have, say, 1000 of these frequency estimates, how much of the 
variation in these can be attributed to the random noise of the source? 
To analyse that, we need at least a tool like ADEV, since the standard 
deviation will not even converge for flicker frequency and random walk 
frequency noise modulation.
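
For a concrete picture of the tool being described, here is a minimal 
Python/numpy sketch of the overlapping Allan deviation computed directly 
from a series of fractional-frequency estimates taken at interval tau0. It 
follows the textbook overlapping estimator and is only an illustration; 
the simulated noise level is an arbitrary example value.

import numpy as np

def oadev(y, tau0, m):
    """Overlapping Allan deviation of fractional-frequency data y sampled
    at interval tau0, for averaging factor m (tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    # Averages of m consecutive frequency samples at every starting point.
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]              # pairs of averages spaced m apart
    return m * tau0, np.sqrt(0.5 * np.mean(d ** 2))

# Example: 1000 one-second frequency estimates with white FM noise.
rng = np.random.default_rng(0)
y = 1e-11 * rng.standard_normal(1000)
for m in (1, 10, 100):
    tau, dev = oadev(y, 1.0, m)
    print(tau, dev)   # falls roughly as 1/sqrt(tau) for white FM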


What ADEV actually aims to do is to provide a low-frequency 
spectroscopy method from a time when time-interval counters were about 
the only tool at hand, and even those were very rare. We now have a 
much wider palette of tools, but ADEV is relevant for how we measure 
frequency stability and a few other applications.



This is especially the case with continuous time-stamping counters.
They can provide a precise number by applying statistical methods to 
many measurements done during one second, but they cannot provide 
information exactly at the end of a second.
Is this kind of statistical measurement over a period of a second 
still valid for determining the ADEV at the Tau of one second of a 
frequency source?
Not for ADEV, but if you use an averaging counter you get the result of 
MDEV, and for a linear-regression / least-squares counter you get the 
response of PDEV. That is the result of the various statistical measures 
and then applying the ADEV processing on these frequency estimates. The 
upcoming IEEE Std 1139 revision, which is in the approval process now, 
includes language to reflect that.
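
To illustrate the two counter types mentioned, here is a minimal Python 
sketch of the two frequency estimators applied to the same block of phase 
(time-error) samples: an averaging-style estimate formed from phase 
averages at the two ends of the gate, which is the kind of estimate that 
yields the MDEV response when run through the ADEV processing, and a 
least-squares (linear-regression) estimate, which yields the PDEV 
response. The sample spacing, block length and noise level are arbitrary 
example values.

import numpy as np

def averaged_frequency(x, tau0, n):
    """Averaging-counter style estimate over 2*n phase samples x spaced
    tau0: average n phase readings at the start and n at the end of the
    gate, then difference over tau = n * tau0."""
    tau = n * tau0
    return (np.mean(x[n:2 * n]) - np.mean(x[:n])) / tau

def regression_frequency(x, tau0):
    """Least-squares (linear regression) counter style estimate: the slope
    of the best-fit line through the phase samples."""
    t = np.arange(len(x)) * tau0
    slope, _intercept = np.polyfit(t, x, 1)
    return slope

# Example: a 1e-10 fractional frequency offset buried in white phase noise.
rng = np.random.default_rng(1)
tau0, n = 0.001, 500
x = 1e-10 * np.arange(2 * n) * tau0 + 1e-12 * rng.standard_normal(2 * n)
print(averaged_frequency(x, tau0, n), regression_frequency(x, tau0))
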
Or should there be a correction factor depending on the method used 
in the frequency counter?


Yes, you then need to use the appropriate bias function for ADEV/MDEV 
and ADEV/PDEV to convert between these scales. Knowing the response 
of ADEV, MDEV and PDEV for a particular noise-type which is dominant 
at the tau of interest, you can readily convert between them by 
forming the bias functions.
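
The bias functions for the common power-law noise types are tabulated in 
the references mentioned in this thread; as a rough numerical cross-check 
one can also estimate the MVAR/AVAR bias ratio from simulated data of a 
known noise type, as in this illustrative Python sketch (textbook 
estimators computed from phase data, arbitrary example noise level). For 
white FM the variance ratio should tend toward roughly 0.5 at large 
averaging factors.

import numpy as np

def oavar(x, tau0, m):
    """Overlapping Allan variance from phase (time error) data x at
    tau = m * tau0."""
    tau = m * tau0
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]      # second differences
    return np.mean(d ** 2) / (2 * tau ** 2)

def mvar(x, tau0, m):
    """Modified Allan variance from phase data x at tau = m * tau0."""
    tau = m * tau0
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    s = np.convolve(d, np.ones(m), mode="valid")  # sums of m consecutive terms
    return np.mean(s ** 2) / (2 * m ** 2 * tau ** 2)

# Simulated white FM: white fractional frequency integrated into phase.
rng = np.random.default_rng(3)
tau0 = 1.0
y = 1e-11 * rng.standard_normal(100_000)
x = np.cumsum(y) * tau0

for m in (4, 16, 64):
    print(m, mvar(x, tau0, m) / oavar(x, tau0, m))   # approaches ~0.5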


You may find NIST SP-1065 a useful and handy tool, even if it does 
not cover the more recent work such as PDEV.


https://www.nist.gov/publications/handbook-frequency-stability-analysis

I tried to read some scientific studies on this subject but I am not 
smart enough to understand.

Hope one of you can provide some information.


It is scattered over a large number of articles, and quite a lot of 
folks get confused. Hopefully the updated IEEE Std 1139 will be of 
aid to you. It also has lots of useful references.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.



[time-nuts] Re: GPS D (i.e. gpsd) roll over bug..Oct 24th

2021-10-22 Thread Magnus Danielson via time-nuts

Chris,

On 2021-10-22 22:18, Chris Caudle wrote:

On Fri, October 22, 2021 12:22 pm, Magnus Danielson via time-nuts wrote:

I find it amusing, as this was discussed on ntp-questions email-list in
mid August 2013, and clearly explained with due references.

According to the problem report this problem was only introduced in 2019,
so seems to be a regression rather than a problem left uncorrected since
2013.

This is correct, but the bug re-introduces the 1024-week wrapping issue, 
which really should have a basic safety net around it.
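
For reference, the usual safety net against the 10-bit week ambiguity is 
to unwrap the broadcast week number against a pivot date known to be in 
the past, such as a build or release date. A minimal Python sketch of the 
idea (an illustration of the principle, not gpsd's actual code; the week 
numbers in the example are approximate):

# The L1 C/A navigation message carries only a 10-bit week number
# (0..1023), so it wraps every 1024 weeks (about 19.6 years). A receiver
# or library can disambiguate it against any date known to be in the
# past, e.g. its own build date ("pivot"), expressed as a full GPS week.
def resolve_gps_week(broadcast_week, pivot_week):
    """Return the full week number >= pivot_week that matches the 10-bit
    broadcast_week modulo 1024."""
    week = (pivot_week // 1024) * 1024 + (broadcast_week % 1024)
    if week < pivot_week:
        week += 1024
    return week

# Example: with a pivot in full GPS week 2180 (around October 2021), a
# broadcast week of 134 resolves to 2182 rather than 1158 or 134.
print(resolve_gps_week(134, 2180))   # -> 2182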


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: GPS D (i.e. gpsd) roll over bug..Oct 24th

2021-10-22 Thread Magnus Danielson via time-nuts

Björn,

The C/A code format has seen much greater changes before that. Its 
extension from 24 to 32 birds was not small, and neither was the addition 
of the GPS-UTC corrections.


Cheers,
Magnus

On 2021-10-22 21:29, Björn wrote:

”It's actually a system flaw that never was fixed in the L1 C/A code. It would have 
been good if they added additional GPS week bits in that signal, but it never 
materialized. It did for other newer signals.”

The L1 C/A was optimised many decades ago. How do you change the bitstream 
definition, considering several satellite generations running in parallel, some 
approaching 20 years since launch, and many years more since design?

How do you ensure that old and current working C/A-code receivers don’t turn 
into unusable crap?  (Including the systems they reside in)

System flaw... for a system designed now, yes. For one designed in the '70s and 
still running the show, no. Not in my view.


/Björn
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: GPS D (i.e. gpsd) roll over bug..Oct 24th

2021-10-22 Thread Magnus Danielson via time-nuts

Hi,

On 2021-10-22 17:58, Kevin Rowett wrote:

For those using GPSD (https://gpsd.gitlab.io/gpsd/index.html) to read GPS 
receivers and get the GPS time, there is a bug that could cause problems 
on Oct 24th.


https://us-cert.cisa.gov/ncas/current-activity/2021/10/21/gps-daemon-gpsd-rollover-bug



I find it amusing, as this was discussed on the ntp-questions email list in 
mid-August 2013, and clearly explained with due references. It seems 
that people did not act. There was a misconception that this was a 
"receiver error" for which they should not do any fixes. It's actually a 
system flaw that never was fixed in the L1 C/A code. It would have been 
good if they had added additional GPS week bits in that signal, but it 
never materialized. It did for other newer signals.


It seems nothing happened in those 8 years until the patching this year.

Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: HP Z3801A - Dead GPS Receiver - Oncore VP

2021-10-15 Thread Magnus Danielson via time-nuts

Hi,

Yeah, I've seen the Lucent timing pair setup, and shipping cost was for 
sure a factor in breaking it up into the modules and a cable.


Always nice to see how it all looked.

Cheers,
Magnus

On 2021-10-15 14:45, Bob kb8tq wrote:

Hi

There are ( or used to be ) pictures of the full up cell stations that stuff
like the 3801 and its cousins went into. Just how much searching it
would take to find them … who knows. My recollection of them is a
very boring bunch of faceplates in a couple of side by side custom
racks. Couple LED's here / connectors there, not a lot else.

Back in the era when LPro’s were $30, the outfit that was selling the
Lucent timing module with them in it would happily sell you the entire
setup, shelter and all.

Bob


On Oct 15, 2021, at 6:00 AM, Magnus Danielson via time-nuts 
 wrote:

Hi,

Well, designs from before SA was turned off did get better performance, even if 
not optimum for the new condition. As they were installed, that did not in itself 
prompt a change, but rather, as the equipment they were installed with was tossed, 
these receivers could be salvaged and used on their own. It would be neat to see 
how that installation actually looked, because it's pulled from some chassis and 
I've never seen how that setup really looked and was wired.

Other receivers were not integrated the same way, so they would have a longer 
operational life.

Cheers,
Magnus

On 2021-10-15 02:35, Bob kb8tq wrote:

Hi

The stability profile of the GPS timing signal changed significantly when SA 
was turned off.
Things like sawtooth correction that didn’t make much difference in the 1990’s 
now did
make a difference. Time constants and OCXO parameters that made sense “before” 
didn’t
make sense “after”.

More or less: When you make profound changes in the GPS timing signal, the best 
approach
to accurately recovering that signal has to change to match the new signal. 
Does everybody
change everything the next day? Of course not. It takes a while for folks to 
work out what’s
what with the “new rules”.

Bob


On Oct 14, 2021, at 8:12 PM, Hal Murray  wrote:


kb...@n1k.org said:

The other way to look at it: The Z3801 and its kin basically went obsolete
in 2000 when SA was turned off. Once that happened, the design approach
changed.

Could you please say more.  What changed in the design approach?

Can I tell the difference by looking at a box?  Or poking at the serial port?



--
These are my opinions.  I hate spam.


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: HP Z3801A - Dead GPS Receiver - Oncore VP

2021-10-14 Thread Magnus Danielson via time-nuts

Hi,

It kind of depends. The Z3801A was really crafted to meet the needs of 
CDMA mobile stations. That had its life-span. Other GPSDOs can sit for a 
very long time, and when they fail, much around them can have changed, or 
mostly, things have been added.


It used to be that GPSes could be installed with no real intention to 
upgrade. For some it was even questionable whether they could be upgraded 
during their lifetime at all, whereas others could maybe be upgraded in 
the field, or at least required the vendor's service organisation to be 
involved. Others were dead easy to upgrade in the field by the user. Very 
few firms still support their oldest devices, but it seems to be mostly 
because they can and they like the challenge. For some reason, being able 
to upgrade a unit in the field, remotely and in a secure way, and still 
have enough support to do it, has crept into requirements. I helped to 
push that. DHS published it and we just started an IEEE standard for it. 
A little bit of a side-track, but nevertheless, awareness has increased.


Cheers,
Magnus

On 2021-10-12 18:58, Bob kb8tq wrote:

Hi

If a cell tower is running for 5 years without an upgrade, that’s doing pretty 
well.
Ten years is an eternity in this case. Even for core network stuff, the 
“expected
lifetime” in the spec rarely makes it to 20 years and pretty much never goes 
past
that (in the spec.). Does the stuff last longer? In some cases it most 
certainly does.
Is the firmware still supported after X years? ….. h….

One way to “see” this is to take a look at the date codes on this gear as it 
shows
up on eBay. The 3801’s headed out into the field in the late 90’s and became a
“thing” for Time Nuts to buy and poke at by the early 2000’s.

How you factor in the delay between being pulled out of service in who knows
where, auctioned off, shipped to China, parted out, parts resold, and listed on 
eBay is
unclear. I’d bet they go by slow boat heading over there ….

Bob


On Oct 12, 2021, at 12:27 PM, Hal Murray  wrote:


kb...@n1k.org said:

I've run 3801's for years and years without ever power cycling them. Other
than power supply failure, they never had a problem. They did get detailed
monitoring pretty much all the time.

I was guessing that the reboot-every-few-months recipe was trying to dance
around the week number roll over issue.

Has anybody figured out where/when it writes whatever it needs so that it
comes up right on power up?

On the initial application (cell towers?), was there any expectation of
lifetime?  In particular were they expected to keep going over WNRO and/or was
there a difference between run over WNRO and spares sitting on the shelf
coming up after WNRO?

--
These are my opinions.  I hate spam.


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: HP Z3801A info needed - 1PPS interface

2021-10-10 Thread Magnus Danielson via time-nuts

Hi,

Yes, you need to modify it to operate with RS232 levels. If one goes to 
the now more ancient annals one finds the instructions. Some of the 
resources may be on the brink of dropping off the web.


It would be good if the many scraps of material were collected in a common 
place, such as a common Wiki, where we could cover the many topics and 
update common resources.


I remember I had to modify my Z3801A to get RS232 levels and then 
solder up a suitable cable. I now have one with a failed power supply 
and have not attended to that. Maybe in due time.


Cheers,
Magnus

On 2021-10-10 18:08, paul swed wrote:

I believe it is, as that's the alternate strap on the Z3801. It's also the way
Z3801s arrive, as RS-422.
Regards
Paul

On Sun, Oct 10, 2021 at 11:14 AM Joseph Gwinn  wrote:


On Sun, 10 Oct 2021 03:30:24 -0400, time-nuts-requ...@lists.febo.com
wrote:
time-nuts Digest, Vol 210, Issue 7


Date: Sat, 9 Oct 2021 15:18:08 -0700
From: ed breya 
Subject: [time-nuts] Re: HP Z3801A info needed - 1PPS interface
To: time-nuts@lists.febo.com

I have the 1 PPS circuit working just fine. The pulse width is around 27
uSec, nice and flat and strong regardless of the termination. I can't
discern the rise time or prop delay yet.

I discovered an interesting thing about the 1 PPS signals from the DB-25
connector. They are (or rather, one of them is) rather odd in voltage -
not PECL, except under certain conditions.

I hooked up one of the 1 PPS outputs to the circuit, just with a pair of
wires. This gave me a chance to make some measurements out in the open.
The comparator circuit worked fine, and once I got a good view of the
result, I started looking into the details. The first thing I found is
that the quiescent "low" value of the "1 PPS_1-" (J3 P17) rests at about
2.5 VDC - not PECL at all. The high side "1 PPS_1+" (J3 P9) seemed about
right, near 3.9 V. Uh oh - I thought maybe the port is damaged. I double
and triple checked the connections (they were right), then tacked some
wires on the number two port, pins 8 and 21.

They behaved exactly the same, so probably normal - or both burned out
the same way. So, I figured there must be some logic to this big
asymmetry. It couldn't be terminations to ground, since the 2.5 V one
could only go lower, so differential is the only kind that makes sense.
I tried various values across the lines, and sure enough, the 2.5 V
level rose substantially with decreasing R, but did not reach a "proper"
PECL low level until the differential load was around 50 ohms. The high
side changed only a little, indicating it goes right to the output of an
ECL part - if it was reverse terminated it would have dropped much more
with the loading.

So, it looks like these lines are connected to the outputs of ECL parts
(run as PECL), or maybe a simulation from some other kind of circuit. If
you picture each line being the emitter output, the high one is on most
of the time, and of proper level. You'd think the low one should still 
hold at PECL low, at some current into its load, but it doesn't. It 
could be that its load is made heavier, and to ground, on purpose,
drawing it down more. If it were terminated into a proper terminator
supply, it should be 2 V below Vcc, or 3 V in this case, so it couldn't
go to 2.5 V. Anyway, I understand what it's doing, but don't see why it
was made this way.
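
For what it's worth, if one did want to mimic a proper terminator supply 
sitting 2 V below Vcc with just two resistors, the usual trick is a 
Thevenin pair whose parallel resistance equals the line impedance and 
whose open-circuit voltage equals Vcc - 2 V. A minimal Python sketch, 
assuming Vcc = 5 V and a 50 ohm line (example values only, not a 
statement about what the Z3801A expects):

# Thevenin equivalent of terminating a (P)ECL line of impedance Z0 to a
# supply 2 V below Vcc, using one resistor to Vcc and one to ground.
def pecl_thevenin(vcc=5.0, z0=50.0, vtt_drop=2.0):
    vtt = vcc - vtt_drop                 # effective termination voltage
    r_to_vcc = z0 * vcc / vtt            # pull-up resistor
    r_to_gnd = z0 * vcc / (vcc - vtt)    # pull-down resistor
    return r_to_vcc, r_to_gnd

r1, r2 = pecl_thevenin()
print(round(r1, 1), round(r2, 1))   # about 83.3 and 125.0 ohms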

Just in case, I checked these levels under different conditions - fresh
power-up, locked, and hold modes, to make sure the common-mode levels
aren't changed for external signalling of conditions. They were constant
in all conditions.

Then I checked the signals on all the lines with a scope, directly
through coax. I tried a few different termination Rs, as shown below,
with the results.

When the pulse goes active, the high side drops, and the low side rises,
to roughly the same as the DC levels, so only the terminator value and
end levels are needed to get the picture. Remember, these are
approximate, from eyeballing a scope trace flash once a second.

Termination     High side / low side (V)
Open circuit    3.9 / 2.5
221 R           3.8 / 2.5
100 R           3.7 / 2.6
75 R            3.7 / 2.8
47 R            3.7 / 3.2

So, there's plenty of signal under all conditions, and I think it's just
a matter of picking a termination for whatever cable is used. I was
quite surprised by this oddity, but it seems to work fine with my
circuit no matter what.

BTW the two 10 MHz outputs there are also described as "pseudo-ECL," so
I'd imagine they have the same characteristics. I'll take a look when I
get a chance.

Ed

Can this be RS-422 from a 5-volt source?

.

Joe Gwinn
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send
an email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.



[time-nuts] Re: Rubidium oscillator : pack it in styrofoam or attach it to a heath-sink?

2021-10-05 Thread Magnus Danielson via time-nuts

Hi,

I agree with Bob, get a 5065 or similar. Another approach is just to 
realize how cheap telco rubidiums are and have a stockpile of 
replacements. Testing them is fairly straightforward.


Cheers,
Magnus

On 2021-10-06 00:10, Bob kb8tq wrote:

Hi

The whole device is magnetically shielded. If you mess with the “where things go”
you need to come up with new magnetic shields.

Since you are dealing with microwave this and microwave that, stretching out
the distance from the physics package to the “other stuff” is going to involve
some redesign of this or that. Even the VHF stuff that fires up the bulb 
probably
isn’t going to be “happy” with the leads moved here or there.

As you move this or that, you change the heat sinking on this or that oven. That
will get you into a bit of re-optimization and (possibly) re-heat sinking of 
things.

So: can you do it? Sure you can. The 5065 has some (but not all) of the 
electronics
pretty far away from the physics package. With enough effort and redesign you
could do the same thing.

My guess: It’s quicker / easier / cheaper / better results to find a 5065 that 
needs
a lot of TLC ….

Bob


On Oct 5, 2021, at 4:17 PM, paul swed  wrote:

This is a good discussion and has my brain cell working.
Totally agree on the temperatures in the filter and such.
But I have always disliked the temperature everything is running at in the
telco RB's.
So would it make sense to actually separate the boards and get them away
from the heat while leaving the hot items as is? The leads can be
lengthened, even the RF.
Regards
Paul
WB8TSL

On Tue, Oct 5, 2021 at 4:09 PM Magnus Danielson via time-nuts <
time-nuts@lists.febo.com> wrote:


Hi,

It's a complex picture, as it depends on the temperature of the components,
and other aspects such as RF intensity, sensitivity to this or that
change, line width changes etc. It also depends on the actual buffer
gas mix, which by the way changes over time due to leakage.

There are really three parts: the lamp, the filter and the detector cell.
Turns out that the filter and cell temperatures end up at about 65
degrees C; for some I've seen 73 degrees C. For the lamp, you end up
with something in the 110-120 degrees C range. Physically these two
temperature zones sit just next to each other, as the lamp needs to shine
straight into the microwave cavity of the cell, but the filter cell needs
the same temperature as the cell.

One can optimize the temperatures for the strongest signal, which sounds
like a good thing for S/N, one can optimize them for minimal sensitivity
to lamp or RF intensity, or you can optimize for low line width.
Depending on the conditions, you end up with somewhat different settings.
If it is easy to stabilize RF intensity, for instance, then one can relax
that optimization, and similarly for lamp intensity. Then you can push it
for a balance between line width (Q) and S/N. For others, this is not
feasible, for instance for simplicity/cost and/or size reasons.

Regardless, the temperatures of rubidiums are quite a different mess from
that of cesiums or hydrogen masers, and let me tell you that the
temperature of the latter is a mess I look at quite a bit at the moment.

Cheers,
Magnus

On 2021-10-05 17:45, Bob kb8tq wrote:

Hi

Rubidiums are somewhat unusual beasts. They typically have two heated
zones ( = two ovens) in them. One is a bit hotter than the other. Because
of the basic physics, those ovens are right next to each other / in
contact with each other.

If you go too crazy with the insulation, the “colder” oven will heat up
due to heat leakage from the “hotter” oven. You need a certain amount of
heat coming off the package to allow this to happen.

The bigger issue is that there is a pretty big batch of electronics near
the ovens in the typical telecom Rb. Unless you heatsink things pretty
well these parts heat up. When they do their MTBF drops quite a bit. You
save a couple of watts of heat (maybe) and lose the Rb after a year or
two. Not a great tradeoff.

Yes, there are a lot of different designs for lab grade Rb’s. There are
also some really tiny little guys running around. Neither category is all
that easy to get on the surplus market. If you want to dive into either
of those categories, there are issues, they just may not be quite the same.

Bob


On Oct 4, 2021, at 1:39 PM, Wim Peeters  wrote:

Insulation decreases the power consumption. But it will also increase
the temperature of the electronics.

A heat-sink will cool the electronics but will increase the power
consumption.

Or maybe insulate the part of the case that gets hot, and put a
heat-sink on the other parts?

Wim Peeters
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.

[time-nuts] Re: Rubidium oscillator : pack it in styrofoam or attach it to a heath-sink?

2021-10-05 Thread Magnus Danielson via time-nuts

Hi Paul,

So, sure, most of the electronics is actually better served away from 
the heat, but that is the compromise of the cheap telco rubidiums that 
need things to fit into a small space. Another aspect is that isolating 
them as you suggest can help reduce the amount of heat we need to 
produce, and with that the current through the transistors that burn 
energy for heating, which is another source of failure and a strain on 
MTBF in itself. At the same time, as you isolate, you need to leak more 
heat from the colder filter/cell side so that it can dump the heat from 
the lamp side and still have a regulating heater to maintain the 
temperature you want. So you need to understand the balances and keep 
everything there. Chopping up an LPRO like this is possible for sure. 
The LPRO also uses the temperature of the filter/cell block to stabilize 
the crystal, and the oscillator part needs to be more or less right 
there or you end up in trouble. If you have a couple, you can let most 
of the electronics be dead but at high temperature, and another board, 
kept cold, do what can be done at a bit of distance, if that is what 
makes you go. I have enough LPROs not to care, I have spares.


Hope it helps.

Cheers,
Magnus

On 2021-10-05 22:17, paul swed wrote:

This is a good discussion and has my brain cell working.
Totally agree on the temperatures in the filter and such.
But I have always disliked the temperature everything is running at in 
the telco RB's.
So would it make sense to actually separate the boards and get them 
away from the heat while leaving the hot items as is? The leads can be 
lengthened, even the RF.

Regards
Paul
WB8TSL

On Tue, Oct 5, 2021 at 4:09 PM Magnus Danielson via time-nuts 
<time-nuts@lists.febo.com> wrote:


Hi,

It's a complex picture, as it depends on the temperature of the
components, and other aspects such as RF intensity, sensitivity to this
or that change, line width changes etc. It also depends on the actual
buffer gas mix, which by the way changes over time due to leakage.

There are really three parts: the lamp, the filter and the detector
cell. Turns out that the filter and cell temperatures end up at about
65 degrees C; for some I've seen 73 degrees C. For the lamp, you end up
with something in the 110-120 degrees C range. Physically these two
temperature zones sit just next to each other, as the lamp needs to
shine straight into the microwave cavity of the cell, but the filter
cell needs the same temperature as the cell.

One can optimize the temperatures for the strongest signal, which
sounds like a good thing for S/N, one can optimize them for minimal
sensitivity to lamp or RF intensity, or you can optimize for low line
width. Depending on the conditions, you end up with somewhat different
settings. If it is easy to stabilize RF intensity, for instance, then
one can relax that optimization, and similarly for lamp intensity. Then
you can push it for a balance between line width (Q) and S/N. For
others, this is not feasible, for instance for simplicity/cost and/or
size reasons.

Regardless, the temperatures of rubidiums are quite a different mess
from that of cesiums or hydrogen masers, and let me tell you that the
temperature of the latter is a mess I look at quite a bit at the moment.

Cheers,
Magnus

On 2021-10-05 17:45, Bob kb8tq wrote:
> Hi
>
> Rubidiums are somewhat unusual beasts. They typically have two
heated zones ( = two ovens) in
> them. One is a bit hotter than the other. Because of the basic
physics, those ovens are right next
> to each other / in contact with each other.
>
> If you go too crazy with the insulation, the “colder” oven will
heat up due to heat leakage from the
> “hotter” oven.  You need a certain amount of heat coming off the
package to allow this to happen.
>
> The bigger issue is that there is a pretty big batch of
electronics near the ovens in the typical telecom
> Rb. Unless you heatsink things pretty well these parts heat up.
When they do their MTBF drops
> quite a bit. You save a couple of watts of heat (maybe) and
lose the Rb after a year or two. Not
> a great tradeoff.
>
> Yes, there are a lot of different designs for lab grade Rb’s.
There are also some really tiny little
> guys running around. Neither category is all that easy to get on
the surplus market. If you want
> to dive into either of those categories, there are issues, they
just may not be quite the same.
>
> Bob
>
>> On Oct 4, 2021, at 1:39 PM, Wim Peeters <peeter...@scarlet.be> wrote:
>>
>> Insulation decreases the power consumption.  But it will also
increase the temperature of the electronics.
>>
>> A heath-sink will cool the electronics but wi

  1   2   >