[time-nuts] Re: CHU transmitters off air for a while

2022-05-25 Thread John Franke via time-nuts
Thanks for the information/update!

John WA4WDL

> On May 25, 2022 at 5:48 PM "Joseph B. Fitzgerald via time-nuts" wrote:
> 
> 
> Was looking for CHU over the weekend and couldn't find it, nor any clear 
> indication of what was wrong on the Web.  Kind of unsettling for a time nut!
> 
> Found out today at https://nrc.canada.ca/en
> 
> "Due to the recent severe weather event in the Ottawa-Gatineau area, the CHU 
> time signal for shortwave radio has been impacted and is currently 
> unavailable. We are working to restore the service as quickly as possible. 
> Thank you for your patience. "
> 
> 
> 
> - Joe Fitzgerald KM1P


[time-nuts] Re: Noise down-converter project

2022-05-25 Thread ed breya via time-nuts
Thanks, Mike, for the info on LCR alternatives. It's good to know of others 
out there, if needed. I have an HP4276A and an HP4271A. The 4276A is the 
main workhorse for all part checking, since it has a wide LCZ range, although 
limited frequency coverage (100 Hz - 20 kHz). The 4271A is 1 MHz only, and 
good for smaller and RF parts, but has very limited upper LCR ranges. I think 
it works, so I can use it if needed, but I would have to check it out and 
build a proper lead set for it. I recall working on it a few years ago to fix 
some flakiness in the controls, so I'm not 100% sure of its present condition.


The main difficulty I've found in measuring small chokes is more of a 
probing/connection problem than an instrument limitation. For most things, I 
use a ground-reference converter that I built for the 4276A many years ago. 
It allows ground-referenced measurements, so the DUT doesn't have to float 
inside the measuring bridge. The four-wire arrangement is extended (in 
modified form) all the way to a small alligator-clip ground and a probe tip 
for the DUT connection, so there is some residual L in the clip and the probe 
tip, which causes some variable error, especially when attaching to very 
small parts and leads. When you add in the variable contact resistance too, 
it gets worse. Imagine holding a small RF can (about a 1/2 inch cube) between 
your fingers, with a little clip sort of hanging from one lead, and pressing 
the end of the probe tip against the other lead. All the while, there are the 
variable contact forces, the effects of the relative positions of all the 
pieces and fingers, and the stray C from the coil to the can to the fingers. 
I have pretty good dexterity, and have managed to make these measurements 
holding all this stuff in one hand while tweaking the tuning slug with the 
other.


I had planned on making other accessories, like another clip lead to go in 
place of the probe tip, but haven't built them yet. I also have the official 
Kelvin-style lead set that came with the unit, so that's an option that would 
provide much better accuracy and consistency, but the clips are fairly large 
and hard to fit into tight situations, and the DUT must float. Anyway, I 
could make all sorts of improvements in holding parts and hookup, but usually 
I just clip and poke and try to get close enough - especially when I have to 
check a lot of parts quickly.


The other problem is that the 4276A is near its limit for measurements below 
1 uH, with only two digits left for nH. The 4271A would be much better for 
this, with 1 nH vs. 10 nH resolution.


If I get in a situation where I need to do a lot of this (if I should 
get filter madness, for instance), then I'll have to improve the tools 
and methods, but I'm OK for now, having slogged through it this time.


Ed





[time-nuts] CHU transmitters off air for a while

2022-05-25 Thread Joseph B. Fitzgerald via time-nuts
Was looking for CHU over the weekend and couldn't find it, nor any clear 
indication of what was wrong on the Web.  Kind of unsettling for a time nut!

Found out today at https://nrc.canada.ca/en

"Due to the recent severe weather event in the Ottawa-Gatineau area, the CHU 
time signal for shortwave radio has been impacted and is currently unavailable. 
We are working to restore the service as quickly as possible. Thank you for 
your patience. "



- Joe Fitzgerald KM1P


[time-nuts] Re: Build a 3 hat timestamp counter

2022-05-25 Thread Magnus Danielson via time-nuts

Dear Hans-Georg,

The sooner you can supply ADEV and MDEV plots of phase data from your device, 
the sooner we can provide more detailed recommendations and point you to 
specifics. Until then, I will try to raise your awareness through dry 
analysis, before any measurement is available.


Please be aware that comparators are inherently slew-rate limited: the slew 
rate converts amplitude noise into time noise, which is then time-tagged by 
the time-tagger circuits. This follows the classic trigger-noise formula:


t_n = e_n / SR

where t_n is the time-noise, e_n is the voltage noise and SR is the 
slew-rate.
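
As a quick sanity check of the formula, here is a minimal numeric example in 
Python (the values are purely illustrative, not taken from this thread):

  e_n = 500e-6           # input-referred voltage noise, V rms (assumed)
  sr  = 50e6             # slew rate at the trigger point, V/s (50 V/us, assumed)
  t_n = e_n / sr         # trigger jitter, seconds rms
  print(t_n)             # 1e-11 s = 10 ps rms
  # A 6 dB pad halves the amplitude, hence halves the slew rate and doubles t_n:
  print(e_n / (sr / 2))  # 2e-11 s = 20 ps rms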


You can stress-test this using two methods:

1) Change the trigger voltage to move the trigger point, and thus the slew 
rate, along the signal shape. In practice this is mostly useful when you 
operate a scope, but it is very illustrative, as you can see the fuzziness 
increase and decrease due to the trigger noise.


2) Alter the amplitude of the signal, for example by inserting a 6 dB 
attenuator (pad), which halves the slew rate and therefore doubles the 
slew-rate-limited noise.


To complicate matters, any channel also has inherent noise of its own, and 
that comes on top of the slew-rate-limited noise. Varying the slew rate to 
separate the two effects is straightforward.
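
For example (a minimal sketch; the jitter numbers are made up), two 
measurements at different slew rates give two equations in the two unknowns:

  import math
  # t_tot^2 = t_fix^2 + (e_n / SR)^2 at each slew rate
  sr1, t1 = 50e6, 12e-12   # slew rate (V/s) and measured rms jitter (s), case 1
  sr2, t2 = 25e6, 21e-12   # with a 6 dB pad: half the slew rate, case 2
  e_n = math.sqrt((t2**2 - t1**2) / (1/sr2**2 - 1/sr1**2))
  t_fix = math.sqrt(t1**2 - (e_n / sr1)**2)
  print(e_n, t_fix)        # about 0.5 mV rms noise and about 7 ps fixed jitter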


So, with those basics on trigger jitter covered, let me complicate matters for you.

You can make the trigger process better than a bare comparator. There is a 
paper by Collins that covers it, and it has been investigated further since; 
see the contributions to the field available online by Bruce Griffiths, a 
fellow time-nut in New Zealand. The basic reasoning builds on four 
observations:


First, the noise of a circuit depends on the bandwidth of the input. Quite 
simply, for a 300 K noise source, how much noise voltage we get also depends 
on the bandwidth.


Second, the slew rate we can achieve is limited by the bandwidth, since the 
rise time is roughly the reciprocal of the bandwidth and the slew rate is 
rise-time limited.


Third, we can increase slew-rate by using gain.

Fourth, each gain-stage will add noise.

A line of reasoning similar to, though not quite matching, the Friis formula 
for noise figure is relevant here.


So, rather than going straight into a comparator, which is a high-bandwidth 
input with high gain, you can use multiple stages of amplification with 
successively increasing bandwidth to support the growing slew rate. 
Eventually the slew rate is so high that a straight comparator, or even a 
plain digital input, will not care.
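
Here is a minimal sketch of that Friis-like accounting (all gains, noise 
levels and the input slew rate are assumed for illustration, and the 
per-stage bandwidth limits are ignored):

  import math
  # Each stage: (gain, input-referred noise in V rms). Later stages' noise is
  # divided by the gain in front of them, so a quiet first stage buys slew
  # rate cheaply compared to hitting a noisy wideband comparator directly.
  stages = [(10.0, 100e-6), (10.0, 300e-6)]
  comparator_noise = 1.5e-3       # wideband comparator input noise, V rms
  sr_in = 5e6                     # slew rate of the raw input, V/s (5 V/us)
  gain, e2 = 1.0, 0.0
  for g, e_n in stages:
      e2 += (e_n / gain) ** 2
      gain *= g
  e2 += (comparator_noise / gain) ** 2
  print(comparator_noise / sr_in)     # bare comparator: ~300 ps rms
  print(math.sqrt(e2) / sr_in)        # with the gain chain: ~21 ps rms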


Another complicating factor is that quantization noise and random noise 
actually interact. I did an analysis of that and presented it at a 
conference. The paper is frankly unreadable and below my standards, but the 
lessons are important and I will have to revisit the topic. It can actually 
be good to have more noise than one quantization step if you do averaging. 
In fact, the HP5328A with Option 040, 041 or 042, or an HP5328B, will 
intentionally add noise to the signal to improve the precision as you 
average. The catalog claims 10 ps achievable resolution. It turns out to be 
better than that, because the claim assumed only a sqrt(N) benefit.
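
A minimal dither demonstration of that point (step size, offset and noise 
level are made up, and this is not the HP5328's actual scheme):

  import numpy as np
  rng = np.random.default_rng(0)
  q = 1e-9                  # quantizer step, 1 ns
  true_t = 0.3e-9           # interval we want to resolve, 0.3 ns
  N = 10000
  no_dither = np.round(np.full(N, true_t) / q) * q
  dithered  = np.round((true_t + rng.normal(0.0, 0.5e-9, N)) / q) * q
  print(no_dither.mean())   # stuck at 0.0 regardless of N
  print(dithered.mean())    # close to 0.3 ns, improving roughly as sqrt(N)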


Now, observe how I just contradicted myself between methods. This is, in Bob 
Camp's lovely terminology, an "it depends" issue. Which way to go depends 
very much on where your limits are and how you choose to approach the problem.


You mention that it may be good to separate things. I can state clearly that 
it is. When these signals go through the same chip, ground-bounce issues 
cause cross-talk. The cross-talk looks similar to capacitive coupling between 
the signals, and it is worst at the point of highest slew rate. It causes any 
signal that is nearby in time to shift its apparent time, producing a 
non-linear shift of the time difference between the signals. This will make 
the RMS error increase as measured over all phase relationships, and it looks 
like the oscillators are locking up, when in fact it is the measurement 
device that causes the imperfection.

One trick is to use a short piece of coax and simply time-stamp a delayed 
version of the signal independently. This way you can estimate the effect and 
its impact on your measurement. This usually leads to work on the counter to 
reduce the coupling. Notice that this non-linearity increases with increased 
slew rate. Care in how traces and their ground-image return paths are routed, 
decoupled, etc. can significantly help to reduce it. Also, decreasing the 
slew rate where it is not needed helps. Again, this is contrary to what one 
might expect.


The magical trick of mixing with an offset signal is that you subtract 
frequency but maintain the phase difference, which means the time difference 
gets amplified. This is the core of the DMTD measurement. What bites you is 
that the slew rate is reduced by the same factor. But sure, combined with the 
other tricks you can still gain something. It can be worth testing.
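
A quick numeric check of that scaling (the frequencies and the counter 
resolution below are assumed for illustration):

  nu0     = 10e6       # input frequency, Hz
  nu_beat = 10.0       # beat frequency after mixing against an offset LO, Hz
  k = nu0 / nu_beat    # time-magnification factor, here 1e6
  t_counter = 100e-9   # single-shot resolution on the beat, s
  print(t_counter / k) # effective 0.1 ps resolution on the 10 MHz signals,
                       # while the beat's slew rate is also k times lower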


Now, with the high-speed ADCs you can approach this differently, convert 
the IQ samples through arctan, and then decimate the data to arb

[time-nuts] Re: Build a 3 hat timestamp counter

2022-05-25 Thread Hans-Georg Lehnard via time-nuts
Thanks for your answer and the many suggestions for what can be improved. 

The first picture shows my concept for a prototype. 

The input shaper consists of a 4:1 transformer with differential output and a
TLV3501 comparator. The digital part, with divider and start/stop logic, fits
for all 3 channels into an XCR3064XL CPLD. Maybe it is better to separate the
channels later. The MCU is an STM32H743 and runs at 450 MHz (PLL), clocked
from the reference frequency. A TDC7200 is used as the TDC. The measurement
with SPI readout takes about 5 µs, so I decided on a 10 µs (100 kHz) sample
time. 

The second picture shows the measurement timing inside the CPLD. 

The TDC7200 runs in mode 1 and supplies only the fine time. The MCU runs a
10 MHz (reference) counter with 3 capture channels as the coarse time, so I
only have to read the fine time and the calibration register from the TDC. 
The TDC cannot measure from 0, so one reference cycle is added
(t = x + 100 ns). 

For the averaging I had thought of a linear regression. 
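
As a rough sketch of that scheme (the helper names and the regression detail 
are illustrative assumptions, not a finished design):

  import numpy as np
  F_COARSE = 10e6           # coarse counter clock, Hz
  T_REF_OFFSET = 100e-9     # deliberately added reference cycle, s

  def combine(coarse_count, fine_time_s):
      # Full timestamp: coarse capture count plus TDC7200 fine time,
      # minus the reference-cycle offset.
      return coarse_count / F_COARSE + fine_time_s - T_REF_OFFSET

  def regress_block(timestamps, nominal_period):
      # Least-squares phase and fractional-frequency offset of one block of
      # event timestamps, used instead of a plain average.
      n = np.arange(len(timestamps))
      resid = np.asarray(timestamps) - n * nominal_period
      slope, intercept = np.polyfit(n, resid, 1)
      return intercept, slope / nominal_period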

Hans-Georg 

On 2022-05-25 01:18, Magnus Danielson via time-nuts wrote:

> Hi,
> 
> The first limit you run into is the 1/tau slope of the measurement setup. 
> This is often claimed to be white phase modulation noise, but it is also the 
> effect of the single-shot resolution of the counter, and the actual slope 
> level depends on the interaction of these two.
> 
> So, you might want to try a simple approach first, just to get started. 
> Nothing wrong with that. You will end up wanting to do better, so I will try 
> to provide a few guiding comments on things to think about and improve.
> 
> So, in general, try to use as high an input frequency as you can, so that 
> when you average down, f/f0 is as large as possible: the benefit is a factor 
> of 1/sqrt(f/f0), where f is the oscillator frequency and f0 is the rate 
> after averaging. For example, averaging 10 MHz timestamps down to a 1 kHz 
> rate gives f/f0 = 10,000, i.e. roughly a hundredfold noise reduction.
> 
> As you do ADEV, the f0 frequency will control your bandwidth.
> 
> The filtering effect of averaging as you reduce and sub-sample helps to some 
> degree with anti-aliasing, but rather than plain averaging, consider doing 
> proper anti-aliasing filtering: the effect of aliasing on these measures is 
> established, and improvements in the upcoming IEEE Std 1139 reflect this. In 
> short, aliasing folds the white noise down, and straight averaging tends to 
> be a poor suppressor of aliased noise.
> 
> For white phase modulation (WPM) the expected ADEV response depends linearly 
> on the bandwidth of the measurement filter. It is often modelled as a 
> brick-wall filter, which it never is. For classical counters the input 
> bandwidth is high; the sampling rate then defines a Nyquist frequency, but 
> wide-band noise simply aliases around it. An anti-aliasing filter helps to 
> reduce or even remove the effect, and the bandwidth of the anti-aliasing 
> filter then replaces the physical channel bandwidth. If the anti-aliasing is 
> done digitally after the counter front-end, you have already picked up some 
> aliasing wrap-around, but keeping that raw rate as high as possible keeps 
> the number of overlaid spectral images low, and reducing the rate with 
> proper filtering afterwards will get you a better result.
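
A minimal sketch of that aliasing point (an illustration under assumed 
numbers, not taken from either post: 100 kHz raw phase samples decimated to 
1 kHz, with a spur above the output Nyquist):

  import numpy as np
  from scipy.signal import decimate
  fs, q = 100_000, 100
  n = np.arange(10 * fs)                                        # 10 s of data
  x = 1e-12 * np.random.default_rng(1).standard_normal(n.size)  # white phase noise
  x += 1e-9 * np.sin(2 * np.pi * 30_400 * n / fs)               # spur at 30.4 kHz
  avg = x[: n.size // q * q].reshape(-1, q).mean(axis=1)   # plain block average
  fir = decimate(x, q, ftype="fir", zero_phase=True)       # FIR anti-alias filter
  for y in (avg, fir):                # amplitude of the alias folded to 400 Hz
      freqs = np.fft.rfftfreq(len(y), 1 / 1000)
      spec = np.abs(np.fft.rfft(y)) / len(y)
      print(spec[np.argmin(np.abs(freqs - 400))])
  # The boxcar's sinc sidelobes pass noticeably more of the aliased spur than
  # the FIR decimator does.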
> 
> For aliasing effects, see Claudio Calosso of INRIM. Great guy.
> 
> This is where the sub-sampling filter approach is nice: a filter followed by 
> sub-sampling removes the need to produce every output at the original sample 
> rate, so the filter processing can operate at the sub-sampled rate.
> 
> As your measurements go to higher taus in ADEV, the significant part of the 
> ADEV power will be well within the pass-band of the filter, so just making 
> sure you have a flat pass-band avoids surprises. For shorter taus the 
> anti-aliasing filter will be dominant, so assume the first decade of tau to 
> be waste.
> 
> I say this to guide you to get the best result with the proposed setup.
> 
> The classical three-cornered hat calculation has a limitation in that it 
> becomes limited by noise and can sometimes produce unstable (even negative) 
> variance estimates. The Groslambert analysis is more robust, since it is 
> essentially the same as doing a cross-correlation measurement. The key is 
> that you average down before squaring, whereas the three-cornered hat 
> squares early and cannot suppress the noise of the other sources as well. 
> For Groslambert analysis, see François Vernotte's series of papers and 
> presentations. François is another great guy. I spent some time discussing 
> the Groslambert analysis with Demetrios the other week. I should say that 
> Demetrios is a great guy too, not to single him out, but he really is.
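
A minimal sketch of the contrast (an illustration for the basic tau only; 
x_ab means the measured phase of clock A against clock B, in seconds):

  import numpy as np

  def avar(x, tau0):
      # Non-overlapping Allan variance at tau0 from phase data x
      d2 = np.diff(x, 2)
      return 0.5 * np.mean(d2**2) / tau0**2

  def three_cornered_hat(x_ab, x_ac, x_bc, tau0):
      # Squares each baseline first, then combines; can go negative when
      # the noise of B and C dominates.
      return 0.5 * (avar(x_ab, tau0) + avar(x_ac, tau0) - avar(x_bc, tau0))

  def groslambert(x_ab, x_ac, tau0):
      # Cross product between the two baselines that share clock A; the
      # uncorrelated noise of B and C averages out before squaring.
      return 0.5 * np.mean(np.diff(x_ab, 2) * np.diff(x_ac, 2)) / tau0**2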
> 
> There is another trick up the sleeve, though. If you do the modified Allan 
> deviation (MDEV) processing, it effectively integrates the averaging trick 
> into the measure, achieving a 1/tau^1.5 slope for WPM. This will push it 
> down quicker if you feed it a high enough sample rate, so that you reach the 
> flicker phase-modulation slope (1/tau), the white frequency modulation