Daniel,
It looks like we agree that backplane clock distribution is the best way
to go, /so long as/ we can overcome the phase noise/drift concerns
that we've discussed -- which isn't a given. We've probably taken this
discussion about as far as we can without having test data to see how
well we can get the backplane clock to work in practice. We (Oxford) are
happy to have a go at getting this data and, as I mentioned, are
currently designing hardware to that end. Once we've got designs ready
for the test hardware, we'll post them here along with a rough
description of the tests we plan to perform.
Thanks for the detailed response to my points. Some comments below...
Best,
Tom
On 08/04/2016 21:22, Slichter, Daniel H. (Fed) wrote:
2) Distributing the Wenzel oscillator's signal without degrading its
phase noise is going to be pretty tough.
For this I would suggest that the distribution be done with passive
microwave splitters tested for low phase noise (obviously this doesn’t
scale forever), but these will do you better than an IC for sure.
Mini-circuits .085 tinned flex cable (with the blue plastic jacket)
has sufficiently good phase noise performance to be used for the
cabling between these components and the cards (as tested by the phase
noise measurement group here at NIST, that is what they use).
Yes, a passive power splitter and high-quality cable is likely to be the
best way of distributing a single-frequency DAC clock to a small number
of boards. Obviously, after a few boards one starts to need amplifiers,
at which point it's not clear to me that this scheme outperforms using
nice clock fanout buffers. But, I take your point that passive
distribution makes it relatively easy to get a small-scale
high-performance system running, which is a significant benefit.
For example, the best clock distribution IC that I know of is the
HMC830. Looking at figure 22 of its data sheet suggests that even this
IC will degrade the Wenzel oscillator's phase noise considerably. In
general, clock switches tend to be far worse than this (which is one
reason that I'd really rather not put a switch in the clock path to
select between the front-panel and backplane clocks). I wouldn't trust
a switch/distribution IC to work at those phase noise levels unless
its data sheet explicitly gave a phase noise plot.
Yes, a clock switch might degrade things, and this is important to
consider. We might be able to get around that by not running the
“nice” clock through the clock switch, but including a passive
component (e.g. a hybrid or splitter) through which both clocks reach
the DACs, and then putting a switch on the not-so-nice clock upstream
of the splitter to enable you to turn it on or off. A bit of a
workaround but I think totally reasonable given the hardware required
to make and distribute the external clock anyway. You could also do
something as dumb as having a clock output and clock input on the
front panel; you connect them together to use the internal clock, or
connect in an external clock to use that.
These are options. I'd probably avoid having a clock input + output on
the front panel of each FMC board, as then you'd need (e.g.) an SMA for
the upconverter LO + 2 MMCX for the DAC clock input/outputs, which
starts to take up a lot of space. My guess is that a really nice clock
switch like an ADCLK950 (50fs/K propagation delay TC) will significantly
outperform the DAC and won't degrade the system stability/noise...
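As a quick sanity check of the clock switch argument above, here is what the quoted 50 fs/K propagation delay tempco of the ADCLK950 works out to in phase terms. The 2.4 GHz DAC clock frequency is an assumption (a figure used elsewhere in this thread), not an ADCLK950 spec:

```python
# Convert a propagation delay tempco (s/K) into a phase tempco (deg/K)
# at a given carrier frequency. 2.4 GHz is assumed, not from the
# ADCLK950 datasheet.

def delay_tc_to_phase_tc(delay_tc_s_per_k, f_hz):
    """Delay drift per kelvin -> phase drift per kelvin, in degrees."""
    return delay_tc_s_per_k * f_hz * 360.0

phase_tc = delay_tc_to_phase_tc(50e-15, 2.4e9)
print(f"{phase_tc:.4f} deg/K")  # ~0.043 deg/K at 2.4 GHz
```

At that level, a few kelvin of temperature swing costs roughly a tenth of a degree of RF phase, which is the sense in which the switch should sit below the DAC's own drift.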
A similar remark goes for the ERA-4SM amplifier you suggest using on
the DACs output. For close-in phase noise, amplifier noise figure is
meaningless. In general, for work where phase noise really matters, no
amplifiers should be used unless their datasheet actually gives a
phase noise plot -- and I certainly wouldn't trust a cheap MCL
"general purpose" Darlington amp with my Wenzel oscillator without
some careful measurements!
Agreed that amplifier noise figure is meaningless for most things! In
this case, however, I had the NIST phase noise measurement group
actually test the ERA-4SM (and compare it to another minicircuits amp,
the PMA-545, which claims a lower noise figure on the spec sheet).
Turns out the ERA-4SM has quite superb phase noise, and is far
superior to the PMA-545 as well. Tests were done at 100 MHz, and it
appears that the ERA-4SM reaches a white noise phase floor of about
-175 dBc/Hz by 10 kHz, and the 1 Hz phase noise is about -145 to -150
dBc/Hz. See the attached plot. Both I and the phase noise
measurement folks were pleasantly surprised! We use this amplifier as
the gain stage after all our DDS chips, including for the clock folks.
Thank you for posting that data, that really /is/ a pleasant surprise --
I thought one only got that level of performance from relatively
specialist amplifiers. But, as David Allcock used to tell me, "sometimes
we all need our BS detectors recalibrated".
3) The DAC's phase noise contribution is going to be worse than the
locked VCO's noise (compare the 100MHz trace on figure 33 of the
AD9154 data sheet to the VCO datasheets) for all frequencies above
1kHz. Below 1kHz, the locked VCO's noise should be reduced close to
the level of the 10MHz reference source, which could be a Wenzel
oscillator if one wants. FWIW, the situation is the same even using
something like an AD9914, which is about the lowest noise digital RF
source I've seen.
This is a major issue that needs to be examined with the AD9154. Joe’s
discussions with AD indicated that the PLL on the DAC is lousy, which
can be seen from the datasheet, but if one does not use the internal
PLL the phase noise is still higher than optimal (see figs 33 and 34
on datasheet). On my lengthy list of things to do is to take some of
my own phase noise measurements on the AD9154, channel to channel and
chip to chip, just to double check.
If you have a couple of test boards up and running, would it be easy to
take some data on the propagation delay stability of these DACs at the
same time? This would be a really useful reference point for thinking
about how stable the clock distribution network needs to be -- there
isn't much point worrying about the temp. co. of clock buffers/PLLs if
they are small compared with the DAC...
The AD9914 DDS chip has substantially better phase noise than the
AD9154 DAC, but as you say this is about the best one can find. We
can’t use the AD9914 for this anyway, as you can’t make arbitrary
waveforms at the full data rate and it’s about 4x the cost per channel.
Agreed, the AD9914 isn't an option for many reasons. My point was just
that the VCO phase noise can be good even compared with a
state-of-the-art DDS, not just compared with the DAC...
4) In practice, I think that very low-level phase noise isn't going to
be the thing that stops us all getting gate errors <1e-6. For example,
I'd guess that duty-cycle-dependent thermal amplitude/phase transients
in the DAC/amplifier will be far more of a problem in practice.
Obviously, that's not a reason to be sloppy about the clock noise, but
it does mean that it's probably not worth complicating the external
wiring by introducing an external oscillator just to save a few dB...
Agreed, and yes, thermal control is going to be very important here as
well. We would have to do a thorough study (which will of course be
dependent on component selection and proposed cooling architecture as
well) to determine the level at which reference phase noise is no
longer the leading error term. However, since the plan for these
cards is that some of them might be in “mini-crates” with only one
slot, to allow for better environmental control or to place closer to
the experiment, I just want to make sure that we are not hamstringing
ourselves with needlessly poor reference phase noise.
Out of curiosity, what is the plan for thermal management (e.g. the DACs
put out a fair amount of heat)? Is this something you've thought about
much yet?
6) The only major benefit of the 1GHz clock that I can see is that one
is 20dB (40dB) less sensitive to pickup on the backplane compared with
100MHz (10MHz) reference due to the lower multiplication factor. However:
a) I'd expect this improvement to be at least partially cancelled out
by the fact that pickup/channel-channel cross-talk will generally be
worse at higher frequencies.
b) At higher frequencies the backplane losses can start to become
significant. To achieve optimum phase noise performance from a clock
buffer, one has to be careful to maintain a good input level (6dB can
make a difference here).
c) In practice, it's likely to be far more important to consider how
much noise we expect at the different frequencies; the big win will
come from putting the clock at a frequency which nothing else in the
backplane runs at. It's not clear to me that 1GHz is better from this
perspective than 10MHz (see point 7).
d) In any case, I expect (hope) that backplane cross-talk won't
actually be an issue at all if we design our loop filters properly and
think about what signals we send down the backplane (again, see point 7).
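The 20 dB / 40 dB figures in point 6 come straight from the phase noise multiplication rule: multiplying a reference by N raises its phase noise (and any pickup riding on it) by 20·log10(N) dB. A one-liner makes the comparison, assuming a 1 GHz DAC clock as the end point:

```python
import math

# Penalty (in dB) from multiplying a reference frequency up to the
# DAC clock: 20*log10(N), where N is the multiplication factor.

def multiplication_penalty_db(f_ref_hz, f_out_hz):
    return 20.0 * math.log10(f_out_hz / f_ref_hz)

for f_ref in (100e6, 10e6):
    db = multiplication_penalty_db(f_ref, 1e9)
    print(f"{f_ref/1e6:.0f} MHz ref -> 1 GHz: +{db:.0f} dB")
# 100 MHz ref -> 1 GHz: +20 dB
# 10 MHz ref -> 1 GHz: +40 dB
```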
Again, 1 GHz is just a straw man because the math was easy. Your
points are all well taken; the point I wanted to make is that one may
be able to do a lot better than a 10 MHz reference multiplied up.
There is a lot of good stuff available for 100 MHz as well, for
example, and I think that would probably be a win. You can use a nice
100 MHz
Sorry, I wasn't trying to criticise use of exactly 1GHz. My point was
just that it's nice to be roughly in the 10MHz-100MHz range for a
variety of technical reasons. I don't have a strong opinion about
whether one uses 10MHz or 100MHz, or anything else sensible in that kind
of range...
If the residual cross-talk on the "cleaned" 100MHz signal turns out to
be an issue when using a 10MHz reference frequency, it may be worth
switching to a 100MHz reference frequency. My preference for 10MHz
isn't that strong, and just stems from the fact that 10MHz is so
ubiquitous and there is so much high-quality 10MHz kit around. But,
one could always distribute 10MHz between racks and put a 100MHz VCSO
+ PLL on the MCH to generate the 100MHz system clock.
7) Okay, the next point is even further outside my area of expertise
than the rest of this email, so feel free to call BS on this one, but:
it's important when choosing the backplane clock frequency to consider
the spectrum of the noise we're trying to avoid. If we encode all data
going down the backplane using (e.g.) 8b/10b then the signals should
be contained in the range f_clk/2 to f_clk/10, where f_clk is the
digital communication clock frequency. The upper frequency here is
obtained for data that transitions on every clock cycle, while the
lower limit is for data that consists of a run of 5 0s or 1s (the
longest allowed by the encoding). As a result, there should be
effectively no noise below f_clk/10.
My understanding is that 8b/10b and other such schemes will remove the
dc component, but I don’t think you actually get a sharp turn-on at
f_clk/10. Each 10-bit code word has a “disparity”, essentially a net
DC offset, of 0, 2, or -2. The “running disparity” of the code is
either +1 or -1. If it is -1, your next code word has to have
disparity 0 or +2, and if it is +1, the next code word needs to have 0
or -2. This means you can have spectral components considerably below
f_clk/10, because some 8b data values are only encodable with
disparity 0, and others only with +/- 2. Then you could get a signal
with 10b code disparities of (for example) 2, 0, -2, 0, 2, …, which
would have frequency content at f_clk/40, for example. The
probability of this gets lower for lower frequencies because it relies
on particular types of data patterns occurring, but it's not like there
is a hard wall at f_clk/10. Below is a figure showing PSD for a
pseudo-random bit stream assuming a 10 GHz clock, from
https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-16-10-7279&id=159474.
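The qualitative point above (suppressed but nonzero low-frequency content) can be seen even in a crude model. The sketch below is NOT a real 8b/10b encoder: it just constrains a random ±1 stream so its running sum stays within a small bound, mimicking 8b/10b's bounded running disparity, and compares the low-frequency power against an unconstrained stream:

```python
import numpy as np

# Toy model: a +/-1 bit stream whose running digital sum is clamped to
# a small bound (8b/10b's max run length is 5, so its running
# disparity stays similarly small). A bounded running sum forces the
# spectrum towards zero at DC, but there is no hard wall at f_clk/10.

rng = np.random.default_rng(0)
n = 1 << 16

def bounded_stream(n, bound, rng):
    raw = rng.choice((-1, 1), size=n)
    out = np.empty(n)
    rds = 0
    for i in range(n):
        b = raw[i]
        if abs(rds + b) > bound:  # would breach the bound: flip the bit
            b = -b
        rds += b
        out[i] = b
    return out

free = rng.choice((-1.0, 1.0), size=n)   # unconstrained stream
bal = bounded_stream(n, 5, rng)          # disparity-bounded stream

psd_free = np.abs(np.fft.rfft(free))**2 / n
psd_bal = np.abs(np.fft.rfft(bal))**2 / n
lo = slice(1, n // 200)  # lowest ~0.5% of the band, well below f_clk/10
ratio = psd_bal[lo].mean() / psd_free[lo].mean()
print(ratio)  # well below 1, but not 0
```

Consistent with the plot you posted: the coding high-passes the spectrum rather than cutting it off sharply.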
Sorry, that point was tacked on to the end of a long email without
proper consideration. I had thought that 8b/10b was a bit better than
this, but the simulations I did were done in a rush, so it's very
possible I made a mistake... I replied to one of Greg's emails with some
further musings containing equally rough numbers. Don't take any of
these quantitative claims too seriously for the time being. My point
really boils down to this: while the backplane wasn't designed to
distribute ultra-low phase noise analogue clocks, and the cross-talk can
be significant, it doesn't look like it needs to be a show stopper. We
are only trying to send a relatively small volume of data, and have the
option of using a multi-Gbps link. This gives us a lot of options in
terms of spreading the noise power out/minimising the power at our
reference frequency. If we do this properly, the numbers actually don't
look /so/ bad and I think we may be able to get the cross-talk noise
below the DAC's noise floor. We'll have a proper think about this once
we've got our test hardware designed...
Aside from noise-related issues, I'd generally prefer to avoid using
an externally generated DAC clock because:
8) We're quite keen that our next generation hardware should support
deterministic board-board phase synchronisation. In other words:
suppose one programs the same RF frequency onto DACs on different FMC
boards. We would like the phase relationship between the RF outputs to
be unchanged by power-cycling the hardware (or just re-running the
synchronisation routine). The ways we've thought about achieving this
rely on the DAC clock being edge-aligned with the system reference
clock (which the RTIO clock is generated from). This is hard to
guarantee if the DAC clock is generated externally. Is this kind of
synchronisation something you guys have thought about much? Do you
have a plan for how you're going to do it?
Deterministic board-to-board phase synchronization I think is a must
for us as well. My notion had been something like the current ARTIQ
architecture, where you provide an edge-aligned DAC clock to each
board (e.g. 2.4 GHz), then have it divided down on each board to 100
MHz or 150 MHz to use as the RTIO clock. The boards would then have
to signal to each other over the backplane with these RTIO clocks,
likely including some loopback configuration to allow determination of
board-to-board propagation delays, to determine the relative phases
between their two RTIO clocks and adjust as necessary to the correct
value. One could do this by starting with a shared backplane
reference (e.g. 100 MHz) and up-converting to give the DAC clock with
a PLL, as you suggest. Now the trick is that there
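As a toy illustration of the loopback idea above: board A timestamps a pulse it sends to board B and the echo B returns, and takes the one-way backplane delay as half the round trip. The crucial assumption is a symmetric path; any TX/RX asymmetry goes straight into the phase error. All numbers here are hypothetical:

```python
# Loopback delay measurement sketch (hypothetical numbers). One-way
# delay = round trip / 2, valid only if the path is symmetric.

TRUE_ONE_WAY = 1.7e-9  # actual A->B propagation delay (made up)

def loopback_one_way(t_sent, t_echo):
    """Estimate the one-way delay from a round-trip measurement."""
    return (t_echo - t_sent) / 2

t_sent = 0.0
t_echo = t_sent + 2 * TRUE_ONE_WAY  # B loops the pulse straight back
est = loopback_one_way(t_sent, t_echo)
print(f"{est * 1e9:.2f} ns")  # recovers the true delay under symmetry
```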
Looking at the DAC data sheet again, we should be able to use "one-shot
then monitor sync mode" to help us edge align the local reference clock
to the DAC clock. If we do that then there isn't a problem with the
external clock.
----
9) Your point about phase drift is a good one; this is actually one of
the things that I think will be most crucial to get right in the new
hardware. However:
a) As you comment, it's not that scalable
My proposal would probably work with one “good” external clock source
per crate, in terms of scalability.
b) Related to (a), it doesn't seem future-proof either: you are
assuming that we won't ever want to use a second RF clock? But, what
if, e.g. at some point you want to upgrade to a DAC running at a
different (higher) frequency, but don't want to bin all your existing
hardware? If this happens, a second external oscillator is needed and
the solution gets messier. But, beyond that, one becomes sensitive to
the phase drifts between the two different external oscillators. In
the past, I've seen significant (>10deg over something like 20 mins,
after 24hrs of warmup) phase drifts between the 3GHz outputs of
high-end Agilent synths that are nominally phase-locked together at
10MHz. Presumably this comes from the fact that these sources are
designed with phase noise at frequencies above, say, 1 Hz in mind,
rather than long term drift. Now, I'd expect the Wenzel Oscillator to
be better on that front (simpler than a synth so less to drift). But
I'd still want to test it.
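For a sense of scale on the Agilent anecdote above, a 10 degree drift at 3 GHz corresponds to only a few picoseconds of timing drift:

```python
# Convert an observed phase drift (degrees at a carrier frequency)
# into an equivalent timing drift. Numbers from the anecdote above.

def phase_deg_to_delay(phase_deg, f_hz):
    return (phase_deg / 360.0) / f_hz

drift = phase_deg_to_delay(10.0, 3e9)
print(f"{drift * 1e12:.2f} ps")  # ~9.26 ps over ~20 minutes
```

So even ~10 ps of relative drift between two "locked" sources is enough to matter at these phase stability levels.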
If you want to use a second RF clock of some sort, and the boards are
externally clocked (via an input to the FMC), then you are just
changing the design of the FMC card if you need to put in two external
clocks instead of one, for example.
My point here was just that as soon as one needs a second RF clock
(which is likely to happen sooner or later for one reason or another),
one becomes sensitive to the phase stability between two independent
PLLs. At that point, one has to deal with the stability issue whether
the PLLs are on the AMCs or built into external oscillators. I'm keen to
have a go at solving this problem now while we're in the design stage
and can easily make changes, rather than waiting until after we've got
hardware built.
Your point about the two Agilent generators locked with a 10 MHz
signal seems to me to be a damning statement about your proposed
solution with one VCO per board, because you are proposing exactly
this: two independent oscillators locked together with a 10 MHz
reference. These things drift, even with $50k Agilent synthesizers,
so it seems to me that the odds of getting this not to have long term
drift board to board using only chip components are very low.
I'm not so sure about that... It's been my experience that if one wants
a source that can be programmed to go from 100kHz to 20GHz and -130dBm
to +20dBm with crazy resolution and accuracy then the Agilent synth is
the way to go. However, for a fixed-frequency, stable oscillator they're
actually not that great; e.g. it's not /that/ hard to outperform an
Agilent synth at a few GHz with a cheap PLL, phase-noise-wise. When it
comes to phase stability, bigger is generally worse. I suspect that a
small, if needs be ovenized, PLL is actually a pretty good solution.
But, maybe I'll take that back when our test boards arrive.
c) You're assuming that passive clock distribution is always more
stable than active. That's definitely true if one uses run of the mill
components. But, compare e.g. the temp co of an ADCLK950 (50fs/K) to
the channel-channel phase drifts of many MCL passive splitters. The
splitters I've looked at have been worse than this. I think that in
general, active or passive, this is a tough problem.
I’m curious to know about which models of splitter you are referring
to here. I would imagine that phase drifts in a passive splitter with
CW microwaves are just thermal effects, and that this could vary with
the way the splitter is implemented. The datasheet for the
minicircuits BP2U+, for example, shows a phase unbalance drift of a
bit less than 1 degree between -40 C and +85 C around 2.4 GHz. At
these frequencies, 1 degree is about 1.1 ps, so you are talking about
a drift of roughly 10 fs/K.
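The BP2U+ numbers quoted above check out; converting 1 degree of phase unbalance drift at 2.4 GHz over the -40 C to +85 C range:

```python
# Sanity check: phase unbalance drift (degrees over a temperature
# range, at a carrier frequency) -> per-kelvin delay tempco.

def phase_drift_tc(phase_deg, f_hz, delta_t_k):
    delay_s = (phase_deg / 360.0) / f_hz  # degrees -> seconds at f
    return delay_s / delta_t_k            # per-kelvin tempco

tc = phase_drift_tc(1.0, 2.4e9, 85 - (-40))
print(f"{tc * 1e15:.1f} fs/K")  # ~9.3 fs/K, i.e. "roughly 10 fs/K"
```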
I was looking at one of the multi-channel connectorized ones (I forget
exactly which one now), which was a fair bit worse than the part you
quote. I agree, that one would be fine.
_______________________________________________
ARTIQ mailing list
https://ssl.serverraum.org/lists/listinfo/artiq