Hi,

On 11/27/2017 07:37 PM, Attila Kinali wrote:
Moin Ralph,

On Sun, 26 Nov 2017 21:33:03 -0800
Ralph Devoe <rgde...@gmail.com> wrote:

The issue I intended to raise, but which I'm not sure I stated clearly
enough, is a conjecture: Is least-square fitting as efficient as any of the
other direct-digital or SDR techniques?

You stated that, yes, but it's well hidden in the paper.

Least-square fitting done right is very efficient.

A good comparison would illustrate that, but it is also expected. What does differ is how well adapted the different approaches are.

If the conjecture is true then the SDR
technique must be viewed as one of several equivalent algorithms for
estimating phase. Note that the time deviation for a single ADC channel in
the Sherman and Joerdens paper in Fig. 3c is about the same as my value.
This suggests that the conjecture is true.

Yes, you get similar values if you extrapolate from the TDEV
data in S&J Fig. 3c down to the 40 µs you used. BUT: while S&J see
a decrease of the TDEV consistent with white phase noise until they
hit the flicker phase noise floor at about a tau of 1 ms, your data
does not show such a decrease (or at least I didn't see it).
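
A quick sanity check is to run the raw phase record through a TDEV computation and look at the slope. Roughly like this (my sketch, assuming the third-party allantools package and a hypothetical file of phase samples taken every 40 µs):

# Rough sketch (mine, not from the paper): check whether a phase record shows
# the white-PM signature, i.e. TDEV falling roughly as tau**-0.5, before it
# flattens out on a flicker floor. Assumes the allantools package and a
# hypothetical file "phase.txt" of phase samples in seconds.
import numpy as np
import allantools

fs = 1.0 / 40e-6                                   # 25 kHz phase sample rate
phase = np.loadtxt("phase.txt")                    # phase samples [s]

taus, tdev, tdev_err, n = allantools.tdev(phase, rate=fs,
                                          data_type="phase", taus="octave")
for t, d in zip(taus, tdev):
    print(f"tau = {t:.2e} s   TDEV = {d:.2e} s")
# White phase noise: TDEV ~ tau**-0.5; a flattening marks the flicker floor.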

There are a number of ways to do the phase estimation, and even a number of ways that least-squares processing can be applied.

The trouble with least-squares estimators is that you do not maintain the improvement for longer taus, and the PDEV estimator in the paper does not either. That motivated me to develop a decimator method for phase, frequency and PDEV that extends the improvement into post-processing, which I presented last year.

Other criticisms seem off the mark:

Several people raised the question of the filter factor of the least-square
fit.  First, if there is a filtering bias due to the fit, it would be the
same for signal and reference channels and should cancel. Second, even if
there is a bias, it would have to fluctuate from second to second to cause
a frequency error.

Bob answered that already, and I am pretty sure that Magnus will comment
on it as well. Both are better suited than me to go into the details of this.

Yes, see my comment.

A least-squares estimator for phase and frequency applies a linear-ramp weighting to the phase samples, or a parabolic-curve weighting to frequency samples. These weightings act as filters, and the bandwidth of the filter depends on the sample count and the time between samples. As the sample count increases, the bandwidth goes way down.
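
To make that concrete, here is a rough sketch of mine (arbitrary numbers, not code from any paper): the least-squares frequency estimate from a phase record is exactly a linear-ramp weighted sum, and its response to white phase noise shrinks quickly as the record grows, which is the narrowing bandwidth at work:

# Rough illustration (my sketch): the LSQ frequency estimate is a
# linear-ramp weighted sum of the phase samples; its scatter under white
# phase noise falls fast as the sample count grows.
import numpy as np

rng = np.random.default_rng(1)
tau0 = 40e-6                              # sample spacing, only sets the scale

def lsq_freq(x, tau0):
    """Slope of a straight-line LSQ fit to phase samples x = frequency estimate."""
    t = np.arange(len(x)) * tau0
    t -= t.mean()
    w = t / np.sum(t**2)                  # the linear-ramp weighting
    return np.sum(w * x)

for n in (16, 64, 256, 1024):
    # white phase noise only, true frequency offset is zero
    est = [lsq_freq(rng.normal(0.0, 1e-9, n), tau0) for _ in range(2000)]
    print(n, np.std(est))                 # falls roughly as n**-1.5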

Third, the Monte Carlo results show no bias. The output
of the Monte Carlo system is the difference between the fit result and the
known MC input. Any fitting bias would show up in the difference, but there
is none.

Sorry, but this is simply not the case. If I understood your simulations
correctly (you give very little information about them), you used additive
Gaussian i.i.d noise on top of the signal. Of course, if you add Gaussian
i.i.d noise with zero mean, you will get zero bias in a linear least squares
fit. But, as Magnus and I have tried to tell you, noises we see in this area
are not necessarily Gauss i.i.d. Only white phase noise is Gauss i.i.d.
Most of the techniques we use in statistics implicitly assume Gauss i.i.d.
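
To underline that point with a quick sketch of my own (made-up parameters): with zero-mean Gaussian i.i.d. noise on a sinusoid of known frequency, the linear least-squares fit is unbiased by construction, so a Monte Carlo built that way simply cannot show the problem:

# Quick sketch (mine, made-up parameters): with zero-mean Gaussian i.i.d.
# noise added to a sinusoid of known frequency, the linear least-squares fit
# of the sin/cos amplitudes gives a phase error that averages to zero.
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1e6, 10e3, 4096                        # assumed rates and length
t = np.arange(n) / fs
A = np.column_stack([np.sin(2*np.pi*f0*t), np.cos(2*np.pi*f0*t)])

true_phase = 0.3
errors = []
for _ in range(2000):
    y = np.sin(2*np.pi*f0*t + true_phase) + rng.normal(0, 0.1, n)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    errors.append(np.arctan2(b, a) - true_phase)

print("mean phase error:", np.mean(errors))        # consistent with zero
print("std of phase error:", np.std(errors))
# Repeating this with non-i.i.d. noise (flicker, random walk) is where the
# textbook assumptions stop holding.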

Go back to the IEEE special issue on time and frequency from February 1966 and you will find a nice set of articles. Among them is David Allan's article on the 2-sample variance, which later became Allan's variance and is now the Allan variance. Another is the short but classic write-up by another youngster, David Leeson, which summarizes a model of phase-noise generation that we today refer to as the Leeson model. To appreciate the Leeson model more deeply, check out the phase-noise book of Enrico Rubiola, which gives you some insight. If you want to make designs there is more to it, and several other papers need to be read, but here you just need to understand that you get 3 or 4 types of noise out of an oscillator, and the trouble with them is that the noise does not converge the way your normal statistics textbook would make you assume.

The RMS estimator of your frequency estimate does not converge; in fact it goes astray and varies with the number of samples. This was already a known problem, but the solution came with Dave Allan's paper. It includes a function we would later refer to as a bias function, which depends on the number of samples taken. This motivates the conversion from an M-sample variance to a 2-sample variance and from an N-sample variance to a 2-sample variance, so that they can be compared. The bias function varies with the number of samples and the dominant noise form.

The noise forms are strange and their effect on statistics is strange.
You need to understand how they interact with your measurement tool, and understand that well; in the end you need to test all noise forms.
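
A small numerical illustration of that convergence problem (my sketch, arbitrary numbers): for a non-white noise such as random-walk FM, the classical standard deviation of a frequency record keeps growing with the number of samples, while the 2-sample (Allan) deviation converges:

# Numerical sketch (mine): classical standard deviation vs. 2-sample (Allan)
# deviation for a random-walk FM frequency record.
import numpy as np

rng = np.random.default_rng(2)

def adev_tau0(y):
    """Non-overlapping 2-sample (Allan) deviation at tau = tau0."""
    return np.sqrt(0.5 * np.mean(np.diff(y)**2))

for n in (100, 1_000, 10_000, 100_000):
    y = np.cumsum(rng.normal(0, 1e-12, n))   # random-walk FM frequency record
    print(n, np.std(y), adev_tau0(y))
# std(y) keeps growing with n; the Allan deviation stays essentially constant.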

Attila says that I exaggerate the difficulty of programming an FPGA. Not
so. At work we give experts 1-6 months for a new FPGA design. We recently
ported some code from a Spartan 3 to a Spartan 6. Months of debugging
followed.

This argument means that either your design was very complex, or it
used features of the Spartan3 that are not present in Spartan6 anymore.
The argument does not say anything about the difficulty of writing
a down-mixer and sub-sampling code (which takes less than a month,
including all validation, if you have no prior experience in signal
processing). Yes, it's still more complicated than calling a Python
function. But if you had to write that Python function yourself
(to make the comparison fair), then it would take you considerably
longer to make sure the fitting function worked correctly.
Using Python for the curve fitting is like getting the VHDL code
for the whole signal-processing part from someone. That's easy to handle.
Done in an afternoon. At most.

And just to further underline my point here: I have both written
VHDL code for FPGAs to down mix and subsample and done sine fitting
using the very same Python function you have used. I know the complexity
of both, as I know their pitfalls.
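
Just to put the down-mix and sub-sample step in perspective, the numerical version of it is only a handful of lines. A rough sketch (mine, not Attila's VHDL, with made-up parameters):

# Rough numerical sketch of what a down-mixer plus sub-sampler does:
# multiply by a local oscillator, low-pass, decimate, take the phase.
import numpy as np

fs, f_lo = 1e6, 10e3                          # ADC rate and LO frequency
t = np.arange(200_000) / fs
x = np.sin(2*np.pi*(f_lo + 0.5)*t + 0.2)      # input tone, 0.5 Hz off the LO

lo = np.exp(-2j*np.pi*f_lo*t)                 # complex local oscillator
lp = np.ones(200) / 200                       # crude moving-average low-pass
baseband = np.convolve(x * lo, lp, mode="same")[::100]   # filter + sub-sample
phase = np.unwrap(np.angle(baseband))         # phase vs. time at reduced rate
print(phase[:5])                              # ramps at 2*pi*0.5 rad/s plus an offset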

Another data point: one of my designs was converted to Virtex 7, and we lost a week not because the design was broken, but because the library dependencies were a bit old, so synthesis got things wrong. Once we realized that, it was trivial to fix and it works. VHDL for FPGAs has good uses, and some of them will never really be rivaled by software. I do both, as they have different benefits. There are good and bad ways of designing things for an FPGA or in software, and it takes skill to design things in a portable way. It also takes skill and time to port a design so it uses most of a new chip or CPU.

FPGAs will always be faster and more computationally efficient
than Python, but Python is fast enough. The motivation for this experiment
was to use a high-level language (Python) and preexisting firmware and
software (Digilent) so that the device could be set up and reconfigured
easily, leaving more time to think about the important issues.

Sure. This is fine. But please do not bash other techniques, just because
you are not good at handling them. Especially if you hide the complexity
of your approach completely in a side remark. (Yes, that ticked me off.)

Indeed. This is not a good way to convince anyone. If you don't work well with certain tools, either don't use them or learn how to use them. I tend to do the latter, so I learn.

The trick is that you want to use the benefit of both to achieve that extreme performance, and when you do it right, you use their strengths together without the actual design being very complex. That's the beauty of good designs.

Attila has about a dozen criticisms of the theory section, mostly that it
is not rigorous enough and there are many assumptions. But it is not
intended to be rigorous.

If it is not intended as such, then you should make it clear in
the paper. Or put it in an appendix. Currently the theory is almost 4 of
the 8 pages of your paper. So it looks like an important part.
And you still should make sure it is correct. Which currently it isn't.

This is primarily an experimental paper and the
purpose of the theory is to give a simple physical picture of the
surprizingly good results. It does that, and the experimental results
support the conjecture above.

Then you should have gone with a simple SNR based formula like S&J
or referenced one of the many papers out there that do this kind of
calculation and just repeated the formula with a comment that the
derivation is in paper X.
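
For reference, the generic white-noise (Cramér-Rao style) form of such an SNR estimate takes only a few lines; this is my sketch and not necessarily the exact expression S&J use:

# Hedged sketch: approximate timing jitter of a fitted sinusoid at frequency
# f0, estimated from N samples at power SNR 'snr', in the white-noise limit.
import numpy as np

def timing_sigma(f0, n_samples, snr):
    sigma_phi = 1.0 / np.sqrt(n_samples * snr)   # rad, white-noise phase limit
    return sigma_phi / (2*np.pi*f0)              # convert phase to time [s]

# Example with made-up numbers:
print(timing_sigma(f0=10e6, n_samples=1000, snr=1e6))   # ~ 5e-13 s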

Break it up and focus on what is important in separate papers.

The limitations of the theory are discussed in detail on p. 6 where it is
called "... a convenient approximation.." Despite this the theory agrees
with the Monte Carlo over most of parameter space, and where it does not is
discussed in the text.

Please! This is bad science! You build a theory on flawed foundations,
use this theory as a foundation in your simulations. And when the
simulations agree with your theory you claim the theory is correct?
Please, do not do this!

Yes, it is ok to approximate. Yes it is ok to make assumptions.
But please be aware what the limits of those approximations and
assumptions are. I have tried to point out the flaws in your
argumentation and how they affect the validity of your paper.
If you just want to do an experimental paper, then the right
thing to do would be to cut out all the theory and concentrate
on the experiments.

Agree fully.

Please remember that while Attila, Bob and I may be critical, we do try to make you aware of relevant aspects you need to consider.

I have tried to hint that it would be useful to see how different estimator methods perform. The type of dominant noise at a certain tau is relevant, and that is how we have been forced to analyze things for the last 50 years, because of the problems we have tried to indicate.

I hope it comes across that I do not criticize your experiments
or the results you got out of them. I criticize the analysis you
have done and that it contains assumptions, which you are not aware
of, that invalidate some of your results. The experiments are fine.
The precision you get is fine. But your analysis is flawed.

We are doing the friendly pre-review. A real reviewer could easily just say "Not worthy to publish", as I have seen.

There is nothing wrong with attempting new approaches, or even just testing an idea and seeing how it pans out. You should then compare it to a number of other approaches, and as you test things, you should analyze the same data with different methods. Prototyping that in Python is fine, but in order to analyze it, you need to be careful about the details.

I would consider one paper just doing the measurements and then trying different post-processings to see how those vary. Another paper could then take up on that and attempt an analysis that matches the numbers from the actual measurements.

So, we might provide tough love, but there is a bit of experience behind it, so it should be listened to carefully.

Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
