Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-15 Thread Magnus Danielson

Hi Fred,

On 05/15/2011 10:55 PM, Tijd Dingen wrote:

Hi Magnus,

Magnus Danielson wrote:

 Well, you always have the corner-case where numerical precision and near
 same frequency beating come into play, so what will help and what will
 reduce your precision becomes a little fuzzy to say in general terms.
 That's why I am careful to say that "they are roughly the same".


Okay, then I understand what you mean. Explaining the fun numerical
intricacies would be a whole other thread. And quite possibly a whole
other forum. ;-)


We are time-nuts; we dwell on the details. Oh, the gore and blood!
I just thought it was not the most important thing for you right now;
keep your eyes on the road for the project.



"They are roughly the same" is something I can work with.


Great.


 If you run exact Nth edge you could do some algorithmic steps that
 avoid some rounding errors. Still, N can be allowed to be fairly large
 (say 1 million). Another algorithmic benefit is that you could put your
 pre-processing upfront in the FPGA.


Understood. Amusingly enough by the exact same token, some algorithms
can run themselves into singularity trouble precisely because of the data
being too regular.

But rest assured I'll try several ways to do that linear regression. Right
now was just the sanity check if I am not overlooking something stupid.


Recall that many counters do not use linear regression. It's just one 
of several algorithms. Maybe you should stock up on a few different 
algorithms and figure out which works best... and possibly when. You 
know... to learn :)



 They will not be greatly different as far as I can see. Do recall that
 linear regression may need a drift component to it. I regularly see
 bending curves.


Check. I also plan to include a scatterplot with that line fit so you're
able to get a feeling for the data the frequency estimate is based on.


Got to love the residue plots! A residue max/rms/min value can be 
useful. Relative values are also handy. I really miss the drift number on 
my display. While waiting for oscillators to heat up or loops to lock, I 
care more about seeing the rate of change than the actual number. Flipping 
between actual and relative presentation is a presentation issue and not 
a counter processing mode.



 You can never be quite sure you see every Nth edge. You can see every
 Nth edge that your digital side detected. You will need to ensure that
 trigger level and signal quality is good to avoid cycle slipping on the
 trigger side. It requires care on the analogue side and adjustments of
 trigger levels to the signal at hand. I've seen lost pulses and double
 or additional triggers too many times.


In which case I think I now know what you meant by cycle-slip in the other
post. The analog front-end is for now largely on the TODO list.
The digital processing part is a larger bottleneck than the analog front-end,
so I am tackling that part first. If I cannot get the counter core to work,
there is no point in having a fancy analog front-end...


As you should have read by now, I had something different in mind. What 
I mean here is really the analogue side of things.



 You would indeed be able to avoid a hardware pre-scaler, but you would
 need a darn good analogue front-end to make sure the input side has the
 slew-rate needed. Lacking slew-rate can be problematic and can cause you to
 lose cycles or get multiple triggers.


Indeed, which brings all sorts of fun challenges of their own. Which is
why for now I do not use the serdes and keep the input frequency
relatively low.


Indeed. You can do some "digital filtering" to home in on your signal. 
Essentially creating a requirement for the signal to be in some window 
of counts... which can be used to filter out some of the trigger noise.
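That count-window idea can be sketched like this. This is a hypothetical Python helper, not something from the thread: it assumes you already have candidate edge timestamps from the digital side and a rough nominal period, and it only handles the double-trigger side (a glitch arriving far too early after an accepted edge); detecting missed edges would need the complementary check.

```python
def filter_triggers(timestamps, nominal_period, tolerance=0.25):
    """Keep only edges whose spacing from the last accepted edge is at
    least (1 - tolerance) of the nominal period; earlier arrivals are
    treated as double triggers and dropped. Hypothetical helper."""
    if not timestamps:
        return []
    accepted = [timestamps[0]]
    for t in timestamps[1:]:
        dt = t - accepted[-1]
        if dt < nominal_period * (1 - tolerance):
            continue  # arrived too early: likely a double trigger
        accepted.append(t)
    return accepted
```

With a nominal period of 1.0, a spurious edge at 1.05 right after the accepted edge at 1.0 would be rejected while the regular edges pass through.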



 Also, you will get a high data-rate out of the SERDES which an FW
 pre-scaler needs to sort out, but in parallel form rather than serial form.

Yeah, but that is totally easy. I've already done a module that does that.
You only need 2 stages each of 1 logic level deep with a bunch of LUT6's.

However to keep things simpler on the coarse counter front, I currently
don't use that.


It is pretty easy yes.


 The SERDES provides a wonderful digital front-end for high-speed
 signals, but the fixed sampling rate provides little interpolation
 power; a 10 Gb/s SERDES can only sample every 100 ps for you.


Yep, which is why IMO it is better not to use the serdes as interpolator.
You can use it for your coarse event counter. The main drawback to that is
that your entire event counter is by definition sampled. This as opposed to
a free-running counter that counts on the events, and is then sampled.

Another way of saying that is: with the serdes as a sampler, the signal from
the DUT is nothing but data. There are no flip-flops that toggle on the clock
of the DUT.


Exactly. From this parallel stream you then process out the number of 
rising/falling edges (event-counter increment for that cycle) and the 
time-i
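The edge-count extraction from one parallel SERDES word can be sketched as follows. This is a hypothetical Python model of the logic, not the actual FPGA code discussed in the thread; it carries over the last bit of the previous word so that an edge straddling a word boundary is not lost.

```python
def rising_edges(word_bits, prev_bit):
    """Count 0->1 transitions in one parallel SERDES word.
    word_bits: list of 0/1 samples, oldest first.
    prev_bit:  last sample of the previous word.
    Returns (edge count for this word, new carry bit)."""
    count = 0
    last = prev_bit
    for b in word_bits:
        if last == 0 and b == 1:
            count += 1
        last = b
    return count, last
```

In hardware this would be a small combinational network feeding the event-counter increment each word clock; the Python loop just models the same bit-pair comparisons.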

Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-15 Thread Tijd Dingen
Hi Magnus,

Magnus Danielson wrote:
>>> There are many things you can get away with, just how much trouble you
>>> want to verify it versus doing the proper thing is another issue.

>> Define "proper thing". ;-) From what I understand taking the exact Nth edge,
>> and then doing linear regression is equivalent to taking roughly every Nth
>> edge and then doing linear regression.

> Well, you always have the corner-case where numerical precision and near 
> same frequency beating come into play, so what will help and what will 
> reduce your precision becomes a little fuzzy to say in general terms. 
> That's why I am careful to say that "they are roughly the same".

Okay, then I understand what you mean. Explaining the fun numerical
intricacies would be a whole other thread. And quite possibly a whole
other forum. ;-)

"They are roughly the same" is something I can work with.

> If you run exact Nth edge you could do some algorithmic steps that 
> avoid some rounding errors. Still, N can be allowed to be fairly large 
> (say 1 million). Another algorithmic benefit is that you could put your 
> pre-processing upfront in the FPGA.

Understood. Amusingly enough by the exact same token, some algorithms
can run themselves into singularity trouble precisely because of the data
being too regular.

But rest assured I'll try several ways to do that linear regression. Right
now was just the sanity check if I am not overlooking something stupid.


>> Equivalent in the sense that the frequency estimates of the two will be
>> the same, to within the usual numerical uncertainties. Or to put that 
>> another way:
>> The first method of doing things is not inherently better or worse than the 
>> second
>> method. After all, that is the whole thing I am trying to be sure of right 
>> now.

> They will not be greatly different as far as I can see. Do recall that 
> linear regression may need a drift component to it. I regularly see 
> bending curves.

Check. I also plan to include a scatterplot with that line fit so you're
able to get a feeling for the data the frequency estimate is based on.


>> Of course I can make sure that I take exactly every Nth edge. It is just 
>> that there
>> are some considerable implementation advantages if that constraint does not 
>> have
>> to be so strict.

> You can never be quite sure you see every Nth edge. You can see every 
> Nth edge that your digital side detected. You will need to ensure that 
> trigger level and signal quality is good to avoid cycle slipping on the 
> trigger side. It requires care on the analogue side and adjustments of 
> trigger levels to the signal at hand. I've seen lost pulses and double 
> or additional triggers too many times.

In which case I think I now know what you meant by cycle-slip in the other
post. The analog front-end is for now largely on the TODO list.
The digital processing part is a larger bottleneck than the analog front-end,
so I am tackling that part first. If I cannot get the counter core to work,
there is no point in having a fancy analog front-end...


>> One advantage being that if this constraint can be fairly loose, then using 
>> the
>> ISERDES2 in the spartan-6 as part of the coarse counter is fairly simple. I 
>> did
>> a couple of test with that, and all looks good. The main advantage there 
>> being
>> that if I use the serdes, this translates into a higher input frequency 
>> without the
>> need for a prescaler. Which translates into better precision.

> You would indeed be able to avoid a hardware pre-scaler, but you would 
> need a darn good analogue front-end to make sure the input side has the 
> slew-rate needed. Lacking slew-rate can be problematic and can cause you to 
> lose cycles or get multiple triggers.

Indeed, which brings all sorts of fun challenges of their own. Which is
why for now I do not use the serdes and keep the input frequency
relatively low.


> Also, you will get a high data-rate out of the SERDES which an FW 
> pre-scaler needs to sort out, but in parallel form rather than serial form.

Yeah, but that is totally easy. I've already done a module that does that.
You only need 2 stages each of 1 logic level deep with a bunch of LUT6's.

However to keep things simpler on the coarse counter front, I currently
don't use that.


> The SERDES provides a wonderful digital front-end for high-speed 
> signals, but the fixed sampling rate provides little interpolation 
> power; a 10 Gb/s SERDES can only sample every 100 ps for you.

Yep, which is why IMO it is better not to use the serdes as interpolator. You
can use it for your coarse event counter. The main drawback to that is that
your entire event counter is by definition sampled. This as opposed to a
free-running counter that counts on the events, and is then sampled.

Another way of saying that is: with the serdes as a sampler, the signal from
the DUT is nothing but data. There are no flip-flops that toggle on the clock
of the DUT.

The approach I take now is a free-running bubb

Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-15 Thread Magnus Danielson

Hi Fred,

On 05/14/2011 03:42 PM, Tijd Dingen wrote:


Magnus Danielson wrote:

There are many things you can get away with, just how much trouble you
want to verify it versus doing the proper thing is another issue.


Define "proper thing". ;-) From what I understand taking the exact Nth edge,
and then doing linear regression is equivalent to taking roughly every Nth
edge and then doing linear regression.


Well, you always have the corner-case where numerical precision and near 
same frequency beating come into play, so what will help and what will 
reduce your precision becomes a little fuzzy to say in general terms. 
That's why I am careful to say that "they are roughly the same".


If you run exact Nth edge you could do some algorithmic steps that 
avoid some rounding errors. Still, N can be allowed to be fairly large 
(say 1 million). Another algorithmic benefit is that you could put your 
pre-processing upfront in the FPGA.



Equivalent in the sense that the frequency estimates of the two will be
the same, to within the usual numerical uncertainties. Or to put that another
way: the first method of doing things is not inherently better or worse than
the second method. After all, that is the whole thing I am trying to be sure
of right now.


They will not be greatly different as far as I can see. Do recall that 
linear regression may need a drift component to it. I regularly see 
bending curves.



Of course I can make sure that I take exactly every Nth edge. It is just
that there are some considerable implementation advantages if that constraint
does not have to be so strict.


You can never be quite sure you see every Nth edge. You can see every 
Nth edge that your digital side detected. You will need to ensure that 
trigger level and signal quality is good to avoid cycle slipping on the 
trigger side. It requires care on the analogue side and adjustments of 
trigger levels to the signal at hand. I've seen lost pulses and double 
or additional triggers too many times.



One advantage being that if this constraint can be fairly loose, then using
the ISERDES2 in the spartan-6 as part of the coarse counter is fairly simple.
I did a couple of tests with that, and all looks good. The main advantage
there being that if I use the serdes, this translates into a higher input
frequency without the need for a prescaler. Which translates into better
precision.


You would indeed be able to avoid a hardware pre-scaler, but you would 
need a darn good analogue front-end to make sure the input side has the 
slew-rate needed. Lacking slew-rate can be problematic and can cause you to 
lose cycles or get multiple triggers.


Also, you will get a high data-rate out of the SERDES which an FW 
pre-scaler needs to sort out, but in parallel form rather than serial form.


The SERDES provides a wonderful digital front-end for high-speed 
signals, but the fixed sampling rate provides little interpolation 
power; a 10 Gb/s SERDES can only sample every 100 ps for you.



Hence my current (over)focus to make absolutely sure that all the results
are also valid if one takes almost the Nth edge, but not quite right all the
time... However, you still know which edge is which. You just don't know it
early enough in the pipeline to use as basis for a triggering decision.


You will have to work with multiple possible trigger-locations, but it 
is possible to post-process them out.



I have not looked at a detailed performance comparison between these
algorithms lately. However, they should not be used naively together
with AVAR and friends since they attempt to do the same thing, so the
resulting filtering will become wrong and biased results will be produced.


Well, for the AVAR calculation I only use the raw time-stamps. So nothing
preprocessed. Then I should not have to worry about this sort of bias, right?


Exactly, if you use raw time-stamps and have decent quality on tau0 
measures, you have avoided a lot of problems.


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-14 Thread Tijd Dingen

Magnus Danielson wrote:
> There are many things you can get away with, just how much trouble you 
> want to verify it versus doing the proper thing is another issue.

Define "proper thing". ;-) From what I understand taking the exact Nth edge,
and then doing linear regression is equivalent to taking roughly every Nth
edge and then doing linear regression.

Equivalent in the sense that the frequency estimates of the two will be
the same, to within the usual numerical uncertainties. Or to put that another
way: the first method of doing things is not inherently better or worse than
the second method. After all, that is the whole thing I am trying to be sure
of right now.

Of course I can make sure that I take exactly every Nth edge. It is just
that there are some considerable implementation advantages if that constraint
does not have to be so strict.

One advantage being that if this constraint can be fairly loose, then using
the ISERDES2 in the spartan-6 as part of the coarse counter is fairly simple.
I did a couple of tests with that, and all looks good. The main advantage
there being that if I use the serdes, this translates into a higher input
frequency without the need for a prescaler. Which translates into better
precision.

Hence my current (over)focus to make absolutely sure that all the results
are also valid if one takes almost the Nth edge, but not quite right all the
time... However, you still know which edge is which. You just don't know it
early enough in the pipeline to use as basis for a triggering decision.


> I have not looked at a detailed performance comparison between these 
> algorithms lately. However, they should not be used naively together 
> with AVAR and friends since they attempt to do the same thing, so the 
> resulting filtering will become wrong and biased results will be produced.

Well, for the AVAR calculation I only use the raw time-stamps. So nothing
preprocessed. Then I should not have to worry about this sort of bias, right?

regards,
Fred



Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-14 Thread Magnus Danielson

Hi Fred,

On 05/14/2011 12:12 PM, Tijd Dingen wrote:

Thanks for the sanity check! :)

I was indeed hoping to be able to get away with "not every Nth edge" since that 
simplifies things.


There are many things you can get away with, just how much trouble you 
want to verify it versus doing the proper thing is another issue.



On the subject of extracting frequency out of the dataset, for now I use 
ordinary least squares. What other approaches do you know of that are used for 
this particular application?


Linear regression is being used, as well as block-averaged time-stamps.

The latter method, using say 200 time-stamps, divides them into a first 
and a second half; you average the time-stamps in each half, subtract the 
first-half average from the second-half average, and divide by the time 
between the first samples (or equivalent time).


Just using a time sequence of 200 time-stamps you get 199 direct 
in-sequence pairs forming 199 frequency estimates. Sounds cool, now we can 
average those 199 frequency estimates... well... the sad thing is that 
you essentially cancel the 198 inner measures and end up using only the 
first and last samples. The sqrt(N) averaging of the above block 
averager is lost in this simple variant, and still they have about the 
same computing complexity. So, frequency estimation algorithms can look 
good until you find out that their degrees of freedom may vary greatly. 
These two algorithms differ by about N/2 in degrees of freedom.
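The comparison above can be sketched numerically. This is a hypothetical Python model, not from the thread; it assumes one time-stamp per input cycle, so frequency is cycles divided by elapsed time. The point is that both estimators have the same computational cost, yet the naive pairwise average telescopes down to just the first and last stamps.

```python
def block_average_freq(ts):
    """One reading of the halves method: split the stamps into two
    halves, average each half, and note the half-means lie n cycles
    apart (assumes one time-stamp per cycle)."""
    n = len(ts) // 2
    first = sum(ts[:n]) / n
    second = sum(ts[n:2 * n]) / n
    return n / (second - first)          # cycles / elapsed time

def naive_pairwise_freq(ts):
    """Average of adjacent-pair period estimates; the inner terms
    telescope, so effectively only the first and last stamps matter."""
    periods = [b - a for a, b in zip(ts, ts[1:])]
    return len(periods) / sum(periods)   # == (len-1)/(ts[-1]-ts[0])
```

On noiseless equidistant data both return the same frequency; the difference in degrees of freedom only shows once you add time-stamp noise, where the block averager gains its sqrt(N) advantage.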


I wanted to illustrate how good or bad algorithms can be on the same 
amount of data.


I have not looked at a detailed performance comparison between these 
algorithms lately. However, they should not be used naively together 
with AVAR and friends since they attempt to do the same thing, so the 
resulting filtering will become wrong and biased results will be produced.


Cheers,
Magnus




Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-14 Thread Tijd Dingen
Thanks for the sanity check! :)

I was indeed hoping to be able to get away with "not every Nth edge" since that 
simplifies things.

On the subject of extracting frequency out of the dataset, for now I use 
ordinary least squares. What other approaches do you know of that are used for 
this particular application?

regards,
Fred



- Original Message -
From: Magnus Danielson 
To: time-nuts@febo.com
Cc: 
Sent: Saturday, May 14, 2011 9:28 AM
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

On 05/13/2011 04:56 PM, Tijd Dingen wrote:
> To calculate the frequency from these time stamps you have to do some slope 
> fitting. If you use a least squares matrix approach for that I could see how 
> the more random distribution could help prevent singularities.
> 
> The only reason I can see now to really try harder to always get the exact 
> Nth edge is for numerical solving. As in, should you choose a solver that 
> only operates optimally for equidistant samples.
> 
> Any thoughts?

You don't have to get exactly every Nth edge. But you need to count the edges. 
A continuous time-stamping counter will count time and edges and the time-stamp 
will contain both (except in some special conditions where it isn't needed).

There are a number of different approaches on how frequency is extracted out of 
the dataset, however very few of them assume perfect event count distance.

Cheers,
Magnus



Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-14 Thread Magnus Danielson

On 05/13/2011 04:56 PM, Tijd Dingen wrote:

To calculate the frequency from these time stamps you have to do some slope 
fitting. If you use a least squares matrix approach for that I could see how 
the more random distribution could help prevent singularities.

The only reason I can see now to really try harder to always get the exact Nth 
edge is for numerical solving. As in, should you choose a solver that only 
operates optimally for equidistant samples.

Any thoughts?


You don't have to get exactly every Nth edge. But you need to count the 
edges. A continuous time-stamping counter will count time and edges and 
the time-stamp will contain both (except in some special conditions 
where it isn't needed).


There are a number of different approaches on how frequency is extracted 
out of the dataset, however very few of them assume perfect event count 
distance.
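One such approach, ordinary least squares on (event count, time-stamp) pairs, can be sketched like this. This is a hypothetical Python model, not code from the thread; the key property is that the event counts need not be equally spaced, which is exactly why imperfect Nth-edge spacing is tolerable as long as the edges are counted.

```python
def ls_frequency(events, stamps):
    """Ordinary least-squares fit of time-stamp against event count.
    The fitted slope is the period per event; its reciprocal is the
    frequency. Event counts need not be equally spaced."""
    m = len(events)
    me = sum(events) / m
    mt = sum(stamps) / m
    num = sum((e - me) * (t - mt) for e, t in zip(events, stamps))
    den = sum((e - me) ** 2 for e in events)
    return den / num   # 1 / slope = events per unit time
```

Feeding it irregularly spaced event numbers, e.g. every "almost Nth" edge, gives the same estimator as the exact-Nth case; only the x-values in the fit change.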


Cheers,
Magnus



Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-13 Thread Tijd Dingen
Mmmh, that would fall more into bookkeeping and less into computation. Not 
trying to be pedantic here, just looking for the optimum solution... What I 
mean is this:


You are indeed right that for the "every Nth cycle" approach I do not have to 
keep track of N. Which saves some data over the wire, less memory consumption 
before computation starts. Check.

However when I do an ordinary least squares fit, I will still have to generate 
both coordinates (timestamp as well as cycle number) and stuff them into the 
matrix.

So even if I know that the dataseries is this:


t=t0,t1,t2,t3,t4 (have to keep track of the data stamps)

n=0,1,2,3,4 (don't have to send this over the wire, since I know by design that 
n0=0, n1=1, n2=2 etc...)

I still have to use these exact same numerical values when filling in the 
matrix, and then solve. Or is there a least squares approach (or other line-fit 
method for that matter) that uses the row index number implicitly as data? If 
there is an algorithm for that, could you point me to it? Because if it is 
sufficiently faster than regular boring least squares then it might be worth it 
to change the processing pipeline somewhat to be able to spit out a timestamp 
for exactly every Nth cycle.
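For what it's worth, with an implicit row index the x-side sums of ordinary least squares collapse to closed forms, so only the time-stamps need storing. This is a hypothetical Python sketch of that bookkeeping saving, not a different algorithm; the fit itself is the same least squares.

```python
def ols_slope_implicit(ts):
    """OLS slope of time-stamps against the implicit index 0..N-1.
    With equidistant event numbers the x-sums are closed-form, so no
    index array needs to be stored or sent over the wire."""
    N = len(ts)
    sx = N * (N - 1) / 2                  # sum of 0..N-1
    sxx = (N - 1) * N * (2 * N - 1) / 6   # sum of squares of 0..N-1
    st = sum(ts)
    sxt = sum(i * t for i, t in enumerate(ts))
    return (N * sxt - sx * st) / (N * sxx - sx * sx)
```

The saving is memory and bandwidth, not arithmetic: the per-sample multiply-accumulate for sxt remains, so the computational cost is essentially the same as the two-coordinate fit.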


Thanks!
Fred





- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' ; time-nuts@febo.com
Cc: 
Sent: Friday, May 13, 2011 6:26 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

Computationally with an every Nth measure, you only have to keep track of
the time stamps. With both the cycle count and time stamps varying you have
two sets of data to keep up with. 

Be very careful talking about construction projects on the list, you will
quickly be declared "excessive bandwidth" in all caps.

Bob

-Original Message-
From: Tijd Dingen [mailto:tijddin...@yahoo.com] 
Sent: Friday, May 13, 2011 12:22 PM
To: Bob Camp; time-nuts@febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Now it is my turn for an "it depends". ;)


If by that you mean that it makes the bookkeeping for me the human easier,
then yes. It certainly is easier for me conceptually when I know it is every
Nth, and not just almost-but-not-quite-every-Nth.

But as far as the math is concerned, I do not see a bit of difference. Easy
math as in computationally cheap? For an ordinary least squares approach, as
far as I can tell the two varieties have the same computational cost. Either
that, or I am missing something.

"In an FPGA keep in mind that your PLL may be a significant source of
noise."

Heh, that can indeed be a significant source of noise. Which with the right
approach is not as big a problem as some around here may think. As in, I
suspect that is one of the reasons why those "fpga based DIY counter"
projects die so horribly around here. The advantage of an fpga is that you
can process a fair amount of data, so you can do some averaging to
compensate for some of the shortcomings. Certainly for a repeated process
(as with frequency measurement), but also for single shot applications (with
only 1 START/STOP).

Case in point for example the paper mentioned in this post:

http://www.febo.com/pipermail/time-nuts/2011-March/055240.html

I am doing something similar, and am getting similar results. Of course
still plenty of work to be done, but that is par for the course in hobby
country...

regards,
Fred




- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 6:02 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

The exactly every Nth cycle thing makes the math easy. Easy math means I can
do lots of samples. Lots of samples means better regression. Just how much
better depends on the type of noise.  

As long as you get the math right for your sample spacing, the result will
be ok.

In an FPGA keep in mind that your PLL may be a significant source of noise.

Enjoy!

Bob



-Original Message-
From: Tijd Dingen [mailto:tijddin...@yahoo.com] 
Sent: Friday, May 13, 2011 11:40 AM
To: Bob Camp; time-nuts-boun...@febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Hi Bob,

Well, for the moment it is as simple as calculating the frequency of a
reasonably stable frequency. Meaning it is not modulated, but it could very
well be just a cheap XO that is being measured. That sort of "reasonably
stable".

Not trying to recover modulation. If I was, given that I am using an fpga
for this, I'd take a different approach. Get some use out of those SERDES'
after all. :)

So as said, for now it's just calculating the frequency. I'm just trying to
make sure that I am not overlooking something stupid...

regards,
Fred



- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 5:11 PM
Subj

Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-13 Thread Bob Camp
Hi

Computationally with an every Nth measure, you only have to keep track of
the time stamps. With both the cycle count and time stamps varying you have
two sets of data to keep up with. 

Be very careful talking about construction projects on the list, you will
quickly be declared "excessive bandwidth" in all caps.

Bob

-Original Message-
From: Tijd Dingen [mailto:tijddin...@yahoo.com] 
Sent: Friday, May 13, 2011 12:22 PM
To: Bob Camp; time-nuts@febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Now it is my turn for an "it depends". ;)


If by that you mean that it makes the bookkeeping for me the human easier,
then yes. It certainly is easier for me conceptually when I know it is every
Nth, and not just almost-but-not-quite-every-Nth.

But as far as the math is concerned, I do not see a bit of difference. Easy
math as in computationally cheap? For an ordinary least squares approach, as
far as I can tell the two varieties have the same computational cost. Either
that, or I am missing something.

"In an FPGA keep in mind that your PLL may be a significant source of
noise."

Heh, that can indeed be a significant source of noise. Which with the right
approach is not as big a problem as some around here may think. As in, I
suspect that is one of the reasons why those "fpga based DIY counter"
projects die so horribly around here. The advantage of an fpga is that you
can process a fair amount of data, so you can do some averaging to
compensate for some of the shortcomings. Certainly for a repeated process
(as with frequency measurement), but also for single shot applications (with
only 1 START/STOP).

Case in point for example the paper mentioned in this post:

http://www.febo.com/pipermail/time-nuts/2011-March/055240.html

I am doing something similar, and am getting similar results. Of course
still plenty of work to be done, but that is par for the course in hobby
country...

regards,
Fred




- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 6:02 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

The exactly every Nth cycle thing makes the math easy. Easy math means I can
do lots of samples. Lots of samples means better regression. Just how much
better depends on the type of noise.  

As long as you get the math right for your sample spacing, the result will
be ok.

In an FPGA keep in mind that your PLL may be a significant source of noise.

Enjoy!

Bob



-Original Message-
From: Tijd Dingen [mailto:tijddin...@yahoo.com] 
Sent: Friday, May 13, 2011 11:40 AM
To: Bob Camp; time-nuts-boun...@febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Hi Bob,

Well, for the moment it is as simple as calculating the frequency of a
reasonably stable frequency. Meaning it is not modulated, but it could very
well be just a cheap XO that is being measured. That sort of "reasonably
stable".

Not trying to recover modulation. If I was, given that I am using an fpga
for this, I'd take a different approach. Get some use out of those SERDES'
after all. :)

So as said, for now it's just calculating the frequency. I'm just trying to
make sure that I am not overlooking something stupid...

regards,
Fred



- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 5:11 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

As always with any real question, the answer is "that depends".

If you are going into a measurement process that wants well defined bins for
its tau, then it could be a problem.

If all you want is frequency, then the start and end sample *may* have all
the information you need in them. 

If you are trying to recover modulation, then both approaches have issues.
They just have different ones.  

It all depends on what you are trying to do. 

Bob

-Original Message-
From: time-nuts-boun...@febo.com [mailto:time-nuts-boun...@febo.com] On
Behalf Of Tijd Dingen
Sent: Friday, May 13, 2011 10:56 AM
To: time-nuts@febo.com
Subject: [time-nuts] Continuous timestamping reciprocal counter question

The counters of the continuous timestamping variety I've read about all
mention taking the Nth edge of the input signal. For example:

http://www.spectracomcorp.com/Support/HowCanWeHelpYou/Library/tabid/59/Default.aspx?EntryId=450&Command=Core_Download

In "Picture 5" on page 5 you see a bunch of data points that roughly
describe a straight line. Cycle number (of the input signal) on the x-axis,
timestamps on the y-axis. Now the question is this:

Will it also work when you get the timestamp of every almost-but-not-quite
Nth edge? I'd say yes, but who knows...

To clarify ... when I say "timestamp of edge N", I mean "the time stamp of
the pos

Re: [time-nuts] Continuous timestamping reciprocal counter question

2011-05-13 Thread Tijd Dingen
Now it is my turn for an "it depends". ;)


If by that you mean that it makes the bookkeeping easier for me, the human, then 
yes. It certainly is easier for me conceptually when I know it is every Nth, 
and not just almost-but-not-quite every Nth.

But as far as the math is concerned, I do not see a bit of difference. Easy 
math as in computationally cheap? For an ordinary least squares approach, as 
far as I can tell the two varieties have the same computational cost. Either 
that, or I am missing something.
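A minimal sketch of that point, assuming ideal noise-free timestamps (my own toy numbers, not from the thread): the closed-form OLS slope runs through exactly the same operations whether the cycle numbers step by exactly 100 or by almost-but-not-quite 100; only the x-values fed into the formula differ.

```python
# Closed-form ordinary least squares slope of timestamps vs. cycle numbers.
# The arithmetic is identical for exact every-Nth and almost-every-Nth
# spacing; only the x-values differ, not the computational cost.
def ols_slope(cycles, stamps):
    n = len(cycles)
    sx = sum(cycles)
    sy = sum(stamps)
    sxx = sum(c * c for c in cycles)
    sxy = sum(c * t for c, t in zip(cycles, stamps))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

f0 = 30e6                 # Hz, toy input signal
period = 1.0 / f0
regular = [100 * k for k in range(50)]               # exact every-100th edge
irregular = [100 * k + (k % 3) for k in range(50)]   # almost every-100th edge

# Noise-free timestamps: cycle c of the input occurs at c * period.
slope_r = ols_slope(regular, [c * period for c in regular])
slope_i = ols_slope(irregular, [c * period for c in irregular])
# Both slopes equal the input period, so 1/slope recovers 30 MHz either way.
```

Either way the fitted slope is the period per input cycle and its reciprocal is the frequency; irregular spacing just means the cycle numbers must be bookkept honestly.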

"In an FPGA keep in mind that your PLL may be a significant source of noise."

Heh, that can indeed be a significant source of noise. With the right approach, 
though, it is not as big a problem as some around here may think. In fact, I 
suspect that is one of the reasons why those "fpga based DIY counter" projects 
die so horribly around here. The advantage of an fpga is that you can process a 
fair amount of data, so you can do some averaging to compensate for some of the 
shortcomings. That certainly holds for a repeated process (as with frequency 
measurement), but also for single shot applications (with only 1 START/STOP).
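A toy numerical illustration of that averaging argument (my own sketch; the 50 ps per-timestamp jitter figure is an arbitrary assumption, not a measured number): with independent jitter on each timestamp, averaging M repeated measurements shrinks the rms error roughly as 1/sqrt(M).

```python
import random

random.seed(1)
SIGMA = 50e-12  # assumed per-timestamp jitter (e.g. PLL noise), arbitrary

def single_shot():
    # One timestamp's measurement error, modeled as white Gaussian noise.
    return random.gauss(0.0, SIGMA)

def averaged(m):
    # Average of m independent single-shot measurements.
    return sum(single_shot() for _ in range(m)) / m

def rms(vals):
    return (sum(v * v for v in vals) / len(vals)) ** 0.5

trials = 2000
rms_1 = rms([single_shot() for _ in range(trials)])
rms_100 = rms([averaged(100) for _ in range(trials)])
# rms_1 / rms_100 comes out close to sqrt(100) = 10.
```

This only holds while the jitter is uncorrelated between samples; correlated PLL wander does not average down this way.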

Case in point: the paper mentioned in this post:

http://www.febo.com/pipermail/time-nuts/2011-March/055240.html

I am doing something similar, and am getting similar results. Of course there is 
still plenty of work to be done, but that is par for the course in hobby country...

regards,
Fred




- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 6:02 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

The exactly every Nth cycle thing makes the math easy. Easy math means I can
do lots of samples. Lots of samples means better regression. Just how much
better depends on the type of noise.  

As long as you get the math right for your sample spacing, the result will
be ok.
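To put one number on the "lots of samples means better regression" point, here is a small simulation (my own illustration, assuming white phase noise on evenly spaced samples): for that noise type the OLS slope error shrinks like n**-1.5, because the slope variance is sigma^2 / sum((x - xbar)^2) and that denominator grows like n^3 as the record lengthens. Other noise types improve more slowly, which is the "depends on the type of noise" part.

```python
import random

random.seed(7)

def ols_slope(xs, ys):
    # Standard least-squares slope through (xs, ys).
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

def slope_rms_error(n, trials=300, sigma=1.0):
    # True line has slope 1; add white phase noise to each sample and
    # measure the rms error of the fitted slope over many trials.
    xs = list(range(n))
    errs = []
    for _ in range(trials):
        ys = [x + random.gauss(0.0, sigma) for x in xs]
        errs.append(ols_slope(xs, ys) - 1.0)
    return (sum(e * e for e in errs) / trials) ** 0.5

r10 = slope_rms_error(10)
r100 = slope_rms_error(100)
# 10x more samples improves the slope rms by roughly 10**1.5 ~ 32,
# much better than the sqrt(10) one might naively expect.
```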

In an FPGA keep in mind that your PLL may be a significant source of noise.

Enjoy!

Bob



-Original Message-
From: Tijd Dingen [mailto:tijddin...@yahoo.com] 
Sent: Friday, May 13, 2011 11:40 AM
To: Bob Camp; time-nuts-boun...@febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Hi Bob,

Well, for the moment it is as simple as calculating the frequency of a
reasonably stable frequency. Meaning it is not modulated, but it could very
well be just a cheap XO that is being measured. That sort of "reasonably
stable".

Not trying to recover modulation. If I was, given that I am using an fpga
for this, I'd take a different approach. Get some use out of those SERDES'
after all. :)

So as said, for now it's just calculating the frequency. I'm just trying to
make sure that I am not overlooking something stupid...

regards,
Fred



- Original Message -
From: Bob Camp 
To: 'Tijd Dingen' 
Cc: 
Sent: Friday, May 13, 2011 5:11 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

As always with any real question, the answer is "that depends".

If you are going into a measurement process that wants well defined bins for
its tau, then it could be a problem.

If all you want is frequency, then the start and end sample *may* have all
the information you need in them. 

If you are trying to recover modulation, then both approaches have issues.
They just have different ones.  

It all depends on what you are trying to do. 

Bob

-Original Message-
From: time-nuts-boun...@febo.com [mailto:time-nuts-boun...@febo.com] On
Behalf Of Tijd Dingen
Sent: Friday, May 13, 2011 10:56 AM
To: time-nuts@febo.com
Subject: [time-nuts] Continuous timestamping reciprocal counter question

The counters of the continuous timestamping variety I've read about all
mention taking the Nth edge of the input signal. For example:

http://www.spectracomcorp.com/Support/HowCanWeHelpYou/Library/tabid/59/Default.aspx?EntryId=450&Command=Core_Download

In "Picture 5" on page 5 you see a bunch of data points that roughly
describe a straight line. Cycle number (of the input signal) on the x-axis,
timestamps on the y-axis. Now the question is this:

Will it also work when you get the timestamp of every almost-but-not-quite
Nth edge? I'd say yes, but who knows...

To clarify ... when I say "timestamp of edge N", I mean "the time stamp of
the positive going edge of the Nth cycle of the input signal". But the
former is a bit shorter. ;)

Assume an input signal of 30 MHz. Say you decide to get every 100th edge of
this signal, then you would end up with 300k timestamps every second. These
timestamps will define a straight line with positive slope. Find the slope,
and you have the frequency. And now for the "what if"
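The arithmetic above can be checked with a quick sketch (my own illustration, using ideal noise-free timestamps):

```python
f_in = 30e6        # Hz, input signal from the example above
n_dec = 100        # timestamp every 100th positive edge

stamps_per_second = f_in / n_dec          # 300000.0 timestamps each second
# Ideal timestamps: sample k of the decimated stream lands at k*n_dec/f_in.
ts = [k * n_dec / f_in for k in range(1000)]
# For noise-free data the endpoint slope equals the fitted slope:
slope = (ts[-1] - ts[0]) / (len(ts) - 1)  # seconds per n_dec cycles
freq = n_dec / slope                      # recovers 30 MHz
```

With real, noisy timestamps the endpoint slope above would be replaced by the least-squares fit over all 300k points per second.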

What if the implementation does not always allow for getting the exact Nth
edge? What the implementation allows, however, is to /aim/ for getting
numero