Re: [time-nuts] PLL Math Question

2014-03-14 Thread Dan Kemppainen
Bob,

Just been reading along, enjoying the conversation...

I've written a lot of hand-coded assembly, some of it very similar to
what you are doing here now (although for a different processor family).
I really didn't want to switch to C for anything, since the generated
code is 'bloated'.

That being said, I've been writing a bunch of code for the
PIC24/dsPIC33s lately with the C compiler that Microchip provides.
Granted, the disassembly listing is frustrating, in that I could write
faster code by hand; however, I couldn't write that code as quickly as
I'm doing it now in C. I'm calculating five 12th-order polynomial
equations in IEEE 754 floats, very quickly. Most of the time the
processor is ticking along at 32 kHz drawing only 1 mA!
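One common way to keep such polynomial work cheap is Horner's rule: a
12th-order polynomial costs only 12 multiply-adds per evaluation. A generic
C sketch (not Dan's code; the coefficient array is a placeholder):

    /* Hypothetical sketch: evaluate a 12th-order polynomial with Horner's
       rule -- 12 multiplies and 12 adds per call.  c[] holds the 13
       coefficients, with c[0] the constant term. */
    float poly12(const float c[13], float x)
    {
        float y = c[12];
        for (int i = 11; i >= 0; i--)
            y = y * x + c[i];    /* y = (((c[12]*x + c[11])*x + ...) + c[0] */
        return y;
    }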

The bottom line is that the new chips are very impressive in their math
capability. For what you are doing here, you may well be served by a
simple program in C. The PIC24s don't cost much, and many may cost less
than the ones you are using now! The extra memory and flash make them
really nice.

I realize that the time spent learning the new platform may be a pain.
However, the long-term results may make it worth it. (Being able to spit
the results of a floating-point calculation to an ASCII terminal is
really nice!)

Anyway, carry on! Just my $.02 here! :)

Dan



On 3/14/2014 11:42 AM, time-nuts-requ...@febo.com wrote:
> OK, gotcha.  But, this is in assembler, and anything wider than 3 bytes 
> becomes tedious.  Also, anything larger than 3 bytes starts using a lot of 
> space in a hurry.  Three byte fields allow me to use 256ths for gain and take 
> the result directly from the two high order bytes without any shifting.  And 
> as I mentioned to Hal in a separate post: when I hand-coded the exponential 
> averager the results were actually good.  I was forgetting to convert to 
> decimal to compare values to the decimal run.  For example: 0x60 doesn't look 
> like 0.375 until you convert to decimal and divide by 256.
> 
> This has been most informative and certainly gives me more options.
> 
> Bob


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 14/03/14 00:39, Bob Camp wrote:

Hi

Either grab a math pack (there are several for the PIC) or go to C.

Timing at the Time Nuts level is about precision. We need *lots* of digits past 
the binary point :)


Indeed. Throwing bits at the problem is relatively cheap today. 
Besides, you don't process it that often, so you can afford to spend a 
few extra cycles to buy design margin.


Remember, you want the internal resolution to have many bits below the 
single-shot resolution. Lack of bits in frequency resolution tends to 
get you doing bang-bang regulation to approximate the frequency. With 
sufficient resolution, other noise sources will help average out that 
quantization step. Bang-bang regulation naturally gives you an idle 
tone, whose frequency and amplitude both depend on the quantization 
step.


Cheers,
Magnus



Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 13/03/14 13:57, Jim Lux wrote:

On 3/12/14 10:06 PM, Chris Albertson wrote:

On Wed, Mar 12, 2014 at 9:13 PM, Daniel Mendes 
wrote:

This is a FIR x IIR question...

moving average = FIR filter with all N coefficients equalling 1/N
exponential average = using a simple rule to make an IIR filter


Isn't his "moving average" just a convolution of the data with a box car
function?  That treats the last N samples equally and is likely not
optimal.   I think what he wants is a low-pass filter.


A moving average (or rectangular impulse response) *is* a low pass
filter.  The frequency response is of the general sin(x)/x sort of
shape, and it has deep nulls, which can be convenient (imagine a moving
average covering 1/60th of a second, in the US.. it would have strong
nulls at the line frequency and harmonics)


This method is like

the hockey player who skates to where the puck was about 5 seconds
ago.  It
is not the best way to play the game.  He will in fact NEVER get to the
puck if the puck is moving; he is doomed to chase it forever.   Same here:
you will never get there.


That distinction is different than the filter IIR vs FIR thing. Filters
are causal, and the output always lags the input in time.  If you want
to predict where you're going to be you need a different kind of model
or system design.  Something like a predictor corrector, for instance.





But if you have a long time constant on the control loop you have in
effect
the kind of "averaging" you want, one that tosses out erratic noisy data.
A PID controller uses only three memory locations and is likely the best
solution.


PID is popular, having been copiously analyzed and used over the past
century. It's also easy to implement in analog circuitry.

And, there's long experience in how to empirically adjust the gain
knobs, for some kinds of controlled plant.

However, I don't know that the simplicity justifies its use in modern
digital implementations: very, very few applications are so processor or
gate limited that they couldn't use something with better performance.

If you are controlling a physical system with dynamics that are well
suited to a PID (e.g. a motor speed control) then yes, it's the way to
go.  But if PIDs were so wonderful, then there wouldn't be all sorts of
"auto-tuning" PIDs out there (which basically complexify things by
trying to estimate the actual plant model function, and then optimize
the P,I, and D coefficients).

PID controllers don't do well when there's a priori side knowledge
available.  For instance, imagine a thermostat kind of application where
you are controlling the temperature of an object outside in the sun. You
could try to control the temperature solely by measuring the temp of the
thing controlled, and comparing it against the setpoint (a classic PID
sort of single variable loop).  Odds are, however, that if you had
information about the outside air temperature and solar loading, you
could hold the temperature a lot more tightly and smoothly, because you
could use the side information (temp and sun) to anticipate the
heating/cooling demands.

This is particularly the case where the controlled thing has long time
lags, but low inertia/mass.


Extending a PI or PID loop to incorporate aiding signals isn't hard. In 
fact that's what happens in GPS receivers. Properly done, aiding signals 
will reduce the phase errors due to loop stress and allow for even 
tighter bandwidth.


Each GPS channel in a receiver contains a carrier and a code loop. The 
carrier loop aids the code loop in frequency tracking. It is also common 
to have both a frequency and phase detector and then aid the normal 
phase-driven PI loop with a frequency detector hint.


A nice aspect of frequency aiding is that it has a strong pull-in 
property when the input signal and the loop are far apart in frequency, 
which is exactly when the phase lock-in is very weak. As the pull-in 
progresses, the frequency aiding gets weaker while the phase locking 
becomes stronger, as the Bessel polynomial for the beat frequency gets 
higher. Eventually the phase locking takes over in strength and the 
frequency aiding essentially dismisses itself. This is a great example 
of how a classical loop can be extended without getting into very 
esoteric systems.
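To make the shape of such an aided loop concrete, here is a minimal C sketch
(not taken from any particular receiver; the phase and frequency detector
outputs and the gains kp, ki, kf are assumed to come from the surrounding
design). The frequency term feeds the same integrator the PI loop uses, so it
dominates far from lock and fades away as the beat disappears:

    /* Hedged sketch of a frequency-aided PI loop filter. */
    static double integ = 0.0;          /* shared integrator state */

    double aided_pi_update(double phase_err, double freq_err,
                           double kp, double ki, double kf, double dt)
    {
        integ += (ki * phase_err + kf * freq_err) * dt;  /* frequency aiding */
        return kp * phase_err + integ;   /* control word to the NCO / EFC */
    }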




We have to define "best".  I'd define it as "the error integrated over
time
is minimum".  I think PiD gets you that and it is also easy to program
and
uses very little memory.  Just three values (1) the error, (2) the
total of
all errors you've seen (in a perfect world this is zero because the
positive and negative errors cancel) and (3) the rate of change in the
error (is it getting bigger or smaller and how quickly?)  Multiply
each of
those numbers by a constant and that is the correction to the output
value.
It's maybe 6 or 10 lines of C code.   The "magic" is finding the
right
values for the constants.


And that magic is sometimes a lot of work.

And practical PID applications also need things like integrator reset to 
prevent wind-up issues, and clamps, or variable gains.

Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 13/03/14 07:35, Daniel Mendes wrote:

Em 13/03/2014 01:35, Bob Stewart escreveu:

Hi Daniel,

re: FIR vs IIR


I'm not a DSP professional, though I do have an old Smiths, and I've
read some of it.  So, could you give me some idea what the FIR vs IIR
question means on a practical level for this application?  I can see
that the MA is effective and easy to code, but takes up memory space I
eventually may not have.  Likewise, I can see that the EA is hard to
code for the general case, but takes up little memory.  Any thoughts
would be appreciated unless this is straying too far from time-nuts
territory.




FIR = Finite Impulse Response

It means that if you enter an impulse into your filter, after some time
the response completely vanishes. Let's have an example:

your filter has coefficients 0.25 ; 0.25 ; 0.25 ; 0.25   (a moving
average of length 4)

instead we could define this filter by the difference equation:

y[n] = 0.25x[n] + 0.25x[n-1] + 0.25x[n-2] + 0.25x[n-3]   (notice that
y[n] can be computed by looking only at present and past values of x)

your data is  0;  0; 1; 0; 0; 0; 0 ... (there's an impulse of
amplitude 1 at n = 2)

your output will be:

0; 0; 0.25; 0.25; 0.25; 0.25; 0; 0; 0; . (this is the convolution
between the coefficients and the input data)

after 4 samples (the length of the filter) your output completely
vanishes. This means that all FIR filters are BIBO stable (BIBO =
bounded input, bounded output: as long as the input values are finite,
the output never diverges).

IIR = Infinite Impulse Response

It means that if you enter an impulse into your filter, the response
never completely dies out (though it can converge toward zero). Let's
have an example:

your filter cannot be described by coefficients anymore because it has
an infinite response, so we need a difference equation. Let's use the one
provided before for the exponential smoothing with a_avg = 1/8:

x_avg = x_avg + (x - x_avg) * 1/8;

this means:

y[n] = y[n-1] + (x[n] - y[n-1]) * 1/8

y[n] = y[n-1] - 1/8*y[n-1] + 1/8*x[n]

y[n] = 7/8*y[n-1] + 1/8*x[n]

you can see why this is different from the other filter: now the output
is a function not only of the present and past inputs, but also of the
past output(s).

Lets try the same input as before:

your data is  0;  0; 1; 0; 0; 0; 0 ... (there's an impulse of
amplitude 1 at n = 2)

your output will be:

y[0] = 0 = 7/8*y[-1] + 1/8*x[0]  (I'm assuming that y[-1] = 0, and x[0]
is zero)
y[1] = 0 = 7/8*y[0] + 1/8*x[1]
y[2] = 1/8 = 7/8*y[1] + 1/8*x[2] (x[2] = 1)
y[3] = 7/64 = 7/8*y[2] + 1/8*x[3] (x[3] = 0) = 0.109
y[4] = 49/512 = 7/8*y[3] + 1/8*x[4] (x[4] = 0) = 0.095
y[5] = 343/4096 = 7/8*y[4] + 1/8*x[5] (x[5] = 0) = 0.084

You can see that without truncation this will never go to zero again.

Usually you can get more attenuation with an IIR filter than with a FIR
filter of the same computational complexity, but you need to take care
about stability and truncation. Well, I'll just copy here the relevant
part from Wikipedia about advantages and disadvantages:


Advantages and disadvantages

The main advantage digital IIR filters have over FIR filters is their
efficiency in implementation, in order to meet a specification in terms
of passband, stopband, ripple, and/or roll-off. Such a set of
specifications can be accomplished with a lower-order (Q in the above
formulae) IIR filter than would be required for an FIR filter meeting
the same requirements. If implemented in a signal processor, this
implies correspondingly fewer calculations per time step; the
computational savings is often of a rather large factor.

On the other hand, FIR filters can be easier to design, for instance, to
match a particular frequency response requirement. This is particularly
true when the requirement is not one of the usual cases (high-pass,
low-pass, notch, etc.) which have been studied and optimized for analog
filters. Also, FIR filters can easily be made linear phase (constant
group delay vs frequency), a property that is not easily met using IIR
filters, and then only as an approximation (for instance with the
Bessel filter). Another issue regarding digital IIR filters is the
potential for limit-cycle behavior when idle, due to the feedback
system in conjunction with quantization.


You need to understand that the PI loop already is an IIR filter in 
itself, and you need to understand what the averager you add inside that 
loop does to the loop properties. Generic discussions of IIR vs FIR do 
not cut it. If you do a FIR averager then you need to consider what its 
poles and zeros (yes, it has both) do inside the PI loop.


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

Hi Bob,

On 12/03/14 23:16, Bob Stewart wrote:

"x_avg = x_avg + (x - x_avg) * a_avg;"

Hi again Magnus,

In fact, I just post-processed some data using that formula in perl.  It looks 
great, and will indeed save me code and memory space.  And, it can be a user 
variable, rather than hard-coded.  Thanks for the heads up!


Proven in battle, dead easy to code, well understood. Happy to share.

Cheers,
Magnus



Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 12/03/14 20:25, Hal Murray wrote:


mag...@rubidium.dyndns.org said:

Exponential averager takes much less memory. Consider this code:
x_avg = x_avg + (x - x_avg) * a_avg;
Where a_avg is the time-constant control parameter.


Also note that if a_avg is a power of 2, you can do it all with shifts rather
than multiplies.


Indeed. For many purposes that kind of resolution suffices.


Note that the shift is to the right which drops bits.  That suggests that you 
might want to work with x scaled relative to the raw data samples.  Consider 
a_avg to be 1/8, or a shift right 3 bits.  Suppose x_avg is 0 and you get a 
string of x samples of 2.  The shift throws away the 2 so x_avg never changes.


Indeed. The form in which I wrote it above makes this consequence easy 
to see.


It is always good to consider the consequences of bit width, and scaling 
factors for filters make you require more of it. Some problems you solve 
by just throwing more bits of resolution/headroom at them.


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

Hi Bob,

On 12/03/14 19:26, Bob Stewart wrote:

Hi Magnus,

Thanks very much for this response!  It will be very easy to add the 
exponential averager to my code and do a comparison to the moving average.  I 
have no experience with PI/PID.  I'll have to look over the literature I have 
on them and relate that to what I'm controlling.

It should be mentioned that I'm more interested in the adventure than in just 
copying someone else's code or formulae and pumping this out.  I have an idea 
of how I want to do this and...


The effect of additional filtering can be a little tricky to analyze 
sometimes, but the exponential averager, the normal one-pole low-pass 
filter, is well understood when its cut-off frequency is well 
above the normal PI-loop bandwidth. K*4 seems to pop up in my head.


It's a hint which comes from the experience of others as well as my own.
You can also look into the Stanford Research PRS-10 manual; that unit 
optionally has such a filter in its PPS-slaving mode.


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Hal Murray

li...@rtty.us said:
> Timing at the Time Nuts level is about precision.

What's the term for a time-nut that's trying to be not-very-nutty?

--

b...@evoria.net said:
> includes a 10-bit PWM dithered to 14 bits

When you get it all working, that's going to be one of the weak links, at 
least for some applications.  As far as I can see, your only application is 
entertaining Bob, so it won't be a problem for a while.

The problem is that there is a lot of low frequency noise in that sort of 
signal, and it's hard to filter out with typical analog components because 
the frequency is so low.  On a spectrum analyzer, they turn into spurs.

The typical PWM is on for x counts, then off for N-x counts.  You can make 
the spectrum easier to filter if you distribute the on bits throughout the N 
counts (rather than clumping them together).  Sometimes you can do that 
easily with a synchronous serial setup.  You need the "synchronous" vs 
"asynchronous" because the async mode puts in start/stop bits that you don't 
want.  That may not work with a PIC, but I've used it on an ARM.  We just 
set up the bits in memory and turned on a DMA channel.
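As an illustration of the "distribute the on bits" idea (a hypothetical
sketch, not Hal's code): a simple accumulator spreads x ones as evenly as
possible over an N-slot pattern, which can then be clocked out by a
synchronous serial port or a DMA channel.

    /* Spread x 'on' slots evenly across an N-slot PWM pattern instead of
       one contiguous on-block; exactly x of the N entries end up set. */
    void spread_pwm(unsigned char pattern[], int N, int x)
    {
        int acc = 0;
        for (int i = 0; i < N; i++) {
            acc += x;
            if (acc >= N) {              /* time for another 'on' slot */
                acc -= N;
                pattern[i] = 1;
            } else {
                pattern[i] = 0;
            }
        }
    }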

If you really get sucked into time-nuttery, you will upgrade the PIC to an 
ARM and see how good you can make things.

-- 
These are my opinions.  I hate spam.





Re: [time-nuts] PLL Math Question

2014-03-13 Thread Bob Camp
Hi

Either grab a math pack (there are several for the PIC) or go to C.

Timing at the Time Nuts level is about precision. We need *lots* of digits past 
the binary point :)

Bob



On Mar 13, 2014, at 7:19 PM, Bob Stewart  wrote:

> OK, gotcha.  But, this is in assembler, and anything wider than 3 bytes 
> becomes tedious.  Also, anything larger than 3 bytes starts using a lot of 
> space in a hurry.  Three byte fields allow me to use 256ths for gain and take 
> the result directly from the two high order bytes without any shifting.  And 
> as I mentioned to Hal in a separate post: when I hand-coded the exponential 
> averager the results were actually good.  I was forgetting to convert to 
> decimal to compare values to the decimal run.  For example: 0x60 doesn't look 
> like 0.375 until you convert to decimal and divide by 256.
> 
> This has been most informative and certainly gives me more options.
> 
> Bob
> 
> 
> 
> 
> 
>> 
>> From: Chris Albertson 
>> To: Bob Stewart ; Discussion of precise time and frequency 
>> measurement  
>> Sent: Thursday, March 13, 2014 5:42 PM
>> Subject: Re: [time-nuts] PLL Math Question
>> 
>> 
>> 
>> You don't really shift so much as just change the way you think about it.   
>> The way to think about it is not that you have "16ths" but that you have the 
>> "binary point" four places over.   It works just like a decimal point.  If 
>> you multiply two numbers, each with four places to the right of the point, 
>> you now have eight places to the right.  You can shift it or not.  If you 
>> use 64-bit "longs" you end up not having to shift so much because those can 
>> carry up to about 32 binary places.
>> 
>> 
>> 
>> 
>> On Thu, Mar 13, 2014 at 12:10 PM, Bob Stewart  wrote:
>> 
>> Dennis,
>>> 
>>> I just realized that I could do the math in sixteenths.  So, for 7/16ths 
>>> multiply by 7 before shifting (i.e. dividing) and rounding.  That would 
>>> probably give enough granularity.  I'll have to think about it.  It does 
>>> open new doors.
>>> 
>>> thanks,
>>> 
>>> Bob
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> 
>>>> From: Dennis Ferguson 
>>>> To: Discussion of precise time and frequency measurement 
>>>> 
>>>> Cc: Hal Murray 
>>>> Sent: Thursday, March 13, 2014 1:58 PM
>>> 
>>>> Subject: Re: [time-nuts] PLL Math Question
>>>> 
>>>> 
>>> 
>>>> Note that you can't do fixed-point computations exactly the same way
>>>> you would do it in floating point, you often need to rearrange the 
>>>> equations
>>>> a bit.  You can usually find a rearrangement which provides equivalent
>>>> results, however.  Let's define an extra variable, x_sum, where
>>>> 
>>>> x_avg = x_sum * a_avg;
>>>> 
>>>> The equation above can then be rewritten in terms of x_sum, i.e.
>>>> 
>>>> x_sum = x_sum * (1 - a_avg) + x;
>>>> 
>>>> With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
>>>> it right 3 bits (you might want to round before the shift) and adding x.
>>>> The new value of x_avg can be computed from the new value of x_sum with a
>>>> shift (you might want to round that too), or you could pretend that x_sum
>>>> is a fixed-point number with the decimal point 3 bits from the right.
>>>> In either case x_sum carries enough bits that you don't lose precision.
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
>> 
>> -- 
>> 
>> Chris Albertson
>> Redondo Beach, California 
>> 
>> 



Re: [time-nuts] PLL Math Question

2014-03-13 Thread Bob Stewart
OK, gotcha.  But, this is in assembler, and anything wider than 3 bytes becomes 
tedious.  Also, anything larger than 3 bytes starts using a lot of space in a 
hurry.  Three byte fields allow me to use 256ths for gain and take the result 
directly from the two high order bytes without any shifting.  And as I 
mentioned to Hal in a separate post: when I hand-coded the exponential averager 
the results were actually good.  I was forgetting to convert to decimal to 
compare values to the decimal run.  For example: 0x60 doesn't look like 0.375 
until you convert to decimal and divide by 256.
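One way to read that scheme in C-like form (a guess at the layout, not Bob's
assembler; it assumes an arithmetic right shift for negative values): keep
the average in a 24-bit accumulator scaled by 256, express the gain in
256ths, and take the result from the top two of the three bytes.

    #include <stdint.h>

    /* Hypothetical sketch of a 3-byte exponential averager with the gain
       in 256ths (e.g. 0x60 = 96/256 = 0.375).  acc holds value*256. */
    static int32_t acc;                      /* 24-bit value in a 32-bit int */

    int16_t ea_update(int16_t x, uint8_t gain_256ths)
    {
        int32_t err = (int32_t)x * 256 - acc;                 /* scale input */
        acc += (int32_t)(((int64_t)err * gain_256ths) >> 8);  /* gain = g/256 */
        return (int16_t)(acc >> 8);          /* high two of the three bytes */
    }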

This has been most informative and certainly gives me more options.

Bob





>
> From: Chris Albertson 
>To: Bob Stewart ; Discussion of precise time and frequency 
>measurement  
>Sent: Thursday, March 13, 2014 5:42 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>
>You don't really shift so much as just change the way you think about it.   
>The way to think about it is not that you have "16ths" but that you have the 
>"binary point" four places over.   It works just like a decimal point.  If 
>you multiply two numbers, each with four places to the right of the point, 
>you now have eight places to the right.  You can shift it or not.  If you use 
>64-bit "longs" you end up not having to shift so much because those can carry 
>up to about 32 binary places.
>
>
>
>
>On Thu, Mar 13, 2014 at 12:10 PM, Bob Stewart  wrote:
>
>Dennis,
>>
>>I just realized that I could do the math in sixteenths.  So, for 7/16ths 
>>multiply by 7 before shifting (i.e. dividing) and rounding.  That would 
>>probably give enough granularity.  I'll have to think about it.  It does open 
>>new doors.
>>
>>thanks,
>>
>>Bob
>>
>>
>>
>>
>>
>>>________
>>> From: Dennis Ferguson 
>>>To: Discussion of precise time and frequency measurement 
>>>Cc: Hal Murray 
>>>Sent: Thursday, March 13, 2014 1:58 PM
>>
>>>Subject: Re: [time-nuts] PLL Math Question
>>>
>>>
>>
>>>Note that you can't do fixed-point computations exactly the same way
>>>you would do it in floating point, you often need to rearrange the equations
>>>a bit.  You can usually find a rearrangement which provides equivalent
>>>results, however.  Let's define an extra variable, x_sum, where
>>>
>>>    x_avg = x_sum * a_avg;
>>>
>>>The equation above can then be rewritten in terms of x_sum, i.e.
>>>
>>>    x_sum = x_sum * (1 - a_avg) + x;
>>>
>>>With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
>>>it right 3 bits (you might want to round before the shift) and adding x.
>>>The new value of x_avg can be computed from the new value of x_sum with a
>>>shift (you might want to round that too), or you could pretend that x_sum
>>>is a fixed-point number with the decimal point 3 bits from the right.
>>>In either case x_sum carries enough bits that you don't lose precision.
>>>
>>>
>>
>>
>
>
>
>-- 
>
>Chris Albertson
>Redondo Beach, California 
>
>


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Chris Albertson
You don't really shift so much as just change the way you think about it.
The way to think about it is not that you have "16ths" but that you have the
"binary point" four places over.   It works just like a decimal point.  If
you multiply two numbers, each with four places to the right of the point,
you now have eight places to the right.  You can shift it or not.  If
you use 64-bit "longs" you end up not having to shift so much because
those can carry up to about 32 binary places.


On Thu, Mar 13, 2014 at 12:10 PM, Bob Stewart  wrote:

> Dennis,
>
> I just realized that I could do the math in sixteenths.  So, for 7/16ths
> multiply by 7 before shifting (i.e. dividing) and rounding.  That would
> probably give enough granularity.  I'll have to think about it.  It does
> open new doors.
>
> thanks,
>
> Bob
>
>
>
>
>
> >
> > From: Dennis Ferguson 
> >To: Discussion of precise time and frequency measurement <
> time-nuts@febo.com>
> >Cc: Hal Murray 
> >Sent: Thursday, March 13, 2014 1:58 PM
> >Subject: Re: [time-nuts] PLL Math Question
> >
> >
> >Note that you can't do fixed-point computations exactly the same way
> >you would do it in floating point, you often need to rearrange the
> equations
> >a bit.  You can usually find a rearrangement which provides equivalent
> >results, however.  Let's define an extra variable, x_sum, where
> >
> >x_avg = x_sum * a_avg;
> >
> >The equation above can then be rewritten in terms of x_sum, i.e.
> >
> >x_sum = x_sum * (1 - a_avg) + x;
> >
> >With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
> >it right 3 bits (you might want to round before the shift) and adding x.
> >The new value of x_avg can be computed from the new value of x_sum with a
> >shift (you might want to round that too), or you could pretend that x_sum
> >is a fixed-point number with the decimal point 3 bits from the right.
> >In either case x_sum carries enough bits that you don't lose precision.
> >
> >
>



-- 

Chris Albertson
Redondo Beach, California


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Bob Stewart
Dennis,

I just realized that I could do the math in sixteenths.  So, for 7/16ths 
multiply by 7 before shifting (i.e. dividing) and rounding.  That would probably 
give enough granularity.  I'll have to think about it.  It does open new doors.

thanks,

Bob





>
> From: Dennis Ferguson 
>To: Discussion of precise time and frequency measurement  
>Cc: Hal Murray  
>Sent: Thursday, March 13, 2014 1:58 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>Note that you can't do fixed-point computations exactly the same way
>you would do it in floating point, you often need to rearrange the equations
>a bit.  You can usually find a rearrangement which provides equivalent
>results, however.  Let's define an extra variable, x_sum, where
>
>    x_avg = x_sum * a_avg;
>
>The equation above can then be rewritten in terms of x_sum, i.e.
>
>    x_sum = x_sum * (1 - a_avg) + x;
>
>With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
>it right 3 bits (you might want to round before the shift) and adding x.
>The new value of x_avg can be computed from the new value of x_sum with a
>shift (you might want to round that too), or you could pretend that x_sum
>is a fixed-point number with the decimal point 3 bits from the right.
>In either case x_sum carries enough bits that you don't lose precision.
>
>


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Dennis Ferguson

On 12 Mar, 2014, at 23:08 , Hal Murray  wrote:
> b...@evoria.net said:
>> In the moving averages I'm doing, I'm saving the last bit to be shifted out
>> and if it's a 1 (i.e. 0.5) I increase the result by 1. 
> 
> That's just rounding up at an important place.  It's probably a good idea, 
> but doesn't cover the area I was trying to point out.  Let me try again...
> 
> Suppose you are doing:
>  x_avg = x_avg + (x - x_avg) * a_avg;
> 
> For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to be 
> 1/8.  That's a right shift by 3 bits.  I don't think there is anything magic 
> about shifting, but that makes a particular case easy to spot and discuss.
> 
> Suppose x_avg is 0 and x has been 0 for a while.  Everything is stable.  Now 
> change x to 2.  (x - x_avg) is 2, the shift kicks it off the edge, so x_avg 
> doesn't change.  (It went 2 bits off, so your round up doesn't catch it.)  
> The response to small steps is to ignore them.

Note that you can't do fixed-point computations exactly the same way
you would do it in floating point, you often need to rearrange the equations
a bit.  You can usually find a rearrangement which provides equivalent
results, however.  Let's define an extra variable, x_sum, where

x_avg = x_sum * a_avg;

The equation above can then be rewritten in terms of x_sum, i.e.

x_sum = x_sum * (1 - a_avg) + x;

With an a_avg of 1/8 you'll instead be multiplying x_sum by 7, shifting
it right 3 bits (you might want to round before the shift) and adding x.
The new value of x_avg can be computed from the new value of x_sum with a
shift (you might want to round that too), or you could pretend that x_sum
is a fixed-point number with the decimal point 3 bits from the right.
In either case x_sum carries enough bits that you don't lose precision.
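In C, with a_avg = 1/8, that rearrangement might look like the following
sketch (variable names mirror the text; it assumes an arithmetic right shift
if the values can go negative):

    #include <stdint.h>

    static int32_t x_sum;            /* carries 3 extra fraction bits */

    int32_t ea_step(int32_t x)
    {
        x_sum = ((x_sum * 7 + 4) >> 3) + x;   /* x_sum*(7/8), rounded, plus x */
        return (x_sum + 4) >> 3;              /* x_avg = x_sum * a_avg */
    }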

Dennis Ferguson


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Bob Stewart
Hi Jim,

Thanks for your thoughts.  Perhaps there are a few things that I know about my 
particular system that have been discounted.  I have mentioned them in passing, 
but haven't collected them coherently for this thread.  It's an 8-bit PIC, thus 
floating point calculations have to be "improvised", and memory is limited to 
4096 total instructions with 512 bytes of variable space.  I'm using a nav 
receiver at the moment (no sawtooth correction), and the 1PPS is pretty noisy.  
The hardware is a fixed quantity, which includes a 10-bit PWM dithered to 14 
bits.  There is a voltage divider on the EFC to limit the range, and a 
thermistor for thermal correction.  Think of it as a contest or a challenge.  
The only prize is success.

The noisy 1PPS has been a big concern, and the reason for the moving 
average/low-pass filter.  I have considered using a slotted-disk instead, or at 
least a smaller MA with a slotted-disk.  The OCXO has shown itself to be 
extremely stable in both phase and frequency.  Without eliminating the noise 
from the 1PPS, the OCXO would be needlessly moved around thus passing that 
noise through.  Given the stability of the OCXO, I don't think that being 32 
seconds behind (two 16 second moving averages at the moment) creates a problem. 
 I do actually have a usable integrator designed for a  PI system, but I'm 
trying to avoid using it.  My preference is a state machine implementation.

My approach to this is along the lines of 
1. Warmup 
2. Check and adjust the frequency "close enough"
3. If there is a frequency adjustment, decide when to (re)enable phase control
4. Compare the phase angle to the setpoint to discover which way to herd the 
phase
5. Use the smoothed slope and distance from setpoint to control the gain of any 
change applied to the DAC
6. Adjust for temperature change
7. Rinse and repeat from 2.


Perhaps this is so close to PI that it makes no difference that I'm not using a 
transliteration of Wescott's code?  I really do not relish the idea of 
implementing floating point operations in 8-bit unsigned characters on someone 
else's control code on this PIC if I can get it to work properly my way.
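Purely as an illustration of that style (hypothetical names, not Bob's code;
the hooks are assumed to be provided by the rest of the firmware), the steps
above map naturally onto a once-per-PPS state machine:

    /* Placeholder hooks for the actual measurements and actions. */
    int  warmup_done(void);
    int  freq_close_enough(void);
    int  ready_for_phase(void);
    int  freq_step_needed(void);
    void adjust_frequency(void);
    void steer_phase(void);
    void compensate_temperature(void);

    enum gpsdo_state { WARMUP, FREQ_TRIM, PHASE_WAIT, PHASE_HOLD };
    static enum gpsdo_state state = WARMUP;

    void on_pps(void)                /* run once per 1PPS edge */
    {
        switch (state) {
        case WARMUP:                 /* 1. wait out the OCXO warmup */
            if (warmup_done()) state = FREQ_TRIM;
            break;
        case FREQ_TRIM:              /* 2. pull frequency "close enough" */
            adjust_frequency();
            if (freq_close_enough()) state = PHASE_WAIT;
            break;
        case PHASE_WAIT:             /* 3. decide when to (re)enable phase control */
            if (ready_for_phase()) state = PHASE_HOLD;
            break;
        case PHASE_HOLD:             /* 4-5. herd the phase toward the setpoint */
            steer_phase();
            if (freq_step_needed()) state = FREQ_TRIM;  /* 7. repeat from 2 */
            break;
        }
        compensate_temperature();    /* 6. adjust for temperature change */
    }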


Bob




>
> From: Jim Lux 
>To: time-nuts@febo.com 
>Sent: Thursday, March 13, 2014 7:57 AM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>On 3/12/14 10:06 PM, Chris Albertson wrote:
>> On Wed, Mar 12, 2014 at 9:13 PM, Daniel Mendes  wrote:
>>> This is a FIR x IIR question...
>>> 
>>> moving average = FIR filter with all N coefficients equalling 1/N
>>> exponential average = using a simple rule to make an IIR filter
>> 
>> Isn't his "moving average" just a convolution of the data with a box car
>> function?  That treats the last N samples equally and is likely not
>> optimal.   I think what he wants is a low-pass filter.
>
>A moving average (or rectangular impulse response) *is* a low pass filter.  
>The frequency response is of the general sin(x)/x sort of shape, and it has 
>deep nulls, which can be convenient (imagine a moving average covering 1/60th 
>of a second, in the US.. it would have strong nulls at the line frequency and 
>harmonics)
>
>
>This method is like
>> the hockey player who skates to where the puck was about 5 seconds ago.  It
>> is not the best way to play the game.  He will in fact NEVER get to the
>> puck if the puck is moving; he is doomed to chase it forever.   Same here:
>> you will never get there.
>
>That distinction is different than the filter IIR vs FIR thing. Filters are 
>causal, and the output always lags the input in time.  if you want to predict 
>where you're going to be you need a different kind of model or system design.  
>Something like a predictor corrector, for instance.
>
>
>


Re: [time-nuts] PLL Math Question

2014-03-13 Thread Jim Lux

On 3/12/14 10:06 PM, Chris Albertson wrote:

On Wed, Mar 12, 2014 at 9:13 PM, Daniel Mendes  wrote:

This is a FIR x IIR question...

moving average = FIR filter with all N coefficients equalling 1/N
exponential average = using a simple rule to make an IIR filter


Isn't his "moving average" just a convolution of the data with a box car
function?  That treats the last N samples equally and is likely not
optimal.   I think what he wants is a low-pass filter.


A moving average (or rectangular impulse response) *is* a low pass 
filter.  The frequency response is of the general sin(x)/x sort of 
shape, and it has deep nulls, which can be convenient (imagine a moving 
average covering 1/60th of a second, in the US.. it would have strong 
nulls at the line frequency and harmonics)
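Those nulls are easy to check numerically. A small stand-alone sketch (the
sample rate and N are made-up values, chosen so the boxcar spans 1/60 s)
evaluates the N-point moving-average magnitude response
|sin(pi*f*N/fs) / (N*sin(pi*f/fs))|:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        const double fs = 3840.0;        /* assumed sample rate */
        const int    N  = 64;            /* 64 samples = 1/60 s at this rate */
        const double freqs[] = { 10.0, 30.0, 60.0, 120.0, 180.0 };

        for (int i = 0; i < 5; i++) {
            double f   = freqs[i];
            double num = sin(PI * f * N / fs);
            double den = N * sin(PI * f / fs);
            printf("%6.1f Hz: |H| = %.6f\n", f, fabs(num / den));
        }
        return 0;                        /* 60, 120, 180 Hz come out as nulls */
    }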



This method is like

the hockey player who skates to where the puck was about 5 seconds ago.  It
is not the best way to play the game.  He will in fact NEVER get to the
puck if the puck is moving; he is doomed to chase it forever.   Same here:
you will never get there.


That distinction is different than the filter IIR vs FIR thing. Filters 
are causal, and the output always lags the input in time.  If you want 
to predict where you're going to be you need a different kind of model 
or system design.  Something like a predictor corrector, for instance.






But if you have a long time constant on the control loop you have in effect
the kind of "averaging" you want, one that tosses out erratic noisy data.
A PID controller uses only three memory locations and is likely the best
solution.


PID is popular, having been copiously analyzed and used over the past 
century. It's also easy to implement in analog circuitry.


And, there's long experience in how to empirically adjust the gain 
knobs, for some kinds of controlled plant.


However, I don't know that the simplicity justifies its use in modern 
digital implementations: very, very few applications are so processor or 
gate limited that they couldn't use something with better performance.


If you are controlling a physical system with dynamics that are well 
suited to a PID (e.g. a motor speed control) then yes, it's the way to 
go.  But if PIDs were so wonderful, then there wouldn't be all sorts of 
"auto-tuning" PIDs out there (which basically complexify things by 
trying to estimate the actual plant model function, and then optimize 
the P,I, and D coefficients).


PID controllers don't do well when there's a priori side knowledge 
available.  For instance, imagine a thermostat kind of application where 
you are controlling the temperature of an object outside in the sun. 
You could try to control the temperature solely by measuring the temp of 
the thing controlled, and comparing it against the setpoint (a classic 
PID sort of single variable loop).  Odds are, however, that if you had 
information about the outside air temperature and solar loading, you 
could hold the temperature a lot more tightly and smoothly, because you 
could use the side information (temp and sun) to anticipate the 
heating/cooling demands.


This is particularly the case where the controlled thing has long time 
lags, but low inertia/mass.




We have to define "best".  I'd define it as "the error integrated over time
is minimum".  I think PiD gets you that and it is also easy to program and
uses very little memory.  Just three values (1) the error, (2) the total of
all errors you've seen (in a perfect world this is zero because the
positive and negative errors cancel) and (3) the rate of change in the
error (is it getting bigger or smaller and how quickly?)  Multiply each of
those numbers by a constant and that is the correction to the output value.
It's maybe 6 or 10 lines of C code.   The "magic" is finding the right
values for the constants.


And that magic is sometimes a lot of work.

And practical PID applications also need things like integrator reset to 
prevent wind-up issues, and clamps, or variable gains.


PID, or PI, is, as you say, easy to code, and often a good first start, 
if you have a system with fast response, and lots of gain to work with. 
 It's like building circuits with an opamp: big gain bandwidth product 
makes it more like an ideal amplifier where the feedback components 
completely determine the circuit behavior. Put in hysteresis, or a time 
delay, and things start to not look so wonderful.





This is worth reading
PIDforDummies.html 





Re: [time-nuts] PLL Math Question

2014-03-13 Thread Hal Murray

albertson.ch...@gmail.com said:
> We have to define "best".  I'd define it as "the error integrated over time
> is minimum".  I think PiD gets you that and it is also easy to program and
> uses very little memory.  Just three values (1) the error, (2) the total of
> all errors you've seen (in a perfect world this is zero because the positive
> and negative errors cancel) and (3) the rate of change in the error (is it
> getting bigger or smaller and how quickly?)  Multiply each of those numbers
> by a constant and that is the correction to the output value. 

I think you are off by a factor of 2.  There are 6 parameters, 2 for each of 
3 channels.  Each channel has gain and time-constant.  There are separate 
channels/parameters/whatever-you-call-them for P, I, and D.



-- 
These are my opinions.  I hate spam.





Re: [time-nuts] PLL Math Question

2014-03-12 Thread Daniel Mendes

Em 13/03/2014 01:35, Bob Stewart escreveu:

Hi Daniel,

re: FIR vs IIR


I'm not a DSP professional, though I do have an old Smiths, and I've read some 
of it.  So, could you give me some idea what the FIR vs IIR question means on a 
practical level for this application?  I can see that the MA is effective and 
easy to code, but takes up memory space I eventually may not have.  Likewise, I 
can see that the EA is hard to code for the general case, but takes up little 
memory.  Any thoughts would be appreciated unless this is straying too far from 
time-nuts territory.




FIR = Finite Impulse Response

It means that if you enter an impulse into your filter, after some time 
the response completely vanishes. Let's have an example:


your filter has coefficients 0.25 ; 0.25 ; 0.25 ; 0.25   (a moving 
average of length 4)


instead we could define this filter by the difference equation:

y[n] = 0.25x[n] + 0.25x[n-1] + 0.25x[n-2] + 0.25x[n-3]   (notice that 
y[n] can be computed by looking only at present and past values of x)


your data is  0;  0; 1; 0; 0; 0; 0 ... (there's an impulse of 
amplitude 1 at n = 2)


your output will be:

0; 0; 0.25; 0.25; 0.25; 0.25; 0; 0; 0; . (this is the convolution 
between the coefficients and the input data)


after 4 samples (the length of the filter) your output completely 
vanishes. This means that all FIR filters are BIBO stable (BIBO = 
bounded input, bounded output: as long as the input values are finite, 
the output never diverges).
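For concreteness, the length-4 example can be checked with a few lines of
stand-alone code:

    #include <stdio.h>

    int main(void)
    {
        double buf[4] = { 0, 0, 0, 0 };          /* the last four inputs */
        double x[9]   = { 0, 0, 1, 0, 0, 0, 0, 0, 0 };

        for (int n = 0; n < 9; n++) {
            buf[n % 4] = x[n];                   /* overwrite the oldest sample */
            double y = (buf[0] + buf[1] + buf[2] + buf[3]) / 4.0;
            printf("y[%d] = %.2f\n", n, y);      /* 0, 0, then 0.25 four times, then 0 */
        }
        return 0;
    }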


IIR = Infinite Impulse Response

It means that if you enter an impulse into your filter, the response 
never completely dies out (though it can converge toward zero). Let's 
have an example:

your filter cannot be described by coefficients anymore because it has 
an infinite response, so we need a difference equation. Let's use the one 
provided before for the exponential smoothing with a_avg = 1/8:


x_avg = x_avg + (x - x_avg) * 1/8;

this means:

y[n] = y[n-1] + (x[n] - y[n-1]) * 1/8

y[n] = y[n-1] - 1/8*y[n-1] + 1/8*x[n]

y[n] = 7/8*y[n-1] + 1/8*x[n]

you can see why this is different from the other filter: now the output is 
a function not only of the present and past inputs, but also of the past 
output(s).

Lets try the same input as before:

your data is  0;  0; 1; 0; 0; 0; 0 ... (there's an impulse of amplitude 1 at 
n = 2)

your output will be:

y[0] = 0 = 7/8*y[-1] + 1/8*x[0]  (I'm assuming that y[-1] = 0, and x[0] is 
zero)
y[1] = 0 = 7/8*y[0] + 1/8*x[1]
y[2] = 1/8 = 7/8*y[1] + 1/8*x[2] (x[2] = 1)
y[3] = 7/64 = 7/8*y[2] + 1/8*x[3] (x[3] = 0) = 0.109
y[4] = 49/512 = 7/8*y[3] + 1/8*x[4] (x[4] = 0) = 0.095
y[5] = 343/4096 = 7/8*y[4] + 1/8*x[5] (x[5] = 0) = 0.084

You can see that without truncation this will never go to zero again.
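The same numbers fall out of a few lines of stand-alone code driving the
recursion with an impulse at n = 2:

    #include <stdio.h>

    int main(void)
    {
        double y = 0.0;
        double x[10] = { 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 };

        for (int n = 0; n < 10; n++) {
            y = y + (x[n] - y) / 8.0;       /* exponential average, a_avg = 1/8 */
            printf("y[%d] = %.6f\n", n, y); /* 0.125, 0.109, 0.096, 0.084, ... */
        }
        return 0;
    }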

Usually you can get more attenuation with an IIR filter than with a FIR 
filter of the same computational complexity, but you need to take care 
about stability and truncation. Well, I'll just copy here the relevant 
part from Wikipedia about advantages and disadvantages:


   Advantages and disadvantages

The main advantage digital IIR filters have over FIR filters is their 
efficiency in implementation, in order to meet a specification in terms 
of passband, stopband, ripple, and/or roll-off. Such a set of 
specifications can be accomplished with a lower-order (Q in the above 
formulae) IIR filter than would be required for an FIR filter meeting 
the same requirements. If implemented in a signal processor, this 
implies correspondingly fewer calculations per time step; the 
computational savings is often of a rather large factor.


On the other hand, FIR filters can be easier to design, for instance, to 
match a particular frequency response requirement. This is particularly 
true when the requirement is not one of the usual cases (high-pass, 
low-pass, notch, etc.) which have been studied and optimized for analog 
filters. Also, FIR filters can easily be made linear phase (constant 
group delay vs frequency), a property that is not easily met using IIR 
filters, and then only as an approximation (for instance with the 
Bessel filter). Another issue regarding digital IIR filters is the 
potential for limit-cycle behavior when idle, due to the feedback 
system in conjunction with quantization.



Daniel




Re: [time-nuts] PLL Math Question

2014-03-12 Thread Chris Albertson
On Wed, Mar 12, 2014 at 9:13 PM, Daniel Mendes  wrote:
> This is a FIR x IIR question...
>
> moving average = FIR filter with all N coefficients equalling 1/N
> exponential average = using a simple rule to make an IIR filter

Isn't his "moving average" just a convolution of the data with a box car
function?  That treats the last N samples equally and is likely not
optimal.   I think what he wants is a low-pass filter.  This method is like
the hockey player who skates to where the puck was about 5 seconds ago.  It
is not the best way to play the game.  He will in fact NEVER get to the
puck if the puck is moving; he is doomed to chase it forever.   Same here:
you will never get there.

But if you have a long time constant on the control loop you have in effect
the kind of "averaging" you want, one that tosses out erratic noisy data.
A PID controller uses only three memory locations and is likely the best
solution.

We have to define "best".  I'd define it as "the error integrated over time
is minimum".  I think PiD gets you that and it is also easy to program and
uses very little memory.  Just three values (1) the error, (2) the total of
all errors you've seen (in a perfect world this is zero because the
positive and negative errors cancel) and (3) the rate of change in the
error (is it getting bigger or smaller and how quickly?)  Multiply each of
those numbers by a constant and that is the correction to the output value.
   It's maybe 6 or 10 lines of C code.   The "magic" is finding the right
values for the constants.
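For reference, a bare-bones version of that in C (a generic textbook form,
not tuned for any particular plant; the gains and the time step dt are
whatever your application needs) really is about ten lines:

    /* Minimal PID update: stores the integral and the previous error,
       returns the correction to apply to the output. */
    struct pid {
        double kp, ki, kd;       /* the three "magic" constants */
        double integral;         /* running total of the error */
        double prev_err;         /* previous error, for the derivative */
    };

    double pid_update(struct pid *s, double setpoint, double measured, double dt)
    {
        double err   = setpoint - measured;
        double deriv = (err - s->prev_err) / dt;

        s->integral += err * dt;
        s->prev_err  = err;
        return s->kp * err + s->ki * s->integral + s->kd * deriv;
    }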

This is worth reading
PIDforDummies.html 

-- 

Chris Albertson
Redondo Beach, California


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Bob Stewart
Hi Daniel,

re: FIR vs IIR


I'm not a DSP professional, though I do have an old Smiths, and I've read some 
of it.  So, could you give me some idea what the FIR vs IIR question means on a 
practical level for this application?  I can see that the MA is effective and 
easy to code, but takes up memory space I eventually may not have.  Likewise, I 
can see that the EA is hard to code for the general case, but takes up little 
memory.  Any thoughts would be appreciated unless this is straying too far from 
time-nuts territory.


Bob




>
> From: Daniel Mendes 
>To: Discussion of precise time and frequency measurement  
>Sent: Wednesday, March 12, 2014 11:13 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>This is a FIR x IIR question...
>
>moving average = FIR filter with all N coefficients equalling 1/N
>exponential average = using a simple rule to make an IIR filter
>
>
>Daniel
>
>
>


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Daniel Mendes

This is a FIR x IIR question...

moving average = FIR filter with all N coefficients equalling 1/N
exponential average = using a simple rule to make an IIR filter


Daniel

Em 13/03/2014 00:55, Bob Stewart escreveu:

Hal says: "For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to 
be 1/8.  That's a right shift by 3 bits.  I don't think there is anything magic about 
shifting, but that makes a particular case easy to spot and discuss."

Hi Hal,

Yeah, I've been sitting here manually running some sample data and I haven't 
been happy with my efforts so far.  I think I'll just stay with what I know for 
now: moving averages.  I've got a number of places I can reduce memory usage 
when I run a bit shorter, so I think it'll work out.  And I suspect I'm being 
far too conservative; i.e. averaging way too long.  If not, maybe there will be 
a good gain value that will be convenient to code the exponential average.

Thanks for the help,

Bob





From: Hal Murray 
To: Bob Stewart ; Discussion of precise time and frequency 
measurement 
Cc: hmur...@megapathdsl.net
Sent: Wednesday, March 12, 2014 10:08 PM
Subject: Re: [time-nuts] PLL Math Question



b...@evoria.net said:

In the moving averages I'm doing, I'm saving the last bit to be shifted out
and if it's a 1 (i.e. 0.5) I increase the result by 1.

That's just rounding up at an important place.  It's probably a good idea,
but doesn't cover the area I was trying to point out.  Let me try again...

Suppose you are doing:
   x_avg = x_avg + (x - x_avg) * a_avg;

For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to be
1/8.  That's a right shift by 3 bits.  I don't think there is anything magic
about shifting, but that makes a particular case easy to spot and discuss.

Suppose x_avg is 0 and x has been 0 for a while.  Everything is stable.  Now 
change x to 2.  (x - x_avg) is 2, the shift kicks it off the edge, so x_avg 
doesn't change.  (It went 2 bits off, so your round up doesn't catch it.)  The 
response to small steps is to ignore them.

If you have noisy data, things probably work out OK.  If you need to process 
low level (very) low frequency changes (which seems desirable for a GPSDO) you 
probably want some fractional bits.  For me, the easy way to do that is to use
   y = x * k
Let's use k = 16, a 4 bit left shift.
For the same step of x=2, y= 32, (y - y_avg) is 32, shifted right by 3 that's 
4, so y_avg is 4.

I'm sure this is all business-as-usual for the people who write control loops 
in small CPUs using fixed-point arithmetic.  Of course, you have to worry 
about shifting too far left (overflow) and things like that.

If you have enough cycles, you can use floating point.  :)


--
These are my opinions.  I hate spam.









Re: [time-nuts] PLL Math Question

2014-03-12 Thread Bob Stewart
Hal says: "For exponential smoothing, a_avg will be a fraction.  Let's pick 
a_avg to be 1/8.  That's a right shift by 3 bits.  I don't think there is 
anything magic about shifting, but that makes a particular case easy to spot 
and discuss."

Hi Hal,

Yeah, I've been sitting here manually running some sample data and I haven't 
been happy with my efforts so far.  I think I'll just stay with what I know for 
now: moving averages.  I've got a number of places I can reduce memory usage 
when I run a bit shorter, so I think it'll work out.  And I suspect I'm being 
far too conservative; i.e. averaging way too long.  If not, maybe there will be 
a good gain value that will be convenient to code the exponential average.

Thanks for the help,

Bob



>
> From: Hal Murray 
>To: Bob Stewart ; Discussion of precise time and frequency 
>measurement  
>Cc: hmur...@megapathdsl.net 
>Sent: Wednesday, March 12, 2014 10:08 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>
>b...@evoria.net said:
>> In the moving averages I'm doing, I'm saving the last bit to be shifted out
>> and if it's a 1 (i.e. 0.5) I increase the result by 1. 
>
>That's just rounding up at an important place.  It's probably a good idea, 
>but doesn't cover the area I was trying to point out.  Let me try again...
>
>Suppose you are doing:
>  x_avg = x_avg + (x - x_avg) * a_avg;
>
>For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to be 
>1/8.  That's a right shift by 3 bits.  I don't think there is anything magic 
>about shifting, but that makes a particular case easy to spot and discuss.
>
>Suppose x_avg is 0 and x has been 0 for a while.  Everything is stable.  Now 
>change x to 2.  (x - x_avg) is 2, the shift kicks it off the edge, so x_avg 
>doesn't change.  (It went 2 bits off, so your round up doesn't catch it.)  The 
>response to small steps is to ignore them.
>
>If you have noisy data, things probably work out OK.  If you need to process 
>low level (very) low frequency changes (which seems desirable for a GPSDO) you 
>probably want some fractional bits.  For me, the easy way to do that is to use
>  y = x * k
>Let's use k = 16, a 4 bit left shift.
>For the same step of x=2, y= 32, (y - y_avg) is 32, shifted right by 3 that's 
>4, so y_avg is 4.
>
>I'm sure this is all business-as-usual for the people who write control loops 
>in small CPUs using fixed-point arithmetic.  Of course, you have to worry 
>about shifting too far left (overflow) and things like that.
>
>If you have enough cycles, you can use floating point.  :)
>
>
>-- 
>These are my opinions.  I hate spam.
>
>
>
>
>
>


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Hal Murray

b...@evoria.net said:
> In the moving averages I'm doing, I'm saving the last bit to be shifted out
> and if it's a 1 (i.e. 0.5) I increase the result by 1. 

That's just rounding up at an important place.  It's probably a good idea, 
but doesn't cover the area I was trying to point out.  Let me try again...

Suppose you are doing:
  x_avg = x_avg + (x - x_avg) * a_avg;

For exponential smoothing, a_avg will be a fraction.  Let's pick a_avg to be 
1/8.  That's a right shift by 3 bits.  I don't think there is anything magic 
about shifting, but that makes a particular case easy to spot and discuss.

Suppose x_avg is 0 and x has been 0 for a while.  Everything is stable.  Now 
change x to 2.  (x - x_avg) is 2, the shift kicks it off the edge, so x_avg 
doesn't change.  (It went 2 bits off, so your round up doesn't catch it.)  The 
response to small steps is to ignore them.

If you have noisy data, things probably work out OK.  If you need to process 
low level (very) low frequency changes (which seems desirable for a GPSDO) you 
probably want some fractional bits.  For me, the easy way to do that is to use
  y = x * k
Let's use k = 16, a 4 bit left shift.
For the same step of x=2, y= 32, (y - y_avg) is 32, shifted right by 3 that's 
4, so y_avg is 4.
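Put into a runnable sketch (the numbers match the example above; k = 16 gives
four fraction bits):

    #include <stdio.h>

    int main(void)
    {
        int y_avg = 0;                       /* x_avg scaled by k = 16 */

        for (int n = 0; n < 6; n++) {
            int x = 2;                       /* small step in the raw data */
            int y = x * 16;                  /* scale the sample */
            y_avg += (y - y_avg) >> 3;       /* exponential average, a_avg = 1/8 */
            printf("n=%d  y_avg=%2d  x_avg ~ %.3f\n", n, y_avg, y_avg / 16.0);
        }
        return 0;                            /* first pass gives y_avg = 4 */
    }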

I'm sure this is all business-as-usual for the people who write control loops 
in small CPUs using fixed-point arithmetic.  Of course, you have to worry 
about shifting too far left (overflow) and things like that.

If you have enough cycles, you can use floating point.  :)


-- 
These are my opinions.  I hate spam.





Re: [time-nuts] PLL Math Question

2014-03-12 Thread Bob Stewart
"x_avg = x_avg + (x - x_avg) * a_avg;"

Hi again Magnus,

In fact, I just post-processed some data using that formula in perl.  It looks 
great, and will indeed save me code and memory space.  And, it can be a user 
variable, rather than hard-coded.  Thanks for the heads up!

Bob





>
> From: Magnus Danielson 
>To: time-nuts@febo.com 
>Sent: Wednesday, March 12, 2014 12:51 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>Bob,
>
>
>
>Exponential averager takes much less memory. Consider this code:
>
>x_avg = x_avg + (x - x_avg) * a_avg;
>
>Where a_avg is the time-constant control parameter.
>
>Cheers,
>Magnus
>
>
>
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Bob Stewart
Hi Hal,

In the moving averages I'm doing, I'm saving the last bit to be shifted out and 
if it's a 1 (i.e. 0.5) I increase the result by 1.
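
In C terms that is something like the snippet below (a sketch with made-up 
names, not the actual PIC assembly; for unsigned values it gives the same 
result as adding half of the final weight before shifting):

#include <stdint.h>

/* Round-to-nearest right shift: keep the last bit shifted out and,
 * if it was a 1 (i.e. 0.5), bump the result by 1.
 * shift must be between 1 and 31. */
uint32_t shift_round(uint32_t v, unsigned shift)
{
    uint32_t result   = v >> shift;
    uint32_t half_bit = (v >> (shift - 1)) & 1u;   /* last bit shifted out */
    return result + half_bit;
    /* equivalent to (v + (1u << (shift - 1))) >> shift, barring overflow */
}
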

Bob





>
> From: Hal Murray 
>To: Discussion of precise time and frequency measurement  
>Cc: hmur...@megapathdsl.net 
>Sent: Wednesday, March 12, 2014 2:25 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>
>mag...@rubidium.dyndns.org said:
>> Exponential averager takes much less memory. Consider this code:
>> x_avg = x_avg + (x - x_avg) * a_avg;
>> Where a_avg is the time-constant control parameter. 
>
>Also note that if a_avg is a power of 2, you can do it all with shifts rather 
>than multiplies.
>
>Note that the shift is to the right which drops bits.  That suggests that you 
>might want to work with x scaled relative to the raw data samples.  Consider 
>a_avg to be 1/8, or a shift right 3 bits.  Suppose x_avg is 0 and you get a 
>string of x samples of 2.  The shift throws away the 2 so x_avg never changes.
>
>-- 
>These are my opinions.  I hate spam.
>
>
>
>___
>time-nuts mailing list -- time-nuts@febo.com
>To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
>and follow the instructions there.
>
>
>
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Hal Murray

mag...@rubidium.dyndns.org said:
> Exponential averager takes much less memory. Consider this code:
> x_avg = x_avg + (x - x_avg) * a_avg;
> Where a_avg is the time-constant control parameter. 

Also note that if a_avg is a power of 2, you can do it all with shifts rather 
than multiplies.

Note that the shift is to the right which drops bits.  That suggests that you 
might want to work with x scaled relative to the raw data samples.  Consider 
a_avg to be 1/8, or a shift right 3 bits.  Suppose x_avg is 0 and you get a 
string of x samples of 2.  The shift throws away the 2 so x_avg never changes.
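
A minimal C sketch of that dead zone (illustration only, names assumed, 
arithmetic right shift assumed):

#include <stdint.h>

static int32_t x_avg = 0;

void avg_update_trunc(int32_t x)
{
    /* a_avg = 1/8 implemented as a right shift by 3 */
    x_avg += (x - x_avg) >> 3;
    /* with x_avg = 0 and x = 2: (2 - 0) >> 3 == 0, so x_avg never moves;
     * any difference smaller than 8 is thrown away by the shift */
}
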

-- 
These are my opinions.  I hate spam.



___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Bob Stewart
Hi Magnus,

Thanks very much for this response!  It will be very easy to add the 
exponential averager to my code and do a comparison to the moving average.  I 
have no experience with PI/PID.  I'll have to look over the literature I have 
on them and relate that to what I'm controlling.

It should be mentioned that I'm more interested in the adventure than in just 
copying someone else's code or formulae and pumping this out.  I have an idea 
of how I want to do this and...

Bob




>
> From: Magnus Danielson 
>To: time-nuts@febo.com 
>Sent: Wednesday, March 12, 2014 12:51 PM
>Subject: Re: [time-nuts] PLL Math Question
> 
>
>Bob,
>
>On 12/03/14 18:24, Bob Stewart wrote:
>> Now that I've got the TIC going, I'm working on the PLL math
>> for my GPSDO.  My question is about moving averages.  I've
>> put in a moving average for the TIC.  From that, I've
>> calculated the slope, and have put a moving average on the
>> slope to settle it down.  I think this boils down to a
>> moving average of a moving average.  If both are 16 seconds
>> long, is this essentially a 32 second moving average of the
>> TIC, or is it some other function?  I read briefly about
>> averages of averages last night, but I'm not sure I
>> understood the conclusion.  This is all "clean code" so I
>> may be over-complicating things, but I'm OK with that.
>
>When you serialize two averagers you keep the same time-constant, but you 
>stack two 6 dB/octave slopes on top of each other to form a 12 dB/octave 
>slope, while the response stays flat in the pass-band.
>
>You should be careful about using an averager inside the loop. A moving averager 
>adds a zero to the loop, and you want to make sure you understand what that 
>zero will do to the overall control-loop. Here you have two of them, as you 
>run two averagers, and hence their zeros, in series.
>
>I prefer to use a PI or PID loop for such a control-loop, and potentially an 
>exponential averager or two in there. If you make sure the exponential 
>averager has a wide enough bandwidth, you can use standard PI dimensioning 
>formulas and still get the steeper roll-off that the exponential averagers 
>contribute.
>
>> NOTE: The reason I'm using 16 seconds is that I'm becoming memory limited.  
>> I'm switching to an 18F2320, but that only gets me more program memory.  I'm 
>> constrained to this chip on an existing board.
>
>Exponential averager takes much less memory. Consider this code:
>
>x_avg = x_avg + (x - x_avg) * a_avg;
>
>Where a_avg is the time-constant control parameter.
>
>Cheers,
>Magnus
>
>___
>time-nuts mailing list -- time-nuts@febo.com
>To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
>and follow the instructions there.
>
>
>
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] PLL Math Question

2014-03-12 Thread Magnus Danielson

Bob,

On 12/03/14 18:24, Bob Stewart wrote:

Now that I've got the TIC going, I'm working on the PLL math
for my GPSDO.  My question is about moving averages.  I've
put in a moving average for the TIC.  From that, I've
calculated the slope, and have put a moving average on the
slope to settle it down.  I think this boils down to a
moving average of a moving average.  If both are 16 seconds
long, is this essentially a 32 second moving average of the
TIC, or is it some other function?  I read briefly about
averages of averages last night, but I'm not sure I
understood the conclusion.  This is all "clean code" so I
may be over-complicating things, but I'm OK with that.


When you serialize two averagers you keep the same time-constant, but 
you stack two 6 dB/octave slopes on top of each other to form a 
12 dB/octave slope, while the response stays flat in the pass-band.
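
As a quick numerical check of what the cascade actually is (illustration 
only, using the 16-sample case from the question): convolving two length-N 
boxcars gives a (2N-1)-point triangular window, not a 2N-point boxcar.

#include <stdio.h>
#include <stdlib.h>

#define N 16   /* length of each moving average, in samples */

int main(void)
{
    /* Impulse response of two N-point moving averages in series:
     * the convolution of two boxcars, i.e. a (2N-1)-point triangular
     * window, not a 2N-point flat window. */
    for (int k = 0; k < 2 * N - 1; k++) {
        double h = (double)(N - abs(k - (N - 1))) / (double)(N * N);
        printf("h[%2d] = %.6f\n", k, h);
    }
    return 0;
}
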


You should be careful about using an averager inside the loop. A moving 
averager adds a zero to the loop, and you want to make sure you 
understand what that zero will do to the overall control-loop. Here you 
have two of them, as you run two averagers, and hence their zeros, in series.


I prefer to use a PI or PID loop for such a control-loop, and 
potentially an exponential averager or two in there. If you make sure 
the exponential averager has a wide enough bandwidth, you can use 
standard PI dimensioning formulas and still get the steeper roll-off 
that the exponential averagers contribute.



NOTE: The reason I'm using 16 seconds is that I'm becoming memory limited.  I'm 
switching to an 18F2320, but that only gets me more program memory.  I'm 
constrained to this chip on an existing board.


Exponential averager takes much less memory. Consider this code:

x_avg = x_avg + (x - x_avg) * a_avg;

Where a_avg is the time-constant control parameter.
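
A minimal sketch of the averager-plus-PI structure described above (all 
names, gain values, and the use of floating point are placeholders for 
illustration, not tuned recommendations):

/* Exponential averager pre-filtering the phase error, feeding a PI
 * controller whose output steers the oscillator (e.g. an EFC DAC). */

static double x_avg = 0.0;            /* smoothed phase error */
static double integ = 0.0;            /* integrator state     */

static const double a_avg = 0.25;     /* averager time-constant control  */
static const double kp    = 0.05;     /* proportional gain (placeholder) */
static const double ki    = 0.001;    /* integral gain (placeholder)     */

double pi_update(double x)            /* x: raw phase error, e.g. TIC reading */
{
    x_avg += (x - x_avg) * a_avg;     /* exponential averager */
    integ += ki * x_avg;              /* integral term        */
    return kp * x_avg + integ;        /* control output       */
}
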

Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.