Re: [time-nuts] Any Isotemp OCXO107-10 Info?

2014-03-14 Thread Poul-Henning Kamp

Since I had it out, I decided to let it run and this morning
I measured the EFC characteristic.

In my case perfect frequency is at 4.025V and the sensitivity
is 0.2317 PPM/Volt so the design EFC range is probably +/- 1PPM

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


Re: [time-nuts] Any Isotemp OCXO107-10 Info?

2014-03-14 Thread Ed Palmer

The specs that I found here:

http://web.archive.org/web/20010302193035/http://www.isotemp.com/ocxo107.htm

say the electrical EFC range is < 0.1 PPM, but that's for the version 
with the D/A converter.  I can't find any hint about our version.


My unit is starting to settle down.  Yesterday aging was in the e-8 
range, today it's in the e-9 range.  Spec is < 2e-10/day.


Earlier I babbled something about a 200 Hz tuning range.  I don't know 
what I was thinking.  The tuning range is 5 MHz +3 Hz to -7 Hz. So far 
the drift has been upwards so I have lots of room to slow it down.
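For scale, Ed's +3 Hz to -7 Hz range at 5 MHz works out to about +0.6 ppm / -1.4 ppm; a quick Python sketch of the conversion:

```python
def hz_to_ppm(offset_hz, nominal_hz=5e6):
    """Convert a frequency offset in Hz to parts per million of nominal."""
    return offset_hz / nominal_hz * 1e6

# Ed's stated tuning range of +3 Hz to -7 Hz at 5 MHz:
print(hz_to_ppm(3))    # about +0.6 ppm
print(hz_to_ppm(-7))   # about -1.4 ppm
```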


Ed

On 3/14/2014 3:30 AM, Poul-Henning Kamp wrote:

Since I had it out, I decided to let it run and this morning
I measured the EFC characteristic.

In my case perfect frequency is at 4.025V and the sensitivity
is 0.2317 PPM/Volt so the design EFC range is probably +/- 1PPM





Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

Hi Bob,

On 12/03/14 19:26, Bob Stewart wrote:

Hi Magnus,

Thanks very much for this response!  It will be very easy to add the 
exponential averager to my code and do a comparison to the moving average.  I 
have no experience with PI/PID.  I'll have to look over the literature I have 
on them and relate that to what I'm controlling.

It should be mentioned that I'm more interested in the adventure than in just 
copying someone else's code or formulae and pumping this out.  I have an idea 
of how I want to do this and...


The effect of additional filtering can be a little tricky to analyze 
sometimes, but the exponential averager, the normal 1-pole low-pass 
filter, is well understood when its cut-off frequency is well 
above the normal PI-loop bandwidth. K*4 seems to pop up in my head.


It's a hint which comes from the experience of others as well as my own.
You can also look into the Stanford Research PRS-10 manual; that unit 
optionally has such a filter in its PPS slaving mode.
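In code, the averager in question is just a few lines (a Python sketch; a_avg sets the effective time constant):

```python
def exp_avg(samples, a_avg):
    """Exponential averager: x_avg = x_avg + (x - x_avg) * a_avg,
    the normal 1-pole low-pass filter."""
    x_avg = 0.0
    out = []
    for x in samples:
        x_avg += (x - x_avg) * a_avg
        out.append(x_avg)
    return out

# Step response: climbs toward the input with a time constant of ~1/a_avg samples
y = exp_avg([1.0] * 50, a_avg=0.125)
```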


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 12/03/14 20:25, Hal Murray wrote:


mag...@rubidium.dyndns.org said:

Exponential averager takes much less memory. Consider this code:
x_avg = x_avg + (x - x_avg) * a_avg;
Where a_avg is the time-constant control parameter.


Also note that if a_avg is a power of 2, you can do it all with shifts rather
than multiplies.


Indeed. For many purposes that kind of resolution suffices.


Note that the shift is to the right, which drops bits.  That suggests that you 
might want to work with x scaled up relative to the raw data samples.  Consider 
a_avg to be 1/8, i.e. a right shift of 3 bits.  Suppose x_avg is 0 and you get a 
string of x samples of 2.  The shift throws away the 2, so x_avg never changes.


Indeed. The form I wrote it in above makes this consequence easy to see.


It is always good to consider the consequences of bit width, and scaling 
factors for filters make you require more bits. Some problems you solve by 
just throwing more bits of resolution/headroom at them.
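A Python sketch (for illustration) of the shift-based averager Hal describes, showing both the truncation stall and the scaled-accumulator work-around; note the scaling still leaves a small residual offset of up to 2^shift - 1 LSBs of the scaled accumulator:

```python
def shift_avg(samples, shift, scale_bits=0):
    """Exponential averager with a_avg = 1/2**shift, using shifts only.
    The accumulator is kept scaled up by 2**scale_bits to reduce
    truncation loss; the result is scaled back down on return."""
    x_avg = 0
    for x in samples:
        x_avg += ((x << scale_bits) - x_avg) >> shift
    return x_avg / (1 << scale_bits)

print(shift_avg([2] * 100, 3))                # stuck at 0: (2 - 0) >> 3 == 0
print(shift_avg([2] * 100, 3, scale_bits=8))  # close to 2, small truncation offset
```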


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

Hi Bob,

On 12/03/14 23:16, Bob Stewart wrote:

x_avg = x_avg + (x - x_avg) * a_avg;

Hi again Magnus,

In fact, I just post-processed some data using that formula in perl.  It looks 
great, and will indeed save me code and memory space.  And, it can be a user 
variable, rather than hard-coded.  Thanks for the heads up!


Proven in battle, dead easy to code, well understood. Happy to share.

Cheers,
Magnus



Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 13/03/14 07:35, Daniel Mendes wrote:

On 13/03/2014 01:35, Bob Stewart wrote:

Hi Daniel,

re: FIR vs IIR


I'm not a DSP professional, though I do have an old Smiths, and I've
read some of it.  So, could you give me some idea what the FIR vs IIR
question means on a practical level for this application?  I can see
that the MA is effective and easy to code, but takes up memory space I
eventually may not have.  Likewise, I can see that the EA is hard to
code for the general case, but takes up little memory.  Any thoughts
would be appreciated unless this is straying too far from time-nuts
territory.




FIR = Finite Impulse Response

It means that if you enter an impulse into your filter, after some time
the response completely vanishes. Let's have an example:

your filter has coefficients 0.25 ; 0.25 ; 0.25 ; 0.25   (a moving
average of length 4)

instead we could define this filter by the difference equation:

y[n] = 0.25x[n] + 0.25x[n-1] + 0.25x[n-2] + 0.25x[n-3]    (notice that
y[n] can be computed by looking only at present and past values of x)

your data is 0; 0; 1; 0; 0; 0; 0; ... (there's an impulse of
amplitude 1 at n = 2)

your output will be:

0; 0; 0.25; 0.25; 0.25; 0.25; 0; 0; 0; . (this is the convolution
between the coefficients and the input data)

after 4 samples (the length of the filter) your output completely
vanishes. This means that all FIR filters are BIBO stable (BIBO =
bounded input, bounded output: if the input is bounded, the output
never diverges)
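Daniel's worked FIR example can be checked directly; a short Python sketch:

```python
def fir(coeffs, data):
    """Direct-form FIR filter: y[n] = sum over k of coeffs[k] * x[n-k]."""
    return [sum(c * data[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(data))]

# Length-4 moving average (coefficients 0.25 each), unit impulse at n = 2
y = fir([0.25] * 4, [0, 0, 1, 0, 0, 0, 0, 0])
print(y)  # [0.0, 0.0, 0.25, 0.25, 0.25, 0.25, 0.0, 0.0]
```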

IIR = infinite Impulse Response

It means that if you enter an impulse into your filter the response
never settles exactly to zero again (though it can converge toward it).
Let's have an example:

your filter cannot be described by a finite list of coefficients anymore
because it has an infinite response, so we need a difference equation.
Let's use the one provided before for the exponential smoothing with
a_avg = 1/8:

x_avg = x_avg + (x - x_avg) * 1/8;

this means:

y[n] = y[n-1] + (x[n] - y[n-1]) * 1/8

y[n] = y[n-1] - 1/8*y[n-1] + 1/8*x[n]

y[n] = 7/8*y[n-1] + 1/8*x[n]

you can see why this is different from the other filter: now the output
is a function not only of the present and past inputs, but also of the
past output(s).

Let's try the same input as before:

your data is 0; 0; 1; 0; 0; 0; 0; ... (there's an impulse of
amplitude 1 at n = 2)

your output will be:

y[0] = 0 = 7/8*y[-1] + 1/8*x[0]  (I'm assuming that y[-1] = 0, and x[0]
is zero)
y[1] = 0 = 7/8*y[0] + 1/8*x[1]
y[2] = 1/8 = 7/8*y[1] + 1/8*x[2] (x[2] = 1)
y[3] = 7/64 = 7/8*y[2] + 1/8*x[3] (x[3] = 0) = 0.109
y[4] = 49/512 = 7/8*y[3] + 1/8*x[4] (x[4] = 0) = 0.095
y[5] = 343/4096 = 7/8*y[4] + 1/8*x[5] (x[5] = 0) = 0.084

You can see that without truncation this will never go to zero again.
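Those numbers are easy to reproduce; a short Python sketch of the same recursion:

```python
def iir_impulse(a, n):
    """Impulse response of y[n] = (1-a)*y[n-1] + a*x[n],
    with a unit impulse at n = 2 as in the example above."""
    y, out = 0.0, []
    for i in range(n):
        x = 1.0 if i == 2 else 0.0
        y += (x - y) * a
        out.append(y)
    return out

out = iir_impulse(1 / 8, 6)
print(out)  # 0, 0, then 1/8, 7/64, 49/512, 343/4096, ...
```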

Usually you can get more attenuation with an IIR filter than with a FIR
filter of the same computational complexity, but you need to take care
about stability and truncation. Well, I'll just copy here the relevant
part from Wikipedia about advantages and disadvantages:


Advantages and disadvantages

The main advantage digital IIR filters have over FIR filters is their
efficiency in implementation, in order to meet a specification in terms
of passband, stopband, ripple, and/or roll-off. Such a set of
specifications can be accomplished with a lower order (/Q/ in the above
formulae) IIR filter than would be required for an FIR filter meeting
the same requirements. If implemented in a signal processor, this
implies a correspondingly fewer number of calculations per time step;
the computational savings is often of a rather large factor.

On the other hand, FIR filters can be easier to design, for instance, to
match a particular frequency response requirement. This is particularly
true when the requirement is not one of the usual cases (high-pass,
low-pass, notch, etc.) which have been studied and optimized for analog
filters. Also FIR filters can be easily made to be linear phase
(http://en.wikipedia.org/wiki/Linear_phase), i.e. constant group delay
(http://en.wikipedia.org/wiki/Group_delay) vs frequency, a property
that is not easily met using IIR filters and then only as an
approximation (for instance with the Bessel filter,
http://en.wikipedia.org/wiki/Bessel_filter). Another issue regarding
digital IIR filters is the potential for limit cycle
(http://en.wikipedia.org/wiki/Limit_cycle) behavior when idle, due to
the feedback system in conjunction with quantization.


You need to understand that the PI loop already is an IIR filter in 
itself, and you need to understand what the averager you add inside that 
loop does to the loop properties. Generic discussions on IIR vs FIR do 
not cut it. If you use a FIR averager then you need to consider what its 
poles and zeros (yes, it has both) do inside the PI loop.


Cheers,
Magnus


Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 13/03/14 13:57, Jim Lux wrote:

On 3/12/14 10:06 PM, Chris Albertson wrote:

On Wed, Mar 12, 2014 at 9:13 PM, Daniel Mendes dmend...@gmail.com
wrote:

This is a FIR x IIR question...

moving average = FIR filter with all N coeficients equalling 1/N
exponential average = using a simple rule to make an IIR filter


Isn't his moving average just a convolution of the data with a boxcar
function?  That treats the last N samples equally and is likely not
optimal.  I think what he wants is a low-pass filter.


A moving average (or rectangular impulse response) *is* a low pass
filter.  The frequency response is of the general sin(x)/x sort of
shape, and it has deep nulls, which can be convenient (imagine a moving
average covering 1/60th of a second, in the US.. it would have strong
nulls at the line frequency and harmonics)


This method is like the hockey player who skates to where the puck was
about 5 seconds ago.  It is not the best way to play the game.  He will
in fact NEVER get to the puck; if the puck is moving he is doomed to
chase it forever.  Same here: you will never get there.


That distinction is different from the IIR vs FIR filter thing. Filters
are causal, and the output always lags the input in time.  If you want
to predict where you're going to be you need a different kind of model
or system design.  Something like a predictor-corrector, for instance.





But if you have a long time constant on the control loop you have in
effect
the kind of averaging you want, one that tosses out erratic noisy data.
A PID controller uses only three memory locations and is likely the best
solution.


PID is popular, having been copiously analyzed and used over the past
century. It's also easy to implement in analog circuitry.

And, there's long experience in how to empirically adjust the gain
knobs, for some kinds of controlled plant.

However, I don't know that the simplicity justifies its use in modern
digital implementations: very, very few applications are so processor or
gate limited that they couldn't use something with better performance.

If you are controlling a physical system with dynamics that are well
suited to a PID (e.g. a motor speed control) then yes, it's the way to
go.  But if PIDs were so wonderful, then there wouldn't be all sorts of
auto-tuning PIDs out there (which basically complexify things by
trying to estimate the actual plant model function, and then optimize
the P,I, and D coefficients).

PID controllers don't do well when there's a priori side knowledge
available that they can't exploit.  For instance, imagine a thermostat kind of application where
you are controlling the temperature of an object outside in the sun. You
could try to control the temperature solely by measuring the temp of the
thing controlled, and comparing it against the setpoint (a classic PID
sort of single variable loop).  Odds are, however, that if you had
information about the outside air temperature and solar loading, you
could hold the temperature a lot more tightly and smoothly, because you
could use the side information (temp and sun) to anticipate the
heating/cooling demands.

This is particularly the case where the controlled thing has long time
lags, but low inertia/mass.


Extending a PI or PID loop to incorporate aiding signals isn't hard. In 
fact that's what happens in GPS receivers. Properly done, aiding signals 
will reduce the phase errors due to loop stress and allow for even 
tighter bandwidth.


Each GPS channel in a receiver contains a carrier and a code loop. The 
carrier loop aids the code loop in frequency tracking. It is also common 
to have both a frequency and phase detector and then aid the normal 
phase-driven PI loop with a frequency detector hint.


A nice aspect of frequency aiding is that it has a strong pull-in 
property when the input signal and the loop are far apart in frequency, 
which is exactly when the phase lock-in is very weak. As the pull-in 
progresses, the frequency aiding gets weaker while the phase locking 
becomes stronger, as the Bessel polynomial for the beat frequency gets 
higher. Eventually the phase locking takes over in strength and the 
frequency aiding essentially dismisses itself. This is a great example 
of how a classical loop can be extended without getting into very 
esoteric systems.
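A minimal sketch of the structure being described, purely for illustration: the frequency-detector output feeds the same integrator the phase detector drives, so it dominates during pull-in and fades as the phase error shrinks. The gains here are made-up numbers, not from the posts:

```python
def aided_pi(kp=0.7, ki=0.05, kf=0.3):
    """PI loop driven by a phase detector, with a frequency-detector
    aiding term added into the integrator (illustrative gains only)."""
    integ = 0.0
    def step(phase_err, freq_err):
        nonlocal integ
        integ += ki * phase_err + kf * freq_err
        return kp * phase_err + integ   # control word, e.g. an EFC correction
    return step

loop = aided_pi()
# Far from lock: the phase detector sees nothing useful, the frequency
# detector does the pulling via the integrator.
u = loop(0.0, 1.0)
```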




We have to define "best".  I'd define it as the error integrated over
time is minimum.  I think PID gets you that, and it is also easy to
program and uses very little memory.  Just three values: (1) the error,
(2) the total of all errors you've seen (in a perfect world this is zero
because the positive and negative errors cancel) and (3) the rate of
change in the error (is it getting bigger or smaller, and how quickly?).
Multiply each of those numbers by a constant and that is the correction
to the output value.  It's maybe 6 or 10 lines of C code.  The magic is
finding the right values for the constants.


And that magic is sometimes a lot of work.
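As a concrete illustration of the three-value scheme quoted above (a Python sketch; the gains are arbitrary, and finding good ones is exactly the "magic" in question):

```python
def make_pid(kp, ki, kd):
    """Textbook PID built from the three quoted values: the current error,
    the accumulated error, and the error's rate of change."""
    integ, prev = 0.0, 0.0
    def step(error, dt=1.0):
        nonlocal integ, prev
        integ += error * dt               # (2) total of all errors seen
        deriv = (error - prev) / dt       # (3) rate of change of the error
        prev = error
        return kp * error + ki * integ + kd * deriv  # (1) plus the other terms
    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1)
print(pid(1.0))  # P = 2.0, I = 0.5, D = 0.1 -> 2.6
```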

And practical PID applications also need things like integrator 

Re: [time-nuts] PLL Math Question

2014-03-14 Thread Magnus Danielson

On 14/03/14 00:39, Bob Camp wrote:

Hi

Either grab a math pack (there are several for the PIC) or go to C.

Timing at the Time Nuts level is about precision. We need *lots* of digits past 
the binary point :)


Indeed. Throwing bits at the problem is relatively cheap today. 
Besides, you don't process it that often, so you can afford to let a few 
cycles buy you design margin.


Remember, you want the internal resolution to have many bits below the 
single-shot resolution. Lack of bits in frequency resolution tends to 
get you doing bang-bang regulation to approximate the frequency. With 
sufficient resolution, other noise sources will help average out that 
quantization step. Bang-bang regulation naturally gives you an idle 
tone, whose frequency and amplitude both depend on the quantization 
step.
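A toy illustration of that bang-bang effect (the numbers are made up): with a coarse control step the loop can only alternate between the two codes bracketing the target, and the required duty cycle is what sets the idle tone.

```python
def bang_bang_codes(target, step):
    """With the control word quantized to multiples of `step`, the loop
    must toggle between the two codes that bracket `target`; the
    high-code duty cycle makes the long-term average come out right."""
    lo = int(target // step) * step
    hi = lo + step
    duty_hi = (target - lo) / step
    return lo, hi, duty_hi

lo, hi, duty = bang_bang_codes(target=0.30, step=0.25)
print(lo, hi, duty)  # toggles between 0.25 and 0.5, high ~20% of the time
```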


Cheers,
Magnus



Re: [time-nuts] PLL Math Question

2014-03-14 Thread Dan Kemppainen
Bob,

Just been reading along, enjoying the conversation...

I've written a lot of hand-coded assembly. Some of it very similar to
what you are doing here now. (Although on a different processor family.)
I really didn't want to switch to C for anything, since the generated
code is 'bloated'.

That being said, I've been writing a bunch of code for the
PIC24/dsPIC33s lately with the C compiler that Microchip provides.
Granted, the disassembly listing is frustrating, in that I could write
faster code. However, I wouldn't write that code faster than I'm doing
now in C. I'm calculating five 12th-order polynomial equations in IEEE
754 floats, very quickly. Most of the time the processor is ticking
along at 32 kHz drawing only 1 mA!

The bottom line is that the new chips are very impressive in their math
capability. For what you are doing here, you may be well served by a
simple program in C. The PIC24s don't cost much, and many may be cheaper
than the ones you are using now! The extra memory and flash make them
really nice.

I realize that the time spent learning the new platform may be a pain.
However, the long term results may make it worth it. (Being able to spit
the results of a floating point calculation to an ascii terminal is
really nice!)

Anyway, carry on! Just my $.02 here! :)

Dan



On 3/14/2014 11:42 AM, time-nuts-requ...@febo.com wrote:
 OK, gotcha.  But, this is in assembler, and anything wider than 3 bytes 
 becomes tedious.  Also, anything larger than 3 bytes starts using a lot of 
 space in a hurry.  Three-byte fields allow me to use 256ths for gain and take 
 the result directly from the two high-order bytes without any shifting.  And 
 as I mentioned to Hal in a separate post: when I hand-coded the exponential 
 averager the results were actually good.  I was forgetting to convert to 
 decimal to compare values to the decimal run.  For example: 0x60 doesn't look 
 like 0.375 until you convert to decimal and divide by 256.
 
 This has been most informative and certainly gives me more options.
 
 Bob


Re: [time-nuts] Mains frequency

2014-03-14 Thread Jim Sanford

Hal:

Here's a url for the task-force report: 
http://energy.gov/oe/downloads/us-canada-power-system-outage-task-force-final-report-implementation-task-force


I live near Pittsburgh, PA.  I think there is ZERO interconnection 
between PJM (the grid operator we're on) and yours (I forget the name). 
Incidentally, if you read the report, you'll see some incompetence, bad 
decisions, and bad management in the 2003 blackout.  The only reason we 
didn't go dark here is that PJM saw what was happening in Cleveland 
and cut them off.  (Reminding me of one night on a certain ship in the 
late 70's, when one plant was getting unstable and the other plant cut 
them off -- half the ship went dark/lost propulsion, but not the whole ship!)


I do not remember when my clocks started acting up; it WAS after the 
announcement of the relaxation (or requested relaxation).


I have read about the NY blackout you describe in IEEE pubs (I'm a 
member of the Power & Energy Society) but don't remember much detail.


All the best,
Jim
wb4...@amat.org

On 3/13/2014 2:17 AM, Hal Murray wrote:

[Context is maybe(?) withdrawing the proposal to stop keeping time on the US
power line.]

wb4...@wb4gcs.org said:

Since then, large amounts of generation (primarily coal) has been shut
down, so I was not at all surprised by the request.
I missed the announcement that the request was withdrawn, and actually
thought it had been approved and enacted -- all my line-frequency based
clocks are now erratic and not very accurate.

I could easily be wrong on the withdrawing part.  I haven't seen any recent
comments either way.

Where are you located?  Did you notice when your clocks started acting
erratic?  Do you have any solid data?

I have 2 old, synchronous, line clocks (stove and clock-radio).  They seem to
be working normally, but I don't pay a lot of attention to how accurate they
are.

I'm in Silicon Valley.  I do monitor the line with a typical time-nut setup.
That's using the Linux PPS stuff to count cycles.  Here is an updated graph
covering the last 12 weeks.
   http://www.megapathdsl.net/~hmurray/time-nuts/60Hz/Dec-2013.png
The 0 on the left is arbitrary.  Peak-to-peak is 15 seconds.  So if I set my
mechanical clock correctly, even at the worst time, it would still be within
15 seconds of correct.

That's from counting cycles and dividing by 60.  A single cycle is a big
event.  Off by one is easy to spot if you look at the right graph.  Here is a
sample of a glitch:
   http://www.megapathdsl.net/~hmurray/time-nuts/60Hz/60Hz-2014-Feb-20-pick.png
I've only seen one event where a cycle was picked, none for dropped.  I might
have missed something interesting.  Look at the longer graph above.  It's
pretty clear I haven't missed a huge pattern either way.

I saw one comment (don't remember where) that the problem was that the power
companies had to file a lot of paperwork whenever the line frequency dipped
below X.  (I don't remember the numbers.)  If they were running slow (say
targeting 59.98 Hz) to catch up for running too fast, and an event dropped
the frequency, it was much more likely to trigger the paperwork.

(Seems like they should fix the paperwork-filing rules to allow for that case, 
but maybe it's more complicated than I can see.)

-



This is what initiated the 2003 blackout in parts of the US & Canada.  A
utility had a paucity of reactive generation on a day with large reactive
load, and one of its generators tripped on over-excitation to prevent damage
to the generator and voltage regulator.  This initiated the cascading events
that left many in the dark.  (The joint US/Canada task force report on that
event is a fascinating read!)

Do you have a URL?

In the late 70's there was a big blackout in NYC.  I remember reading the IEEE article on 
it.  I don't remember any frequency graphs.  Did they archive that sort of data back 
then?  The deal was that an important line bringing power in to NYC was knocked out by 
lightning.  Power lines have several load capacities, depending on time.  Thus they can 
carry X forever, X+x for a half hour, and X+xx for 5 minutes.  A line from Long Island 
was carrying its 5-minute rating for way more than 5 minutes.  Somebody in the control 
room had their thumb on the shut-up button.  They knew that line was a 
critical resource, but they couldn't shift any load.  Eventually, it sagged enough to hit 
a tree.  Then that line went out and so did all of NYC.  (That's my memory from 35 years 
ago.)

Blackouts:
   http://en.wikipedia.org/wiki/Northeast_blackout_of_1965
   http://en.wikipedia.org/wiki/New_York_City_blackout_of_1977
   http://en.wikipedia.org/wiki/Northeast_blackout_of_2003
   http://en.wikipedia.org/wiki/List_of_major_power_outages

From 1965:
"the same song recordings played at normal speed reveal that approximately six 
minutes before blackout the line frequency was 56 Hz, and just two minutes 
before the blackout that frequency dropped to 51 Hz."

51Hz ??!!  Wow.