Jim Lux wrote:

>
>
> Consider two RF carriers, at 10.001 and 10.002 MHz
>
> If the DDS is perfect, at 10 MHz, and the sampler is perfect, at, say,
> 10 kHz, then you'll get two sine waves in the digitized sequence: one
> at 10 samples per cycle (the 1 kHz audio) and the other at 5 samples
> per cycle (the 2 kHz audio).  The ratio between the two will be 1:2.
>
> If the DDS is off, but the sampler is perfect, then both RF
> frequencies will be shifted by the same amount.  Say the DDS is at
> 9.999 MHz (a kHz low).  The two audio frequencies will be 2 and 3 kHz,
> instead of 1 and 2 kHz, so your sampled data stream will have a 5
> samples/cycle tone (the 2 kHz) and a 3.33 samples/cycle tone (the 3
> kHz). The ratio is no longer 1:2 but something else (5:3.33).
>
> If the DDS is perfect, but the sampler is slow (say at 9kHz, instead
> of 10 kHz), then you'll get two signals at 1 and 2 kHz, but the
> sampled data stream will have a tone at 9 samples/cycle and one at 4.5
> samples/cycle. The ratio is 1:2, but the actual value is different.
>
> The effect is the same as the difference between playing a tape fast
> or slow (which preserves the harmonic relations, even if the pitch
> changes) and tuning high or low with SSB (which does not).
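Working the three cases above through numerically (a quick sketch of my own, not code from the thread; the function name is mine):

```python
# Mix two RF carriers against a DDS local oscillator and report the
# resulting audio frequencies and samples-per-cycle at a given sample rate.

def audio_tones(rf_hz, dds_hz, fs_hz):
    """Return (audio frequency in Hz, samples per cycle) for each carrier."""
    return [(rf - dds_hz, fs_hz / (rf - dds_hz)) for rf in rf_hz]

rf = [10.001e6, 10.002e6]  # the two RF carriers from the example

# Case 1: DDS perfect (10 MHz), sampler perfect (10 kHz)
print(audio_tones(rf, 10.000e6, 10e3))  # 1 kHz @ 10 samp/cyc, 2 kHz @ 5 samp/cyc

# Case 2: DDS 1 kHz low (9.999 MHz), sampler perfect
print(audio_tones(rf, 9.999e6, 10e3))   # 2 kHz @ 5 samp/cyc, 3 kHz @ ~3.33 samp/cyc

# Case 3: DDS perfect, sampler slow (9 kHz)
print(audio_tones(rf, 10.000e6, 9e3))   # 1 kHz @ 9 samp/cyc, 2 kHz @ 4.5 samp/cyc
```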

And this is where I'm confused.  I'm not (for this experiment) looking
at the linearity of the passband, but rather the absolute accuracy of
the frequency transformation.

The change in pitch is what I'm measuring -- if everything is perfect, I
know that an input precisely on the frequency the radio is tuned to will
yield an output (audio tone) that's precisely 600 Hz.  If the sampling
rate is off, the 600 Hz tone will be off, which translates into a
frequency error -- if the tone is 599 Hz, that's the same as the radio
being tuned 1 Hz low in frequency.  That's the frequency error I'm
trying to measure.
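That measurement reduces to a one-line calculation (a sketch; the constant and function name are mine, and the sign convention is my assumption):

```python
# An on-frequency input should produce exactly this audio tone.
EXPECTED_TONE_HZ = 600.0

def tuning_error_hz(measured_tone_hz):
    """Offset of the measured audio tone from the expected 600 Hz.

    A negative result means the apparent tuning is low: a 599 Hz tone
    is the same as the radio being tuned 1 Hz low in frequency.
    """
    return measured_tone_hz - EXPECTED_TONE_HZ

print(tuning_error_hz(599.0))  # -1.0 -> radio effectively tuned 1 Hz low
print(tuning_error_hz(600.0))  #  0.0 -> no error
```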

John
