At 10:49 PM 7/4/2005, Lyle Johnson wrote:
...If you're looking at the analog output of the sound card, which is essentially the "looped back" analog input to the card, then the sampling rate may not make any difference.

Keep in mind that the SDR-1000 is not baseband in, baseband out, but uses an IF of 11 to 15 kHz, depending on the exact frequency it is tuned to and on whether spur reduction is on or off.

Thus, the sound card clock is used to derive a software oscillator at 11 to 15 kHz, which mixes the quadrature signal down to baseband. This is what John is attempting to quantify.
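
A minimal Python/NumPy sketch of that mixing (the 11.025 kHz IF, 48 kHz sample rate, and test tone are just assumed for illustration) shows why the card's clock matters even though the mix is done in software:

    import numpy as np

    fs = 48000.0        # assumed sound card sample rate, Hz
    f_if = 11025.0      # assumed IF, somewhere in the 11-15 kHz range
    n = np.arange(48000)

    # Stand-in for the quadrature (I/Q) signal arriving from the card
    # at the IF: here a single test tone 1 kHz above the IF.
    iq = np.exp(2j * np.pi * (f_if + 1000.0) * n / fs)

    # Software local oscillator derived from the same sample clock.  Any
    # fractional error in the card's oscillator scales f_if by the same
    # fraction, so the clock accuracy still matters.
    lo = np.exp(-2j * np.pi * f_if * n / fs)

    baseband = iq * lo  # complex mix to baseband; tone lands near 1 kHz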


OK, then... that makes the measurement methodology somewhat more complex.

It seems a better way to measure clock accuracy on the sound card is to generate a sine wave in software and run it out to the (external-reference-locked) counter. No hassles with the SDR, etc. One could also use the SDR as a precision-frequency audio source (albeit with spur and phase noise contributions) to validate that the A/D converter samples at the same rate as the D/A converter.
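
The arithmetic for reducing the counter reading to a clock error is trivial; a Python sketch, with hypothetical numbers:

    f_nominal = 1000.0      # Hz, the tone generated in software
    f_measured = 1000.0208  # Hz, hypothetical reading from the locked counter

    # The fractional error of the tone equals the fractional error of the
    # card's sample clock, since the DAC output frequency scales with it.
    error_ppm = (f_measured - f_nominal) / f_nominal * 1e6
    print(f"sample clock error: {error_ppm:+.1f} ppm")  # -> +20.8 ppm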

When I was characterizing the AC97 codecs on my Via Mini-ITX mobos, I just created a big file with a sine wave in it and played it back with aplay (on a Linux box), then ran that output to the measurement system.
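
Something like this would generate the test file (a Python sketch using scipy; the 48 kHz rate, 1 kHz tone, and filename are arbitrary choices for illustration, not what I actually used):

    import numpy as np
    from scipy.io import wavfile

    fs = 48000       # sample rate written into the WAV header
    f_tone = 1000.0  # test tone frequency, Hz
    seconds = 600    # a big file: ten minutes gives the counter time to average

    t = np.arange(int(fs * seconds)) / fs
    tone = (0.5 * np.sin(2 * np.pi * f_tone * t) * 32767).astype(np.int16)
    wavfile.write("tone_1khz.wav", fs, tone)

Then "aplay tone_1khz.wav" sends it out the DAC.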

You're always better off measuring pieces than combinations. Then you can measure the combination and see whether the uncertainties of the pieces combine as they should.

This would be a good way to find subtle signal processing software bugs such as dropped samples: the frequency plan model is fairly straightforward, so you should be able to predict what's going on, and if the measurement differs, you know there's a problem.
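
As a sketch of that check (Python; the predicted frequency and the capture are stand-ins), measure the tone in the capture and compare it to the frequency plan:

    import numpy as np

    def peak_frequency(x, fs):
        """Estimate the dominant tone frequency via an FFT peak with
        parabolic interpolation, good to a small fraction of a bin."""
        w = np.hanning(len(x))
        spec = np.abs(np.fft.rfft(x * w))
        k = int(np.argmax(spec[1:-1])) + 1
        a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
        delta = 0.5 * (a - c) / (a - 2 * b + c)
        return (k + delta) * fs / len(x)

    # Hypothetical check: the frequency plan predicts where the tone should
    # land; a persistent offset beyond the clock tolerance flags a bug.
    fs = 48000.0
    predicted = 1000.0
    x = np.sin(2 * np.pi * 1000.3 * np.arange(int(fs)) / fs)  # stand-in capture
    measured = peak_frequency(x, fs)
    print(f"predicted {predicted} Hz, measured {measured:.2f} Hz, "
          f"offset {(measured - predicted) / predicted * 1e6:+.0f} ppm")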

A subtle bug might be hiccups in the OS pushing the data in and out of the sound card interface. I don't know how big the buffering is, but since Windows isn't hard real-time, it's not inconceivable that there might be a buffer overrun or underrun because the kernel got busy doing something else (handling mouse clicks or network I/O), and, because it's random, you might not see it. Certainly, Windows isn't going to reliably tell you about it, because its (consumer) orientation is toward audible defects, and dropping a sample every second or so isn't going to be audible, especially if you "catch up" so that overall the sound stays synchronized.
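
One way to catch that sort of thing (a Python sketch, assuming you've recorded a long, steady tone through the card) is to look for steps in the phase of the recorded tone; a dropped or repeated buffer breaks phase continuity even when the audio sounds fine:

    import numpy as np
    from scipy.signal import hilbert

    def find_sample_drops(x, threshold_rad=1.0):
        """Flag discontinuities in the unwrapped phase of a recorded tone.
        A dropped or repeated block shows up as a step in the residual
        after removing the best-fit linear phase ramp (the tone itself)."""
        phase = np.unwrap(np.angle(hilbert(x)))
        n = np.arange(len(phase))
        slope, intercept = np.polyfit(n, phase, 1)
        residual = phase - (slope * n + intercept)
        return np.where(np.abs(np.diff(residual)) > threshold_rad)[0]

    # Hypothetical use: capture ten minutes of a steady 1 kHz tone looped
    # through the card, then check that find_sample_drops(capture) is empty.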

I'd also be interested to know whether there's a variable latency between input and output. That is, are the A/D and D/A clocks synchronized, or at least in a phase-stable relationship?
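
A sketch of a test for that (Python; the noise burst and the 20 ms delay are made up): play a known signal, record the loopback, and cross-correlate. A constant delay across repeated runs suggests a phase-stable relationship; a wandering delay suggests variable latency:

    import numpy as np

    def loopback_delay_samples(played, recorded):
        """Estimate input-to-output delay by cross-correlating what was
        played with what came back."""
        corr = np.correlate(recorded, played, mode="full")
        return int(np.argmax(corr)) - (len(played) - 1)

    # A short noise burst is a good test signal for correlation because
    # its autocorrelation is sharp.
    rng = np.random.default_rng(0)
    played = rng.standard_normal(4800)                         # 100 ms at 48 kHz
    recorded = 0.8 * np.concatenate([np.zeros(960), played])   # pretend 20 ms delay
    print(loopback_delay_samples(played, recorded))            # -> 960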



If the SDR-1000 detector output were at baseband, then the sampling rate accuracy would indeed be irrelevant.

Except that any jitter on the sampling clock still contributes to the noise level.
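
For a rough feel of how much, the standard aperture-jitter bound (full-scale sine at f_in, RMS clock jitter t_j) works out as below; the 15 kHz and 1 ns numbers are just illustrative:

    import numpy as np

    def jitter_snr_db(f_in_hz, jitter_rms_s):
        """Upper bound on SNR set by RMS sampling-clock jitter alone:
        SNR = -20*log10(2*pi*f_in*t_j), for a full-scale sine at f_in."""
        return -20 * np.log10(2 * np.pi * f_in_hz * jitter_rms_s)

    # Hypothetical numbers: a 15 kHz IF sampled with 1 ns RMS jitter.
    print(f"{jitter_snr_db(15e3, 1e-9):.1f} dB")  # about 80.5 dB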



73,

Lyle KK7P

James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875

