At 08:09 AM 9/1/2005, Frank Brickle wrote:
Bob --

Now. Consider the SDR1K software.

Under Linux, the DSP and audio chain are *already* modularized in the
VST sense, with a tremendous array of options and existing plugins.

Is it? Are the various aspects of the receive and transmit processing truly a series of interconnected little boxes with data flowing continuously through them, in a "you can replace a module without knowing what's happening in the adjacent modules, recompile, and it still works" sense? In almost every software receiver design I've worked with, there's a remarkable amount of coupling among the boxes (usually for performance reasons). Especially if you're doing any "framed" signal processing (such as FFT-based filtering), you have those "frame" and "unframe" or "rebuffering" steps, which tend to add latency even when overall processor loading isn't the issue.
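
To make the framing point concrete, here's a rough sketch in C (with made-up names, not anything from the actual SDR1K code) of the kind of rebuffering step I mean -- the latency comes from having to sit on samples until a full frame has accumulated, no matter how fast the CPU is:

#include <complex.h>
#include <string.h>

#define FRAME 1024              /* FFT/processing frame size */

typedef struct {
    float complex buf[FRAME];   /* accumulate input until a frame is full */
    int fill;                   /* samples currently buffered */
} rebuffer_t;

/* Push n samples in; call process_frame() each time a full frame is
 * available.  Output lags input by up to FRAME samples -- that buffering
 * is the latency cost of framed processing, independent of CPU load. */
static void rebuffer_push(rebuffer_t *rb, const float complex *in, int n,
                          void (*process_frame)(float complex frame[FRAME]))
{
    while (n > 0) {
        int take = FRAME - rb->fill;
        if (take > n) take = n;
        memcpy(rb->buf + rb->fill, in, take * sizeof *in);
        rb->fill += take; in += take; n -= take;
        if (rb->fill == FRAME) {
            process_frame(rb->buf);
            rb->fill = 0;
        }
    }
}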

For instance, you might do your IF filtering in the frequency domain, but then do your modulation/demodulation on a continuous time series (a trivial example would be an AM detector that simply computes I(t)^2 + Q(t)^2).
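
That kind of detector runs sample by sample on the continuous stream, with no frame/unframe step at all -- something like this (just a sketch):

#include <math.h>

/* Sample-by-sample AM envelope detector on the continuous I/Q stream.
 * sqrtf() gives the true envelope; the bare I^2+Q^2 above is the
 * squared version of the same idea. */
static void am_detect(const float *i, const float *q, float *audio, int n)
{
    for (int k = 0; k < n; k++)
        audio[k] = sqrtf(i[k] * i[k] + q[k] * q[k]);
}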

How does changing sampling rates through the chain fit in? E.g., the classic technique of bringing the data in wideband, doing an FFT, then, for each of several streams, grabbing some adjacent FFT bins and converting them back to a lower sample rate, producing several lower-bandwidth parallel signals.
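
For illustration only, the bin-grabbing step might look something like this (written as a naive inverse DFT so it's self-contained; a real implementation would use FFTW plus proper windowing and overlap, and the names here are made up):

#include <complex.h>

/* Pull K adjacent bins around center_bin out of an N-point spectrum and
 * inverse-transform them, yielding one frame of that channel at the
 * decimated rate fs*K/N, shifted to baseband. */
static void extract_channel(const float complex spec[], int N,
                            int center_bin, int K,
                            float complex out[])    /* K output samples */
{
    const float two_pi = 6.2831853f;
    for (int n = 0; n < K; n++) {
        float complex acc = 0.0f;
        for (int k = 0; k < K; k++) {
            /* bins center_bin-K/2 ... center_bin+K/2-1, wrapped mod N */
            int bin = (center_bin - K / 2 + k + N) % N;
            /* placed at inverse-DFT positions -K/2 ... K/2-1 (mod K)  */
            int kk = (k - K / 2 + K) % K;
            acc += spec[bin] * cexpf(I * two_pi * kk * n / K);
        }
        out[n] = acc / K;
    }
}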

Commercial products like Matlab/Simulink do this (a bit clunkily, in some cases) and will actually generate real live DSP code (which is totally incomprehensible... but then, the argument is that nobody expects to read the object code from a C compiler either).

Labview, VEE, and their clones do this kind of thing too (although I'm pretty sure I wouldn't write hard real-time code in Labview).

Given that Matlab is available in an inexpensive academic edition, and that Octave provides (for free) many of Matlab's capabilities, one might want to make sure there's some way to do this. As a DSP developer, it's a lot more congenial (at least at the beginning of a development) to spend a few more bucks on a faster processor and accept the inefficiencies of Simulink.

Never for a minute would I expect something of that level of flexibility to be developed specifically for the SDR1K. I use it as an example of a fairly flexible, modularized approach to defining the signal processing chain.


The hardware control is unlikely to be a subject of much tinkering, once
it's completely remotable.

I fully agree, although I have some questions about how you'd integrate things like adjusting the phase-offset register of the DDS into the DSP stream. I suppose, at a basic level, as long as the interface exposes the raw DDS registers (in addition to whatever other features), you've got that fallback. This, by the way, was the problem we encountered trying to use hamlib for our work: the hamlib library wants to make everything look like a radio, and some aspects of the SDR1K aren't particularly "radio-like". We started out modifying hamlib, and then, when someone asked for a copy of our modified library, we discovered the issue with the GPL. (I should also point out that we, JPL, do now have a way to do GPL stuff.)
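
What I have in mind for that fallback is something along these lines -- purely illustrative prototypes, not the actual SDR1K register map or API:

#include <stdint.h>

/* High-level calls for the common cases, plus a raw-register escape
 * hatch so the DSP layer can still do things the "radio-like" API
 * never anticipated (e.g. nudging the DDS phase-offset register
 * mid-stream).  Names and widths are made up. */
int sdr_set_frequency(double hz);             /* normal path       */
int sdr_set_phase_offset(double radians);     /* convenience call  */
int sdr_write_dds_register(uint8_t addr,      /* raw fallback      */
                           const uint8_t *data, int len);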


The console is already in Python under Linux, and the consensus is that
in the new version, the "virtual radio" layer will be in Python too, in
all versions. Once again, that means that it's designed and written in
such a way as to yield the benefits of "modularization" without having
that feature designed in explicitly. That leaves, literally, only the
gui open for debate. Since the Linux version (and the eventual Mac
version) shares the good traits of Python and PyRo with the virtual
layer, the question is moot on that platform. It leaves only the Windows
gui up for debate.




Where we wind up is at a fork. My contention is that, in order to
achieve the kind of modularization in the DSP under Windows that already
exists under Linux, substantial modifications might have to be made that
would cause the two versions to develop independently.

Probably, if only because the interprocess communication mechanisms in Windows are different from those in *nix. Unless you want to create (or someone else has already created) some sort of generic DSP engine, but that strikes me as a huge ordeal, and realistically not worth it.

However, the fork can still have a lot of common code. When all is said and done, both sides will be in C, and the basic algorithms that need to be implemented are the same, so maybe it's a matter of suitable calling protocols to the grunt-level routines and OS-specific wrappers. I've never even contemplated this for a real-time application, and it might be unrealistic. My overall experience with shoehorning hard real-time DSP into any general-purpose OS has been quite grim. I'm quite impressed that it works well on both.
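
Roughly, the split I'm picturing is something like this (a sketch only, with made-up names -- the point is that the DSP core is plain portable C and only the glue differs per platform):

#include <stddef.h>

/* Portable core: identical source on Windows and *nix. */
void dsp_process_block(const float *in_i, const float *in_q,
                       float *out, size_t n);

/* Per-OS wrapper: only the thread/IPC/audio glue changes. */
#ifdef _WIN32
#include <windows.h>
static DWORD WINAPI dsp_thread(LPVOID arg)    /* Win32 threading glue */
{
    (void)arg;
    /* pull samples from the Windows audio/IPC layer, call
       dsp_process_block(), push the results back */
    return 0;
}
#else
#include <pthread.h>
static void *dsp_thread(void *arg)            /* POSIX threading glue */
{
    (void)arg;
    /* same loop, fed from the *nix audio/IPC layer */
    return NULL;
}
#endif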


Similarly the
gui, for those who prefer to keep it in C#.

If the interface is defined at an appropriate level, I see no reason why one would care what the gui is written in. It could be Excel macros, if you were really, really ambitious (or foolhardy <grin>).

Ultimately, all the UI does is push parameters to various pieces of the DSP and get telemetry back.
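
In other words, if the boundary is as narrow as something like the following (hypothetical names, of course), the choice of GUI language really doesn't matter:

/* Parameter/telemetry boundary between the GUI and the DSP engine.
 * Anything that can fill in and read these records can be the GUI. */
typedef struct {
    char   name[32];     /* e.g. "agc_hang_ms", "filter_low_hz" */
    double value;        /* new value pushed by the UI           */
} sdr_param_t;

typedef struct {
    double s_meter_dbm;  /* examples of telemetry flowing back   */
    double alc_level;
    double cpu_load;
} sdr_telemetry_t;

int sdr_set_param(const sdr_param_t *p);       /* UI -> DSP */
int sdr_get_telemetry(sdr_telemetry_t *t);     /* DSP -> UI */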

Of course, part of the UI is timing-critical things like audio input and playback (maybe?) or CW keying (or is CW keying part of the DSP chain? It gets back to the "variable data rate" thing I was talking about above).



James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875


