Let me expand on this after first agreeing with everything expressed here.

Phil, N8VB, is working on the ideal solution for us. With (say) two of his proposed virtual sound cards we can do the following:

1) Audio processing, such as microphone audio processing, will be doable by ANY program you want. You can take your favorite cheap free program or your favorite professional program and tweak audio to your heart's content. That will be the audio that goes through the audio chain in a pristine fashion. We will build some rudimentary graphic equalizer functions, etc., to be called by the console. NOTHING we are doing now in the code can be said to be anything but a stopgap while we get the professional sound program "tap in" done.

2) Audio processing on the output can be sent anywhere, to any other program. With another virtual tap, we can hook all sound card programs up to the radio. MixW, MMTTY, MMSSTV, Winwarbler, Writelog, etc. can all hook in here.

Not necessarily using N8VB's software, but we could also:

3) We are building in a tap for "RF" processing in the frequency domain. We will have a tap before and after the filter has been applied. What can be done there is going to amaze you.
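As a sketch of what such a frequency-domain tap could look like — a toy Python illustration, where `process_block`, `pre_tap`, and `post_tap` are hypothetical names, not the actual DttSP API:

```python
import numpy as np

def process_block(block, filt_response, pre_tap=None, post_tap=None):
    """Filter one block in the frequency domain, with optional user
    callbacks ("taps") before and after the filter is applied."""
    spec = np.fft.fft(block)
    if pre_tap is not None:
        spec = pre_tap(spec)          # experiment on the raw spectrum
    spec = spec * filt_response       # the radio's bandpass filter
    if post_tap is not None:
        spec = post_tap(spec)         # experiment on the filtered spectrum
    return np.fft.ifft(spec)
```

A real implementation would use overlap-add or overlap-save to avoid circular-convolution artifacts; the point here is only that each tap hands the complex spectrum to user code before returning it to the chain.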

The ring buffer stalls (which caused high-volume noise, screeches, pops, and grinds) are fixed.
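One common way to keep a ring buffer from stalling is to drop the oldest audio when the consumer falls behind. A toy Python sketch of that idea (an illustration, not the actual fix in the code):

```python
from collections import deque

class RingBuffer:
    """Toy ring buffer that overwrites the oldest samples on overrun
    instead of stalling the audio callback."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def write(self, samples):
        self.buf.extend(samples)  # deque silently drops the oldest

    def read(self, n):
        # hand back up to n samples without ever blocking
        return [self.buf.popleft() for _ in range(min(n, len(self.buf)))]
```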

The CPDR has been modified to produce up to 105% max on peaks.
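For illustration only — this is NOT the actual CPDR algorithm — a power-law compander that normalizes its output so peaks land at 105% of nominal full scale might look like:

```python
import numpy as np

def compand(x, exponent=0.5, target_peak=1.05):
    """Toy compander sketch: apply a power-law gain to the magnitude
    (boosting low-level samples), then scale so the largest peak lands
    at target_peak (105% of nominal full scale)."""
    mag = np.abs(x)
    y = np.sign(x) * mag ** exponent
    peak = np.max(np.abs(y))
    if peak > 0:
        y *= target_peak / peak
    return y
```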

On the Linux side, the facilities Phil is providing are native to JACK.

The TX audio chain is now in a very primitive state: basically a clipper, and otherwise linear unless you engage the COMP or CPDR. We are working to provide the perfect adaptation facility.

Bob

Frank Brickle wrote:

The move to 1.4 and beyond carried along with it some significant changes that didn't receive a lot of attention at the time. In particular, the decision was made to move all voice and speech processing (except compression and compansion) out of the DSP core altogether.

What this means is that processing like EQ or split-band compression will be on the "console" side.

A number of factors played into this decision. One of them is that Linux already offers a smorgasbord of facilities for this kind of outboard processing, and it seemed undesirable to reinvent the wheel.

Another consideration is that the audio chain is likely to be a favorite area for experimentation. Since audio processing is relatively independent of other aspects of the DSP, it seemed most reasonable to put the audio out where people could play with it and program for it most easily. Thus the decision to put it out where much of the experimentation and independent development has been taking place already.

Nevertheless: as far as the existing EQ is concerned, there is no technical limitation preventing the kind of shaping that Alan is asking for. The facility exists to apply spectral shaping through an arbitrary curve expressed as finely as the DSP blocksize will allow. As with many features of the radio -- in fact, as with most reasonably interesting technology -- the current limitations are practical consequences of making a comprehensible user interface. Going beyond those limitations is really an adventure in interface design.
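The per-bin shaping described above can be sketched in a few lines of Python. This is illustrative only; `spectral_shape` and its dB-curve argument are assumptions, not the actual DSP code:

```python
import numpy as np

def spectral_shape(block, curve_db):
    """Apply an arbitrary gain curve, one dB value per FFT bin, so the
    shaping is as fine as the blocksize allows.  curve_db must have
    len(block)//2 + 1 entries (one per real-FFT bin)."""
    gains = 10.0 ** (np.asarray(curve_db, dtype=float) / 20.0)
    spec = np.fft.rfft(block) * gains
    return np.fft.irfft(spec, n=len(block))
```

A flat 0 dB curve leaves the block untouched; any other curve -- notch, tilt, presence peak -- is just a different set of per-bin gains.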

73
Frank
AB2KT