I think what we can all agree on is:

1) right tool for the right job
2) right level of knowledge to use the tool

:)

there is no single magic bullet for every problem.

Paula

On 2018-07-26 21:11, robert bristow-johnson wrote:

> ---------------------------- Original Message ----------------------------
> Subject: Re: [music-dsp] Creating new sound synthesis equipment
> From: "Sound of L.A. Music and Audio" <solastu...@gmx.de>
> Date: Thu, July 26, 2018 3:16 pm
> To: music-dsp@music.columbia.edu
> --------------------------------------------------------------------------
> 
>> I agree that FPGAs offer design techniques that cannot be done with
>> DSPs. But I hardly see them realized in musical instruments. The
>> reason might be that most people switch from C++ to the FPGA world and
>> merely copy their methods over to VHDL, so they do not make use of all
>> of an FPGA's abilities. Another point is inspiration about what one
>> can do :-)
>> 
>> What I mostly see is the use of pure calculation power, and there the
>> advantage of FPGAs keeps shrinking. When I started with FPGAs, there
>> were many more reasons to use them than there are nowadays.
> 
> from what i have seen, the need to use FPGAs for synthesis has to do 
> with polyphony of, perhaps, hundreds of voices.  at least many dozens. 
> 
> this doesn't reveal anything that hadn't been public knowledge before, but 
> the approach to synthesis hardware at Kurzweil Music Systems has been with 
> dedicated ASIC chips (which are expensive to spin), though they may since 
> have migrated to FPGA.  but the use of either, over the choice of a DSP (TI 
> or SHArC) or an ARM, comes down simply to the number of voices.  if your 
> application is, say, 20 voices or fewer, a single DSP can do virtually any 
> previously-used synthesis method, even additive.  with a 245 MHz SHArC, 
> there are about 2500 instructions per sample at Fs=96 kHz.  20 voices would 
> leave more than 100 instructions per sample per voice.  but this budget 
> needs to be traded against the bandwidth needed to do channel effects on 
> the mixed voices. 
> 
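Just to make that arithmetic concrete, here's a quick back-of-the-envelope
in C (a sketch only: it assumes one instruction per clock cycle, which
glosses over pipelining, SIMD, and memory stalls):

#include <stdio.h>

int main(void)
{
    const double clock_hz   = 245e6;  /* 245 MHz SHArC */
    const double fs_hz      = 96e3;   /* Fs = 96 kHz   */
    const int    num_voices = 20;

    double instr_per_sample = clock_hz / fs_hz;              /* ~2552 */
    double instr_per_voice  = instr_per_sample / num_voices; /* ~128  */

    printf("instructions per sample:           %.0f\n", instr_per_sample);
    printf("instructions per sample per voice: %.0f\n", instr_per_voice);
    return 0;
}
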
> even though i have never myself developed anything for any FPGA (i come from 
> the time of PALs), i still think the non-recurring engineering (NRE) costs 
> are much higher for an FPGA than for an off-the-shelf CPU or DSP. 
> 
>> When talking to musicians I often hear that MIDI processing and knob
>> scanning can be done with a little microprocessor because MIDI is slow.
>> In turn, there is supposedly no need for fast MIDI, since people cannot
>> press that many keys and knobs at the same time, "we" cannot hear MIDI
>> jitter since musicians do not stick exactly to the measure anyway, and
>> so on.
>> 
>> The facts are different, and outside the music business this is not
>> even a subject of discussion. In the industrial field, data transmission
>> speed, bandwidth, jitter, and phase noise are calculated, and the design
>> is done correctly to avoid issues.
>> 
>> MIDI appeared to me to be a limiter on the evolution of synthesizers as
>> soon as I recognized it and understood the needs. I have had a million
>> talks about that too. You may know about my self-designed high-speed
>> MIDI. The strange thing about that is that the serial transmission rate
>> of simple PC UARTs already exceeded 900 kbaud 15 years ago, while MIDI
>> was still stuck at 31.25 kbaud. 
> 
> if we just let MIDI be what MIDI is primarily intended for, i still think the 
> slow baud rate, opto-isolators, and the mostly simple protocol will still 
> serve live music applications well.  but merging MIDI streams from several 
> devices will always result in issues, because we're pushing MIDI beyond its 
> original intent.  as it is, we can do about 1500 events per second (assuming 
> Running Status on the MIDI channel messages) or about 1000 events per second 
> without Running Status.  that's not a bad sample rate for knobs, foot pedals, 
> a **single** keyboard player, even the output of control signals for envelope 
> or LFO (having rates of 10 Hz or less).  but controlling a dozen devices with 
> a single MIDI stream gets to be problematic. 
> 
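Those event-rate numbers fall straight out of the MIDI 1.0 wire format:
31250 baud, 10 bits per byte on the wire (start + 8 data + stop), and 2- or
3-byte channel messages. A quick sanity check in C:

#include <stdio.h>

int main(void)
{
    const double baud          = 31250.0; /* MIDI 1.0 serial rate       */
    const double bits_per_byte = 10.0;    /* start + 8 data + stop bits */
    double bytes_per_sec       = baud / bits_per_byte;  /* 3125 B/s     */

    /* a channel message is 3 bytes, or 2 with Running Status */
    printf("events/sec without Running Status: %.0f\n",
           bytes_per_sec / 3.0);                        /* ~1042 */
    printf("events/sec with Running Status:    %.0f\n",
           bytes_per_sec / 2.0);                        /* ~1562 */
    return 0;
}
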
>> On 26.07.2018 at 12:30, pa...@synth.net wrote:
>>> 
>>> I think where FPGAs score is in their ability to do lots of things at
>>> once, something not possible with a CPU or DSP. So going from mono to
>>> poly is often quite simply a copy/paste (OK, I'm oversimplifying).
>>> 
>>> I 100% agree about offloading stuff like USB and MIDI to a CPU, which
>>> is where the Zynq and Cyclone SoC ranges really come into their own. 
> 
> actually, for a small device, like a stomp box that takes MIDI (say, for a 
> continuous pedal control), MIDI events can be decoded and dispatched in the 
> "foreground" process.  whatever MIDI 1.0 you receive, decoding the MIDI bytes 
> takes maybe a half dozen evaluate-and-branch instructions in the MIDI parser 
> state machine (i have C code that does this, and i'm sure others do too), and 
> executing the MIDI instruction should take just another dozen or so 
> instructions.  except for Program Change, which can take a lot more. 
> 
> but, if it's a full-on synth with dozens and dozens of layered voices, you 
> will need a central CPU anyway because the sound-generating modules (be they 
> FPGA or other chips) will be busy. 
> 
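For what it's worth, here is a minimal sketch of the kind of byte-parser
state machine rbj describes (not his actual code; the names are illustrative,
and SysEx/system-common handling is reduced to cancelling Running Status):

typedef struct {
    unsigned char status;  /* current (running) status byte; 0 = none */
    unsigned char data[2]; /* data bytes collected so far             */
    int           count;   /* data bytes received                     */
    int           needed;  /* data bytes this message requires        */
} midi_parser;             /* zero-initialize before first use        */

/* data bytes expected after a channel status byte (0x80..0xEF) */
static int data_bytes_needed(unsigned char status)
{
    switch (status & 0xF0) {
    case 0xC0:             /* Program Change   */
    case 0xD0: return 1;   /* Channel Pressure */
    default:   return 2;   /* note on/off, CC, pitch bend, ... */
    }
}

/* feed one incoming byte; returns 1 when a complete channel message
   is ready to dispatch from p->status and p->data, 0 otherwise */
int midi_parse_byte(midi_parser *p, unsigned char b)
{
    if (b >= 0xF8) return 0;      /* real-time bytes: handle elsewhere */
    if (b >= 0xF0) {              /* SysEx / system common: cancels    */
        p->status = 0;            /* Running Status; handling omitted  */
        return 0;
    }
    if (b & 0x80) {               /* new channel status byte           */
        p->status = b;
        p->count  = 0;
        p->needed = data_bytes_needed(b);
        return 0;
    }
    if (p->status == 0) return 0; /* stray data byte: ignore           */
    p->data[p->count++] = b;
    if (p->count < p->needed) return 0;
    p->count = 0;                 /* Running Status: keep p->status    */
    return 1;                     /* complete message                  */
}

Feeding each UART byte through this and dispatching on p->status really is
only a handful of compares and branches per byte, as rbj says.
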
> --
> 
> r b-j                         r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
