Hi Paula and others
I have written more articles than I can count about where and when to use
FPGAs for wave synthesis, so just a few short words in reply:
I agree that FPGAs offer design techniques that cannot be realized with
DSPs, but I rarely see those techniques actually used in musical
instruments. The reason might be that most people come to the FPGA world
from C++ and merely try to copy their methods into VHDL, so they never
exploit everything the hardware can do. Another factor is the
inspiration to see what is possible :-)
What I mostly see is the use of pure calculation power, and there the
advantage of FPGAs keeps shrinking. When I started with FPGAs there were
many more reasons to use them than there are nowadays.
Starting with my own system, I implemented things like S/PDIF
transceivers, PWM/PDM converters and sample rate converters in the FPGA
just to overcome the limits of the chips available then. Today much of
that is obsolete: dedicated chips exist, and functions like S/PDIF are
already built into microprocessors. There is no need to waste FPGA
resources on them.
I see the same with my clients:
For instance, a wave generation application from 2005 for one of my
clients, originally done with a Cyclone II FPGA, now runs on two ARM
processors, since they have overtaken the FPGA (and are cheaper!). A
radar application from 2008, done in a Virtex with a PowerPC, is now
partly performed by an Intel i7 multi-core system - even the FFT! Same
reasons.
So the range of "must be an FPGA" applications in the audio field is
shrinking. That is why I wonder why music companies are starting with
FPGAs now. When I approached companies to present my synth, interest was
low. Maybe FPGAs were too mysterious for some of them. :-)
Well, the advantage of the high sample rate has always been there, but
people mostly do not see the necessity. Back then the discussion was
about increasing audio quality to 96 kHz - now everybody listens to MP3,
so what do we need better quality for?
What changed?
The audio and music clients hardly have any requirement for better
hardware, which is also a matter of understanding. I recently had a
discussion about bandwidth in analog systems and which sample rate we
have to apply to represent the desired pulse waves correctly. The
audio/loudspeaker experts arrived at totally different results than the
engineers for ultrasonic wave processing, who were closer to my
proposals, although both had the same frequency range in mind.
Obviously, physics works differently in the music business.
Maybe I should post those questions here too :-)
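To make the sample-rate question concrete, here is a small
back-of-the-envelope sketch (my own illustration, not from the
discussion above): a pulse wave has harmonics at integer multiples of
its fundamental, and the sample rate caps how many of them a digital
system can represent at all.

```python
def harmonics_below_nyquist(f0_hz, fs_hz):
    """Number of harmonics of a waveform with fundamental f0_hz
    that lie no higher than the Nyquist frequency fs_hz / 2."""
    return int((fs_hz / 2) // f0_hz)

# A 1 kHz pulse wave:
print(harmonics_below_nyquist(1000, 48000))   # -> 24
print(harmonics_below_nyquist(1000, 192000))  # -> 96
```

Everything above the cutoff is simply gone, so the two camps can easily
talk past each other depending on whether they care about those upper
harmonics.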
The same goes for MIDI (my favourite topic):
When talking to musicians I often hear that MIDI processing and knob
scanning can be done with a little microprocessor because MIDI is slow
anyway. Likewise, there is supposedly no need for faster MIDI, since
people cannot press that many keys at the same time, and "we" cannot
hear MIDI jitter anyway, since musicians do not stick exactly to the
beat either, and so on.
The facts are different, and outside the music business they are not
even a subject of discussion. In the industrial field, data transmission
speed, bandwidth, jitter and phase noise are calculated, and the design
is done correctly to avoid issues.
MIDI appeared to me as a limiter on the evolution of synthesizers as
soon as I recognized that and understood the requirements. I have had a
million talks about that too. You may know about my self-designed
high-speed MIDI. The strange thing is that the serial transmission rate
of simple PC UARTs already exceeded 900 kbit/s fifteen years ago, while
MIDI was still stuck at 31.25 kbit/s.
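That 31.25 kbit/s figure translates into concrete latency numbers. A
quick sketch of the arithmetic (my own, using the standard serial
framing of one start bit, eight data bits and one stop bit per byte):

```python
MIDI_BAUD = 31250   # classic DIN MIDI bit rate
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit

def wire_time_ms(n_bytes, baud=MIDI_BAUD):
    """Time in milliseconds to push n_bytes through the serial link."""
    return n_bytes * BITS_PER_BYTE / baud * 1e3

note_on_ms = wire_time_ms(3)        # one 3-byte Note On: ~0.96 ms
chord_ms = wire_time_ms(3 + 9 * 2)  # ten-note chord with running
                                    # status (21 bytes): ~6.72 ms
```

So the last note of a ten-finger chord leaves the wire almost 7 ms after
the first, before any scanning or processing latency is even added.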
I think THIS is also a big reason why some people moved to VST: to avoid
wiring and synchronisation issues. Although even with USB they may still
run into problems getting their ten-finger chord turned into sound
quickly enough under Windows :-)
On 26.07.2018 at 12:30, pa...@synth.net wrote:
Rolf,
My tuppence worth ;)
I think where FPGAs score is in their ability to do lots of things at
once, something not possible with a CPU or DSP. So going from Mono to
poly is often quite simply a copy/paste (ok, I'm over simplifying it).
I 100% agree about offloading stuff like USB and MIDI to a CPU, which
is where the Zynq and Cyclone SoC ranges really come into their own.
The main advantage over softsynths (like VSTs, etc) is that musicians
prefer a "tactile" surface rather than a keyboard/mouse when "playing".
Though I know a lot of composers (including film composers) who prefer
scoring using VSTs.
I also agree that MIDI is now at a stage where it's not adequate to
meet the demands of modern synths (VST, DSP, FPGA, or otherwise). Yes,
you can use NRPNs, and yes, OSC exists, but neither of these is widely
used. There are rumours about a MIDI V2, though I suspect that's a long
way away from being ratified and set in stone.
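For a sense of what addressing a modern synth's many parameters costs
over classic MIDI: an NRPN write is four Control Change messages
(CC#99/98 select the parameter, CC#6/38 carry the 14-bit value). A
sketch of the byte sequence (parameter number and value here are made
up for illustration):

```python
def nrpn_bytes(channel, param, value):
    """Byte sequence for one 14-bit NRPN parameter write
    (four Control Change messages, no running status)."""
    status = 0xB0 | (channel & 0x0F)      # Control Change on channel
    return bytes([
        status, 99, (param >> 7) & 0x7F,  # NRPN MSB (CC#99)
        status, 98, param & 0x7F,         # NRPN LSB (CC#98)
        status, 6,  (value >> 7) & 0x7F,  # Data Entry MSB (CC#6)
        status, 38, value & 0x7F,         # Data Entry LSB (CC#38)
    ])

msg = nrpn_bytes(0, 300, 10000)
# len(msg) == 12, i.e. ~3.84 ms of wire time per parameter update
# on a 31.25 kbit/s DIN link
```

Twelve bytes per knob movement is a lot when a patch has hundreds of
continuously swept parameters.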
So in short, I think FPGAs have lots to offer, but I also believe that
DSP/CPUs have plenty more to offer too.
Paula
On 2018-07-24 15:59, rolfsassin...@web.de wrote:
Hello Theo
the word "hip" regarding FPGAs seems to be a good hint. In several
music groups the new music machines are discussed heavily, in terms of
analog modelling and recreating those formerly analog machines in the
digital way we know. At first sight FPGAs seem the logical decision, but
I and others doing such designs professionally have a clear view of
design speed, cost and amount of work, and in many cases FPGAs are not
acceptable and fail totally in comparison to DSPs. Yes, FPGAs have
become cheaper and more powerful in the last decade, but so have DSPs
and CPUs. If you look at today's options with multi-core CPUs and GPUs,
which VSTs can take advantage of, I hardly see cases where FPGAs can do
well.
I have tried FPGA sound synthesis myself and also completed some
designs, but found that MIDI handling is better hosted in a soft core,
or in a hard core like the ARM in the Cyclone V architecture. A 600 MHz
ARM design does everything required for MIDI quickly and is totally
sufficient. The same goes for USB: writing a USB core in an FPGA is no
fun, I can tell you, and is also better done in a CPU/MCU architecture.
Things like changes, new requirements, testing and simulation are much
easier and can be done in the CPU/PC domain. We have sandboxes, test
boxes and trigger cases all available in Python and C/C++ libraries,
ready for use, and they can be accessed by anyone for free. Effective
high-end FPGA development and simulation requires a professional
license.
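As an illustration of that PC-domain workflow (hypothetical code, not
from any real project): a floating-point "golden model" in plain Python
can serve as the reference against which a quantized DSP/FPGA-style
datapath model is checked, long before any HDL simulation.

```python
import math

def sine_ref(n, f, fs):
    """Floating-point reference oscillator, one sample at index n."""
    return math.sin(2 * math.pi * f * n / fs)

def sine_q15(n, f, fs):
    """Crude model of the same oscillator with its output quantized
    to 16-bit (Q15), as a fixed-point datapath might produce."""
    return round(sine_ref(n, f, fs) * 32767) / 32767

# Testbench: the quantized model must track the reference closely.
for n in range(1000):
    assert abs(sine_q15(n, 440, 48000) - sine_ref(n, 440, 48000)) < 1e-3
```

The same loop, pointed at captured hardware output instead of
`sine_q15`, becomes a regression test that any team member can run for
free.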
What should be discussed regarding MIDI and accurate timing are things
like channel handling and controllers. Today's synthesis units and VSTs
have tons of parameters, and MIDI does not really support this. It is
already a hassle to join two or more controllers to get 16 channels to
control a DAW, and then add a third one to play the tunes.
Synchronisation is an issue too.
Rolf
*Sent:* Sunday, 22 July 2018, 22:21
*From:* "Theo Verelst" <theo...@theover.org>
*To:* "A discussion list for music-related DSP"
<music-dsp@music.columbia.edu>
*Subject:* [music-dsp] Creating new sound synthesis equipment
Hi DSPers,
I would like to reflect a bit on creating (primarily music) synthesis
machines, or possibly software - a dream some people have had since,
let's say, the first (mainly analog!) Moogs in the 60s. What is that
idea of creating a nice piece of electronic equipment that produces
blips and mieaauuws, thundering basses, imitation instruments, and, as
has recently been revived, all kinds of more or less exciting new
sounds that may never have been used in music before?
For some it's a designer's dream to create exactly the sound they have
in mind for a piece of musique concrète; for others it's perhaps to
compensate for a lack of compositional skill or instrumental training,
so that through the use of one of those cool synthetic sounds they may
express something which otherwise would be doomed to stay hidden and
unknown.
Digital computer programs for sound synthesis are in some sense thought
to take over from the analog devices and from digital sound synthesis
machines like "ROMplers" and analog synthesizer simulations. But that
has not become the decisive reality so far: there is quite a renewed
interest in those wonderful analog synthesis sounds, various
manufacturers recreate old machines, and some advanced ones make new
ones, too.
Even though most folks at home will undoubtedly listen to digital music
sources most of the time, there is still a lot of effort in the analog
domain, and obviously a lot of attempts at processing digital sound to
achieve a certain target quality, or coolness of sound, or something
else.
Recently there have been a number of interesting combinations of analog
and digital processing, as well as specific digital simulation machines
(of analog-type sound synthesis) like the Prophets (DSI) and the
Valkyrie (Waldorf "Kyrie", IIRC), based on FPGA high-sampling-frequency
digital waveform synthesis, among others.
I myself did an Open Source hard- AND software digital synthesizer
design based on a DSP ( http://www.theover.org/Synth ) over a decade
ago, before all this was considered hip, and I have to say there is
still good reason for hardware over software synthesis, even though I
can of course understand that computers will likely get better and
better at running quality synthesis software. At the time I made my
design, I wanted to try out the limits that mattered to me as a
musician: extremely low and very stable latency (one audio sample, with
accurately timed MIDI message reading in programmable logic) and a
straight signal path (no "Xruns" ever, no missed samples or re-sampling
ever, no multiprocessing quirks, etc.). My experience is that a lot of
people just want to mess around with audio synthesizers in a box! They
like sounds and turning knobs, and if a special chip gives better sound
- for instance because of higher processing potential than a standard
processor - they like that too, as well as the absence of strange
software sound- and control-interface latency.
I am quite sure a lot of corners are being cut in many
digital-processing-based synthesis products, even if the makers aren't
aware of it - for instance with respect to how sample reconstruction
actually behaves compared with idealized design theories, or the
hoped-for congruence between the Z transform and a proper Hilbert
transform, which is unfortunately a fairy tale. It is possible to
create much better-sounding synthesis in the digital domain, but it
will still demand a lot of processing power, so people interested in
FPGA acceleration, parallel software, supercomputing, etc., may well
have a hobby for quite a while to come, in spite of all kinds of ads
for music software suggesting perfection is within reach!
Theo V
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp