[LAD] Software sound

2014-08-28 Thread W.Boeke

Hi fellow audio developers,

This forum is apparently mainly about audio production. But there is another side to audio, namely: how do you create interesting and/or beautiful sounds in software? Many sound-generating programs try to emulate the sounds of vintage instruments as closely as possible, sometimes with impressive results, but software has many more possibilities than electro-mechanical or early electronic instruments.


I try to imagine how the Hammond organ was developed. There must have been a person with some ideas about how to generate organ-like sounds using spinning tone wheels, each capable of generating one sine waveform, combined using drawbars. He then implemented this idea, listening carefully to the results, adding and removing different components. The key clicks caused by bouncing contacts were a serious problem, but musicians seemed to like them, and they became part of the unique Hammond sound.


Compared to the technical possibilities of the past, software designers nowadays have a much easier life. A computer and a MIDI keyboard are all you need to try all kinds of sound creation, so why stick to reproducing the sounds of yore?


Maybe there are one or two eccentrics like me reading this post? In my opinion a software musical instrument must be controllable in a simple and intuitive way: not a synthesizer with many knobs, or an FM instrument with 4 operators and several envelope generators. You must be able to control the sound while playing. A tablet (Android or iOS) would be an ideal control device. And: not only sliders and knobs, but real-time, informative graphics.


As an example, let me describe an algorithm that I implemented in an open-source program, CT-Farfisa. I use virtual drawbars to control the different harmonics (additive synthesis). The basic waveform is not a sine, but is itself modelled with virtual drawbars. The basic waveform can have a duty cycle of 1, 0.7, 0.5, etcetera; the final waveform is shortened by the same amount. The beauty of this is that you can control the duty cycle with the modulation wheel of the MIDI keyboard, so it is easy to modify the sound while playing. The program has built-in patches named after existing instruments, but that is only meant as an indication: they do not sound very similar to those instruments. This description might sound a bit complicated, but coding it is not that difficult. Several attack sounds are also provided, which is very important for the final result. The program has a touch-friendly interface and runs under Linux (for easy development and experimentation) and Android (for playing).
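The duty-cycle idea described above might be sketched roughly as follows. This is a minimal illustration, not the actual CT-Farfisa code; the function name, the harmonic layout, and the choice of silence for the unused part of the cycle are my assumptions.

```python
import math

def drawbar_wave(phase, drawbars, duty=1.0):
    """One cycle of an additive waveform, compressed to `duty` of the period.

    phase    -- position in the cycle, 0..1
    drawbars -- amplitudes of harmonics 1..N (the virtual drawbars)
    duty     -- fraction of the period occupied by the waveform; the
                remainder of the cycle is assumed silent here
    """
    if phase >= duty:              # outside the shortened waveform: silence
        return 0.0
    p = phase / duty               # re-stretch phase into the active part
    return sum(a * math.sin(2 * math.pi * (k + 1) * p)
               for k, a in enumerate(drawbars))

# Example: three harmonics, duty cycle 0.7 (e.g. taken from the mod wheel)
samples = [drawbar_wave(i / 64, [1.0, 0.5, 0.25], duty=0.7) for i in range(64)]
```

While playing, only `duty` changes with the modulation wheel, which is what makes the sound easy to modify in real time.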


It is not my aim to provide another software tool that you can download and use or not, but to exchange ideas about sound generation. I know there are many techniques, e.g. waveguides, physical modelling, and granular synthesis, but I think it is often difficult to control and modify the sound in an intuitive way while playing. By the way, did you know that Yamaha, creator of the famous DX7 FM synth, had only 1 or 2 employees who could really program the instrument?


Wouter Boeke
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Software sound

2014-08-28 Thread W.Boeke

Hi Ralph,
You didn't really read my post, did you? You are slightly off-topic; your reply reads like the catalogue of a keyboard shop. Look at the name of this forum. Linux: that is about software. Developers: those are people interested in creating something new, not in purchasing all kinds of gear.


Still: thanks for the information.
W.

On 08/28/2014 11:53 AM, Ralf Mardorf wrote:

Programming a sound with any kind of synthesis requires knowledge and
many parameters. But there's another way to easily make new sounds
based on existing sounds. E.g. with the Yamaha TG33's joystick, the vector
control records a mixing sequence in which the volume and/or the tuning of
four sounds can be mixed. Since you mentioned touch screens: Alchemy for the
iPad allows you to morph sounds by touching the screen, similar to the
joystick of the TG33, but it can also be used to control filters,
effects and the arpeggiator. There are already several old-school synths and,
AFAIK, new workstations, especially new proprietary virtual synths, that
provide what you describe. Btw. two of the four TG33 sounds are FM sounds,
not as advanced as those of the DX7; the other two are AWM (sound
samples). Regarding the complexity of DX7 sound programming, the biggest
issue is that it has no knobs. There are books about DX7
programming, such as Yasuhiko Fukuda's, but IMO it's easier to learn by
trial and error. JFTR, the Roland Juno-106, for example, provides just a few
controllers, but you can easily get a lot of sounds without much
knowledge: http://www.vintagesynth.com/roland/juno106.php . In theory
this could be emulated by a virtual synth; in practice the hardware can
use specialized microchips that produce analog sound, which can't be
emulated that easily. Not to mention that at the end of the computer's
sound chain there is always a sound card, so emulating several
synths with the same computer is not the same as having several real
instruments, a B3, Minimoog etc.






Re: [LAD] Software sound

2014-08-28 Thread Ralf Mardorf
On Thu, 2014-08-28 at 14:52 +0100, W.Boeke wrote:
 You didn't really read my post, did you? You are slightly off-topic;
 your reply reads like the catalogue of a keyboard shop. Look at the name of
 this forum. Linux: that is about software. Developers: those are people
 interested in creating something new, not in purchasing all kinds of gear.

What I wanted to point out is that one feature is missing from virtual
Linux synths that could do what you are looking for: via joystick, mouse
and/or touchscreen, record mix sequences of sound volumes and (de)tuning,
manipulate filters, effects and/or the arpeggiator, and let this mix
become part of the sound that is stored in a preset.
Yoshimi would be a good candidate for such a feature.

My point is that there are already good solutions provided by old synths
and by proprietary virtual synths for other OSs. You could adapt this and
provide it for Linux: use a synth like Yoshimi, use the available
sounds, and let users make new sounds with a mouse, joystick and/or
touchscreen, just by recording mix sequences that manipulate the volume
or a filter etc. Vector controls like this can produce amazing
sounds and have been available since the '80s on standalone synths;
proprietary virtual synths adopted them. I'm not aware of any
virtual Linux synths providing this. Generating a new sound with whatever
synthesis requires know-how and some effort; taking existing sounds and
manipulating them with a joystick or similar is easy to do.

Regards,
Ralf
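The vector-mixing idea described in this thread could be sketched roughly as follows. This is only an illustration of the general technique, not the TG33's actual scheme; the `VectorMixer` class name and the bilinear four-corner crossfade layout are my assumptions.

```python
class VectorMixer:
    """Record and replay a 2-D joystick path that crossfades four sources.

    Joystick position (x, y) in [-1, 1] maps to weights for four corner
    sounds, a common vector-synthesis layout (illustrative, not the TG33's
    exact mapping).
    """
    def __init__(self):
        self.sequence = []            # recorded (time, x, y) events

    @staticmethod
    def weights(x, y):
        # Bilinear crossfade between the four corners; weights sum to 1
        return ((1 - x) * (1 - y) / 4,   # corner A: left/down
                (1 + x) * (1 - y) / 4,   # corner B: right/down
                (1 - x) * (1 + y) / 4,   # corner C: left/up
                (1 + x) * (1 + y) / 4)   # corner D: right/up

    def record(self, t, x, y):
        """Store one joystick event of the mix sequence."""
        self.sequence.append((t, x, y))

    def mix(self, sources, x, y):
        """Crossfade four sample values at joystick position (x, y)."""
        return sum(w * s for w, s in zip(self.weights(x, y), sources))
```

Replaying `sequence` against the synth's volume or filter parameters would give the "mix sequence stored in a preset" behaviour described above.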



Re: [LAD] Ardour sampler feature?

2014-08-28 Thread James Morris
On Wed, 27 Aug 2014 10:15:47 +0200
Thorsten Wilms s...@thorstenwilms.com wrote:

 On 26.08.2014 19:39, Devin Venable wrote:
  My dream feature?  Click on a segment, select convert to sample, and
  a new MIDI track appears, linked to a plugin sampler, ready to play.
 
 I could have used something like that a few times.
 
 Being an architecture astronaut by hobby, I've been wondering about
 the differences and commonalities between (audio-)sequencers and
 samplers. In both cases you have an abstraction on top of actual
 audio-files and playback start/stop. But playback is parameterized
 differently. The sequencer has a timeline position and transport
 state, while the sampler takes notes (and potentially a bunch of
 control events).
 
 Let's think of a track in a sequencer as actually being a mapping of a
 playlist to a playback graph. The playlist can be reduced to a
 sequence of events. The playback graph consists of at least a
 playback engine and may contain an effect chain.
 
 In typical sequencer use, for pure audio tracks, a playlist will
 contain regions that map to audio-files. That part of it can be
 reduced to a sequence of audio-sample-values, as main input to the
 playback engine. There may be other events/automation, all being
 inputs to the playback graph. The general idea is that you always
 deal with sequences of events as inputs to a playback graph.
 
 Now all of that is coupled to a single, global playback control, 
 consisting of transport state and timeline position. What if you
 could choose to decouple tracks from global control and make them
 take note-events for playback control instead? If the playback engine
 can vary playback speed in relation to note value, you have a
 basic sampler. If the playback engine also offers realtime 
 pitch-shifting/time-stretching and formant control, you have
 wonderland.
 
 Note that you would not be limited to controlling a decoupled track's 
 position and state by notes. There could also be direct control with 
 start/stop/speed (including negative) and locate/go-to events. Tracks 
 that play tracks ... Mmwuhahahaha!
 

That's a more detailed description of what I thought about. I think it
could still be possible to have layers - like in a sampler - but
another editor view would be required. I think also loop points would
work fine. The problem of course would be representation, and, well,
development.
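The decoupling idea sketched in this thread can be illustrated in a few lines: a track that ignores the global transport and instead takes a note event, which sets its playback rate. This is a minimal sketch under my own naming; the equal-temperament rate mapping and nearest-neighbour resampling are illustrative simplifications, not a proposal for Ardour's actual engine.

```python
def note_playback_rate(note, root=60):
    """Playback-speed ratio so a sample rooted at `root` sounds at `note`.

    Equal temperament: each semitone multiplies the rate by 2**(1/12).
    """
    return 2.0 ** ((note - root) / 12.0)

def resample(samples, rate):
    """Naive variable-speed playback: nearest-neighbour resampling."""
    out, pos = [], 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

# A note-on for C5 (MIDI note 72) plays the region one octave up,
# i.e. at double speed and half length -- the "basic sampler" case.
region = [0.0] * 1000                      # stand-in for region audio
played = resample(region, note_playback_rate(72))
```

Realtime pitch-shifting/time-stretching would replace `resample` with something far more elaborate, but the control flow (note event in, parameterized playback out) stays the same.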