Re: [linux-audio-dev] Catching up with XAP

2003-01-17 Thread Tim Hockin
> > Does continuous control mean continuous sound?
> 
> No, because one of the controls is often gate or amplitude.
 
But that is the result of some other control - by default, these things are
always on, they may be gated or muted, but they are oscillating.
 
> (Analogue) monosynths do not have init latched values. I guess if you're
> trying to mimic a digital monosynth you might want a VOICE, but I can't
> see how it would be anything but confusing when you're trying to implement
> a monosynth model.

> Isn't the easiest thing just to make the instrument declare whether it's
> polyphonic or not? If it is (NB it can have a polyphony of one) it will
> receive VVIDs; if not, it won't.

So I *think* the confusion I have been having is that when you say
mono-synth, I think TB303 or Juno.  A synth that has poly=1.

I guess it is reasonable (and nice) to not have to deal with VOICE_ON/OFF
for things like a modular synth module (essentially an oscillator).

This revelation came when I started trying to think up things that didn't
need VOICE, and the one that came to mind was a theremin.  VOICE makes
absolutely no sense for this.  It is always on and
ready to go, just waiting for some control (hand-distance or something).
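
To make the theremin example concrete, here is a toy sketch (control names
and mappings are entirely invented; this is not XAP API) of a synth with no
VOICE control at all, whose output depends only on continuous controls:

```python
import math

# Toy continuous-control "theremin": no VOICE on/off events at all.
# Control names and the pitch/volume mappings are invented for illustration.
class Theremin:
    def __init__(self, rate=48000):
        self.rate = rate
        self.freq = 0.0      # set by the pitch control
        self.amp = 0.0       # set by the volume control
        self.phase = 0.0

    def control(self, name, value):
        if name == "pitch_hand":            # hand distance mapped to Hz
            self.freq = 220.0 * (2.0 ** value)
        elif name == "volume_hand":         # hand distance mapped to amplitude
            self.amp = max(0.0, min(1.0, value))

    def render(self, nframes):
        # Always "on"; silence just means the amplitude control is at zero.
        out = []
        for _ in range(nframes):
            out.append(self.amp * math.sin(2.0 * math.pi * self.phase))
            self.phase += self.freq / self.rate
        return out

t = Theremin()
silent = t.render(4)             # no controls received yet: pure silence
t.control("pitch_hand", 1.0)     # one octave above 220 Hz
t.control("volume_hand", 0.5)
audible = t.render(4)
```

Note there is no note-on anywhere: the instrument is ready the moment it is
instantiated, exactly as described above.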

So I've come around somewhat on this.  However, what I don't see is how
these things can be polyphonic, with the exception of multiple channels
(which are essentially different instances with some shared stuff).

So am I finally "Getting It"?




Re: [linux-audio-dev] Catching up with XAP

2003-01-16 Thread David Olofson
On Thursday 16 January 2003 18.39, Fons Adriaensen wrote:
[...]
> 2. A poly synth. Here normally 'a new note is a new note', and
> things like the effect described above are not possible because
> the synth does not know the relations between the existing set
> of notes and any new ones. Another example: you play a 3-note chord,
> and then a second one, and you want notes to slide individually
> from the first chord to the second. Once your masterpiece is in
> MIDI format it's impossible to find out which notes are related.
>
> Of course, if you try to play this on a keyboard, you cannot even
> express what you want, but that's only a limitation of the
> interface, and should not imply that it can't be done.

Right. With a guitar synth, you generally have individual control of 
6 channels, so that would be one controller you can use for this. A 
tablet with two tools and/or using X and Y as two pitch controls 
would be another alternative.


> If you look
> beyond the traditional 'pop' music scene, lots of composers are
> using other means to enter their scores, such as scripts or even
> algorithms.

Yes, that's a good point.

I also think it's important to note that score editors, piano rolls 
and the like can make use of this rather easily. The easiest way is 
to have an optional feature that reuses the released VVID with the 
closest pitch. You could also insert that information manually. 
(Obviously, both methods work even if the data was recorded from a 
keyboard or other controller.) In a piano roll, you could just link 
notes with "rubber band" lines from the end of one note to the start 
of the next, or something like that, to mark them as chained.
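
A sequencer-side sketch of that optional feature might look like this
(purely illustrative; the function and data layout are made up, not from
any XAP draft):

```python
# Sketch of the optional sequencer feature: when a new note starts, reuse
# the released VVID with the closest pitch, chaining the notes together.

def pick_chained_vvid(released_notes, new_pitch):
    """released_notes: list of (vvid, pitch) still in their release phase.
    Returns the VVID to reuse, or None to start an unrelated note."""
    if not released_notes:
        return None
    vvid, _pitch = min(released_notes, key=lambda n: abs(n[1] - new_pitch))
    return vvid

# First chord released, second chord starting: each new note chains to the
# nearest old one, giving per-note slides between the chords.
released = [(10, 60.0), (11, 64.0), (12, 67.0)]   # C, E, G
chained = []
for pitch in (61.0, 65.0, 69.0):                  # C#, F, A
    v = pick_chained_vvid(released, pitch)
    chained.append(v)
    released = [n for n in released if n[0] != v]  # each VVID reused once
```

Manual chaining, or "rubber band" lines in a piano roll, would simply
bypass the closest-pitch heuristic and supply the VVID directly.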


> What should be clear from this is that, as a result of the
> limitations of MIDI, a poly synth is *not* the same thing as a set
> of mono synths.
>
> If you want that (polyphony by a set of mono synths) the only way
> to get it is by abusing the channel mechanism. This forces you to
> work in a way that is completely different from normal poly mode,
> which is extremely impractical.

Yes... Especially if you have a braindead sequencer that won't let 
you edit multiple channels at once in the piano roll. :-(


> Anyway channels are not meant for
> this, they are meant to multiplex data intended for different
> devices over a single cable.

Exactly. Using multiple channels with the same patch might even 
result in significant waste of resources, unless the synth is smart 
enough to share data internally. If the synth has internal channel 
insert effects, you may not even be able to avoid running multiple 
instances of any effects you use.


> The explicit use of VVIDs would allow us to unify the interface to
> the 'normal' (in the MIDI sense) polyphonic synth, and the 'set of
> monophonic synths'.

Yes.


> And it would indeed allow the player to take the normally automatic
> voice assignment into his own hands, but it does *not* force him to
> do so.

Right. You have broken MIDI-style polyphony (can't tell notes with 
the same pitch apart), completely individual notes, as well as chained 
notes, all with the same interface.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-16 Thread David Olofson
On Thursday 16 January 2003 12.09, Steve Harris wrote:
[...]
> > 2) continuous control - this includes things like a violin, which
> > receives streams of parameters.  This gets a VVID for each new
> > voice.  If you want glissando, you would tell the violin synth
> > that fact, and it would handle it.
> >
> > Mono-synths need a VOICE, too.  Modular synth modules might not. 
> > They are effectively ON all the time, just silent because of some
> > other control. This works for modular-style synths, because they
> > are essentially pure oscillators, right?  Mono-synths can still
> > have init-latched values, and releases etc.  Does it hurt
> > anything to give modular synths a VOICE control, if it buys
> > consistency at the higher levels?  Maybe - I can be swayed on
> > that.  But a modular synth is NOT the same as just any
> > mono-synth.
>
> (Analogue) monosynths do not have init latched values. I guess if
> you're trying to mimic a digital monosynth you might want a VOICE,
> but I can't see how it would be anything but confusing when you're
> trying to implement a monosynth model.

Exactly. That's why I think it should be legal to not have a VOICE 
control input, or just ignore the events. And I can't see any 
problems with doing so.


> There are also effects that
> might want to receive note information, but wouldn't want (or
> expect) to have to deal with voices, eg. pitch shift.

That's OK, as long as the effects expect monophonic "instrument 
control data". For a harmonizer (which is basically a polyphonic 
pitch shifter), you'd still need to deal with VVIDs, just like 
anything that wants to track more than one voice.

Anyway, these are perfect examples of why thinking of synths and 
effects as different is just pointless. There's so much overlap that 
we can't even agree on what to look at to tell them apart - so why 
bother? The distinction is completely irrelevant anyway.


> Isn't the easiest thing just to make the instrument declare whether
> it's polyphonic or not? If it is (NB it can have a polyphony of one)
> it will receive VVIDs; if not, it won't.

I think it would be much less confusing to just have a hint that 
indicates whether or not the synth cares about VVIDs. If it has more 
than 1 voice, it will have to use VVIDs obviously, but this isn't 
*really* because the synth is polyphonic, but because it needs the 
VVIDs for addressing.

This isn't all there is to it, though. You *can* implement a 
polyphonic synth without real VVIDs. The distinction between real and 
fake VVIDs I originally wanted to make relates only to the synth side 
of VVIDs. Real VVIDs come with some "user space", whereas fake VVIDs 
are just integers that are unique from the synth POV.

So, there are actually *three* classes:

No VVIDs:
	Fixed value, or vvid field not initialized.
	(Means you should never even *look* at them!)

Fake VVIDs:
	Unique values only. You can use these to mark
	voices or whatever internally, so you *can*
	still do polyphonic; it'll just be more
	expensive.

Real VVIDs:
	Unique values that are also indices into a
	host-managed array of VVID Entries. A VVID
	Entry is a 32-bit integer that the synth may
	use in any way it desires. The idea is that
	synths can use this for instant voice lookup,
	instead of implementing hash-based searching
	or similar to address voices.
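
As a sketch of how a synth might use the "real VVID" user space (the class
and method names here are invented for illustration, not taken from any
actual XAP header):

```python
# Sketch of "real VVID" addressing: the host owns an array of 32-bit VVID
# Entries, and the synth stores a voice index there for O(1) lookup.

NO_VOICE = -1

class Host:
    def __init__(self, n_vvids=256):
        # One "user space" slot per VVID, owned entirely by the synth.
        self.vvid_entries = [NO_VOICE] * n_vvids

class Synth:
    def __init__(self, host, polyphony=4):
        self.host = host
        self.free = list(range(polyphony))
        self.voice_pitch = [0.0] * polyphony

    def voice_on(self, vvid):
        voice = self.free.pop(0) if self.free else NO_VOICE
        self.host.vvid_entries[vvid] = voice       # instant lookup later

    def pitch(self, vvid, value):
        voice = self.host.vvid_entries[vvid]       # no hashing, no search
        if voice != NO_VOICE:
            self.voice_pitch[voice] = value

host = Host()
synth = Synth(host)
synth.voice_on(42)
synth.voice_on(17)
synth.pitch(17, 64.0)     # addresses the second voice directly
```

A synth restricted to fake VVIDs would have to keep its own vvid-to-voice
map instead of the direct `vvid_entries` index, which is exactly the extra
cost described above.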


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-16 Thread Fons Adriaensen
The whole discussion about VVIDs has become a rather complicated
web of opinions and examples that sometimes are understood, and
sometimes not. This is how I see it.


Why we need explicit VVIDs.

With MIDI, you can have

1. A mono synth. If there is any relation between a new note
and another one, it's always clear which one is meant (the
previous one). This allows things like, for example, not restarting
an ADSR if you play a second note before releasing the previous one. 

2. A poly synth. Here normally 'a new note is a new note', and
things like the effect described above are not possible because
the synth does not know the relations between the existing set
of notes and any new ones. Another example: you play a 3-note chord,
and then a second one, and you want notes to slide individually from
the first chord to the second. Once your masterpiece is in MIDI format
it's impossible to find out which notes are related.

Of course, if you try to play this on a keyboard, you cannot even
express what you want, but that's only a limitation of the
interface, and should not imply that it can't be done. If you look
beyond the traditional 'pop' music scene, lots of composers are using
other means to enter their scores, such as scripts or even algorithms.

What should be clear from this is that, as a result of the
limitations of MIDI, a poly synth is *not* the same thing as a set of
mono synths.

If you want that (polyphony by a set of mono synths) the only way to
get it is by abusing the channel mechanism. This forces you to work 
in a way that is completely different from normal poly mode, which
is extremely impractical. Anyway, channels are not meant for this,
they are meant to multiplex data intended for different devices over
a single cable.

The explicit use of VVIDs would allow us to unify the interface to the
'normal' (in the MIDI sense) polyphonic synth, and the 'set of
monophonic synths'.

And it would indeed allow the player to take the normally automatic
voice assignment into his own hands, but it does *not* force him to
do so.

A lot more could be said, but I have to go.

-- 
Fons Adriaensen






Re: [linux-audio-dev] Catching up with XAP

2003-01-16 Thread David Olofson
On Thursday 16 January 2003 01.14, Tim Hockin wrote:
[...]
> > The only problem I have with it is that it's completely
> > irrelevant to continuous control synths - but they can just ignore
> > it, or not have the control at all.
>
> Does continuous control mean continuous sound?

No. A synth has to be able to shut up, I think. If the VOICE control 
is compulsory, it might make sense for continuous control synths to 
use it as a gate, but "normal operation" of such synths would make 
use of continuous controls only.


> > > This is not at all what I see as intuitive.  VOICE is a
> > > separate control used ONLY for voice control.  Instruments have
> > > it.  Effects do not.
> >
> > There's this distinct FX vs instrument separation again. What is
> > the actual motivation for enforcing that these are kept totally
> > separate?
>
> They are not totally separate, but VOICE control is something
> unique to Instruments.

Yeah... Something that has VOICE controls is a synth?

But what about continuous control synths? If VOICE isn't continuous, 
it's of no use to such synths, unless we require that they implement 
it in *some* way, whether it makes sense or not.

Anyway, I don't see why this matters, really, as a plugin is just a 
synth because it is by some definition - not because it's using a 
different API. We could call anything that makes use of a particular 
API feature a synth, but it doesn't seem like the VOICE control, or 
VVIDs could be that feature.


> > > > What might be confusing things is that I don't consider
> > > > "voice" and "context" equivalent - and VVIDs refer to
> > > > *contexts* rather
> > >
> > > I disagree - a VVID refers to a voice at some point in time.  A
> > > context can not be re-used.  Once a voice is stopped and the
> > > release has ended, that VVID has expired.
> >
> > Why? Is there a good reason why a synth must not be allowed to
> > function like the good old SID envelope generator, which can be
> > switched on and off as desired?
>
> They still can - it's just the responsibility of the synth to know
> this, and not the sequencer or user.

I'm not talking about responsibility, but the ability to say when you 
want to make use of such features, if they exist. You can't do that 
without either VVIDs, or some explicit extra feature.


> > Also, remember that there is nothing binding two notes at the
> > same pitch together with our protocol, since (unlike MIDI) VVID
> > != pitch. This means that a synth cannot reliably handle a new
> > note starting before the release phase of a previous pitch has
> > ended. It'll just have to allocate a new voice, completely
> > independent of the old voice, and that's generally *not* what you
> > want if you're trying to emulate real instruments.
>
> ack!  So now you WANT old MIDI-isms? For new instruments (which do
> not want to feel like their MIDI brethren) this is EXACTLY what we
> want.  For instruments which are supposed to behave like old MIDI
> synths, that is the responsibility of the synth to handle, NOT the
> API or sequencer or user.

MIDI doesn't get this right; it only works for mono synths, and only 
for notes at the same pitch for poly synths. In MIDI, note->note 
relations are implicit and "useful" only by luck, basically.

What I'm suggesting is not this same brokenness, but a way to 
*explicitly* say if you intend notes to be related or not.


> > For example, if you're playing the piano with the sustain pedal
> > down, hitting the same key repeatedly doesn't really add new
> > strings for that note, does it...?
>
> But should we force that on the API?

It's not forced, really. You can say that one note is somehow related 
to a previous note, or that it's a completely new note - regardless 
of arbitrary controls, such as PITCH.


> No, we should force that on
> the synth.

You *could* do it for this particular case by looking at PITCH, but 
that works only when the note->note relation is implied by "same 
PITCH", and breaks down if you try to use adaptive scales (like 12t 
with "dynamic" temperament) or anything like that.


[...]
> > >  If your
> > > instrument has a glissando control, use it.  It does the right
> > > thing.
> >
> > How? It's obvious for monophonic synths, but then, so many other
> > things are. Polyphonic synths are more complicated, and I'm
> > rather certain that the player and/or controller knows better
> > which note should slide to which when you switch from one chord to
> > another. Anything else will result in "random glissandos in all
> > directions", since the synth just doesn't have enough
> > information.
>
> Unless I am musically mistaken, a glissando is not a slide.  If you
> want to do chord slides, you should program it as such.

Yes, you're right. I'm thinking about portamento. (Glissando means 
you mark each scale tone or semitone, depending on instrument. It's 
never a note->note *slide*, right?)


> > > Reusing a VVID seems insane to me.  It just doesn't jive with
> > > anything I can comprehend as approaching reality.

Re: [linux-audio-dev] Catching up with XAP

2003-01-16 Thread Steve Harris
On Wed, Jan 15, 2003 at 04:14:23 -0800, Tim Hockin wrote:
> I'm breaking these into two emails for the two subjects.  I'm replying to
> each subject in one big reply - so you can see the evolution of my position :)
> 
> *** From: David Olofson <[EMAIL PROTECTED]>
> > > The trigger is a virtual control which really just says whether the
> > > voice is on or not.  You set up all your init-latched controls in
> > > the init window, THEN you set the voice on.
> > 
> > The only problem I have with it is that it's completely irrelevant to 
> > continous control synths - but they can just ignore it, or not have 
> > the control at all.
> 
> Does continuous control mean continuous sound?

No, because one of the controls is often gate or amplitude.
 
> 2) continuous control - this includes things like a violin, which receives
> streams of parameters.  This gets a VVID for each new voice.  If you want
> glissando, you would tell the violin synth that fact, and it would handle
> it.

> Mono-synths need a VOICE, too.  Modular synth modules might not.  They are
> effectively ON all the time, just silent because of some other control.
> This works for modular-style synths, because they are essentially pure
> oscillators, right?  Mono-synths can still have init-latched values, and
> releases etc.  Does it hurt anything to give modular synths a VOICE control, 
> if it buys consistency at the higher levels?  Maybe - I can be swayed on
> that.  But a modular synth is NOT the same as just any mono-synth.

(Analogue) monosynths do not have init latched values. I guess if you're
trying to mimic a digital monosynth you might want a VOICE, but I can't
see how it would be anything but confusing when you're trying to implement
a monosynth model. There are also effects that might want to receive note
information, but wouldn't want (or expect) to have to deal with voices,
eg. pitch shift.

Isn't the easiest thing just to make the instrument declare whether it's
polyphonic or not? If it is (NB it can have a polyphony of one) it will
receive VVIDs; if not, it won't.

- Steve 



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Tim Hockin
I'm breaking these into two emails for the two subjects.  I'm replying to
each subject in one big reply - so you can see the evolution of my position :)

*** From: David Olofson <[EMAIL PROTECTED]>
> > The trigger is a virtual control which really just says whether the
> > voice is on or not.  You set up all your init-latched controls in
> > the init window, THEN you set the voice on.
> 
> The only problem I have with it is that it's completely irrelevant to 
> continous control synths - but they can just ignore it, or not have 
> the control at all.

Does continuous control mean continuous sound?

> > This is not at all what I see as intuitive.  VOICE is a separate
> > control used ONLY for voice control.  Instruments have it.  Effects
> > do not.
> 
> There's this distinct FX vs instrument separation again. What is the 
> actual motivation for enforcing that these are kept totally separate?

They are not totally separate, but VOICE control is something unique to
Instruments.

> > > What might be confusing things is that I don't consider "voice"
> > > and "context" equivalent - and VVIDs refer to *contexts* rather
> >
> > I disagree - a VVID refers to a voice at some point in time.  A
> > context can not be re-used.  Once a voice is stopped and the
> > release has ended, that VVID has expired.
> 
> Why? Is there a good reason why a synth must not be allowed to 
> function like the good old SID envelope generator, which can be 
> switched on and off as desired?

They still can - it's just the responsibility of the synth to know this, and
not the sequencer or user.

> Also, remember that there is nothing binding two notes at the same 
> pitch together with our protocol, since (unlike MIDI) VVID != pitch. 
> This means that a synth cannot reliably handle a new note starting 
> before the release phase of a previous pitch has ended. It'll just 
> have to allocate a new voice, completely independent of the old 
> voice, and that's generally *not* what you want if you're trying to 
> emulate real instruments.

ack!  So now you WANT old MIDI-isms?  For new instruments (which do not want
to feel like their MIDI brethren) this is EXACTLY what we want.  For
instruments which are supposed to behave like old MIDI synths, that is the
responsibility of the synth to handle, NOT the API or sequencer or user.

> For example, if you're playing the piano with the sustain pedal down, 
> hitting the same key repeatedly doesn't really add new strings for 
> that note, does it...?

But should we force that on the API?  No, we should force that on the synth.

> And when you enter the release phase? I have yet to see a MIDI synth 
> where voices stop responding to pitch bend and other controls after 
> NoteOff, and although we're talking about *voice* controls here, I 
> think the same logic applies entirely.
> 
> Synths *have* to be able to receive control changes for as long as a 
> voice could possibly be producing sound, or there is a serious 
> usability issue.

OK, I'll buy this.  A VOICE_OFF does not automatically mean no more events.
I can deal with that.
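
A minimal sketch of the behaviour agreed here (all structures invented for
illustration): after VOICE_OFF, a voice keeps responding to controls for as
long as its release tail could be sounding.

```python
# A voice that accepts control changes in any audible stage, including
# the release phase after VOICE_OFF. Names are hypothetical, not XAP API.

class Voice:
    def __init__(self):
        self.stage = "idle"
        self.pitch = 0.0

    def voice(self, on):
        # VOICE_OFF starts the release; it does not end event handling.
        self.stage = "sustain" if on else "release"

    def control_pitch(self, value):
        if self.stage != "idle":    # still audible: the control applies
            self.pitch = value

v = Voice()
v.voice(True)
v.control_pitch(60.0)
v.voice(False)            # note off: release phase begins
v.control_pitch(62.0)     # a bend still applies during the release tail
```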

> >  If your
> > instrument has a glissando control, use it.  It does the right
> > thing.
> 
> How? It's obvious for monophonic synths, but then, so many other 
> things are. Polyphonic synths are more complicated, and I'm rather 
> certain that the player and/or controller knows better which note 
> should slide to which when you switch from one chord to another. 
> Anything else will result in "random glissandos in all directions", 
> since the synth just doesn't have enough information.

Unless I am musically mistaken, a glissando is not a slide.  If you want to
do chord slides, you should program it as such.

> > Reusing a VVID seems insane to me.  It just doesn't jive with
> > anything I can comprehend as approaching reality.
> 
> MIDI sequencers are reusing "IDs" all the time, since they just don't 
> have a choice, the way the MIDI protocol is designed. Now, all of a 
> sudden, this should no longer be *possible*, at all...?

I don't see what it ACHIEVES besides complexity.  This is twice in one email
you tout MIDI brokenness as a feature we need to have.  You're scaring me!

> Either way, considering the polyphonic glissando example, VVIDs 
> provide a dimension of addressing that is not available in MIDI, and 
> that seems generally useful. Why throw it away for no technical (or 
> IMHO, logical) reason?

I had a sleep on it, and I am contemplating adjusting my thought processes,
but your glissando example is not helping :)

> > Stop button is different than not sending a note-off.  Stop should
> > automatically send a note-off to any VVIDs.  Or perhaps more
> > accurately, it should send a stop-all sound event.

I was wrong - it should NOT send a stop-all.  It should stop any notes it
started.  MIDI, line-in monitoring, etc should not stop, or should have a
different stop button.

> I disagree. I think most people expect controls to respond during the 
> full dura

Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Frank van de Pol
On Wed, Jan 15, 2003 at 11:29:20PM +0100, David Olofson wrote:
> 
> Because that's just the way it is, even if you can "stretch" the 
> concept slightly. Ever implemented a MIDI synth?

In fact I did :-)


> 
> > If you doubt, feel free to come over to my studio and hear my AKAI
> > sampler play multiple times the same sample at the same pitch :-)
> 
> I have hardware that does that as well, but it doesn't demonstrate 
> anything more than possibly a minor hole in the MIDI specification. 
> AFAIK, there is no official statement as to whether synths should do 
> this or not, and either way, you'll find synths doing it in several 
> different ways. "Restart" and "new voice" are just two possibilities. 
> (I've mentioned other alternatives previously.)
> 
> Anyway, yes, many synths and samplers allocate new voices when you 
> send multiple NoteOns for the same pitch, but:
> 
>   1. For many sounds, this is simply *incorrect behavior*.
>  Examples would be many percussion instruments, most
>  string instruments with fixed per-string tuning,
>  most pipe, tube, electromechanical and other organs,...
> 
>   2. What happens when you send Poly Pressure...? One of two
>  things: a) the synth screws up and applies the effect
>  on a "random" voice, or b) the synth applies the effect
>  on all voices playing on that pitch. 
> 
>   3. What happens when you send NoteOff? Well, I have yet
>  to see a synth that even tries to match NoteOns and
>  NoteOffs - and it would be rather random anyway. What
>  happens is that the synth stops *all* notes playing
>  that pitch on the channel.
> 
>   4. If we were to use separate events for VOICE_ON and
>  VOICE_OFF, nothing would prevent XAP synths from doing
>  the same thing. (However useless it is, when pitch is
>  separated from VVID.)
> 

I agree with you David.


> 
> > I see the use of the VVIDs but for some reason I get an
> > uncomfortable feeling seeing it; it just reminds me of over
> > engineering and adding unneeded complexity.
> 
> So, how do you propose we deal with voice/note addressing? Take the 
> MIDI approach, and forget about continuous pitch...?
> 
> 
> > I'm quite glad my MIDI
> > devices are smart enough to do their voice allocation
> 
> And XAP plugins would be no different in any way. VVIDs are just a 
> more powerful, but not really fundamentally different addressing 
> method.
> 
> This is not about voice allocation, but about voice *addressing*. 
> I've stated many times before that I specifically *do not* want 
> senders to have anything to do with the details of voice allocation.
> 
> 
> > Sorry, couldn't resist it.
> > Frank.
> 
> Sorry, but I still claim that MIDI note pitch is equivalent to VVIDs 
> when it comes to voice management. VVIDs are just more powerful. :-)
> 

In MIDI all of this is typically worked around by using multiple channels
using the same sounds. I understand your point and must admit that the VVIDs
are indeed very powerful. 

Frank.


-- 
+ --- -- -  -   -- 
| Frank van de Pol  -o)A-L-S-A
| [EMAIL PROTECTED]/\\  Sounds good!
| http://www.alsa-project.org  _\_v
| Linux - Why use Windows if we have doors available?



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 21.56, Frank van de Pol wrote:
> On Wed, Jan 15, 2003 at 01:07:30PM +0100, David Olofson wrote:
> 
>
> > With MIDI, this is obvious, since VVID == note pitch. It's not
> > that easy with our protocol, and I don't think it's a good idea
> > to turn a vital feature like this into something that synths will
> > have to implement through arbitrary hacks, based on the PITCH
> > control. (Hacks that may not work at all, unless the synth is
> > aware of which scale you're using, BTW.)
>
> Sorry for nitpicking, but why do you presume MIDI presents any
> relation between VVID and note pitch?

Because that's just the way it is, even if you can "stretch" the 
concept slightly. Ever implemented a MIDI synth?


> Though valid in many cases,
> it is definitely not always true.
>
> In MIDI the 'note on' event starts a note, and 'note off' (which
> can be note on with velocity == 0) ends it. The note pitch is
> actually just an attribute for that note.

It's an attribute alright, but no polyphonic synth can operate 
correctly without also using it as a VID.


> If VVID == note pitch were true, the note would be a singleton,

That's *your* definition; not mine. :-)


> and triggering the same note multiple times would never be
> possible. Note that I'm not saying _re_triggering :-)

Well, the way I think about VVIDs, indeed, you can't trigger multiple 
notes on the same VVID, but that's really only because I like to 
think of VOICE as a control - and you can't set it to 1 more than 
once without setting it to 0 in between.

With separate VOICE_ON and VOICE_OFF events that restriction would 
not apply, but what would be the point? If you really want multiple 
notes at the same pitch, just play them using separate VVIDs. That's 
in fact one of the major points with separating VVID from pitch - and 
it has the distinct advantage over MIDI that you still have *full* 
control over each note playing.
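
A sender-side sketch of that point (the event-tuple format and names are
invented, not from any XAP draft): two notes at the same pitch get distinct
VVIDs, so each stays fully controllable.

```python
# Two simultaneous notes at the same pitch, addressed by separate VVIDs.
# With MIDI addressing, where pitch doubles as the voice ID, the final
# per-note bend below would be ambiguous.

def play_unison(send, vvid_a, vvid_b, pitch):
    for vvid in (vvid_a, vvid_b):
        send((vvid, "PITCH", pitch))
        send((vvid, "VOICE", 1))          # gate on
    # Detune only the second note; the first is untouched.
    send((vvid_b, "PITCH", pitch + 0.5))

log = []
play_unison(log.append, vvid_a=1, vvid_b=2, pitch=60.0)
```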


> If you doubt, feel free to come over to my studio and hear my AKAI
> sampler play multiple times the same sample at the same pitch :-)

I have hardware that does that as well, but it doesn't demonstrate 
anything more than possibly a minor hole in the MIDI specification. 
AFAIK, there is no official statement as to whether synths should do 
this or not, and either way, you'll find synths doing it in several 
different ways. "Restart" and "new voice" are just two possibilities. 
(I've mentioned other alternatives previously.)

Anyway, yes, many synths and samplers allocate new voices when you 
send multiple NoteOns for the same pitch, but:

1. For many sounds, this is simply *incorrect behavior*.
   Examples would be many percussion instruments, most
   string instruments with fixed per-string tuning,
   most pipe, tube, electromechanical and other organs,...

2. What happens when you send Poly Pressure...? One of two
   things: a) the synth screws up and applies the effect
   on a "random" voice, or b) the synth applies the effect
   on all voices playing on that pitch. 

3. What happens when you send NoteOff? Well, I have yet
   to see a synth that even tries to match NoteOns and
   NoteOffs - and it would be rather random anyway. What
   happens is that the synth stops *all* notes playing
   that pitch on the channel.

4. If we were to use separate events for VOICE_ON and
   VOICE_OFF, nothing would prevent XAP synths from doing
   the same thing. (However useless it is, when pitch is
   separated from VVID.)
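
Point 3 can be illustrated with a few lines (the voice table is invented
for illustration; this is generic MIDI-synth behaviour, not XAP):

```python
# With MIDI, pitch is effectively the voice address, so NoteOff cannot
# single out one of several voices playing the same pitch - a typical
# synth simply stops all of them.

def note_off(voices, channel, pitch):
    for v in voices:
        if v["chan"] == channel and v["pitch"] == pitch:
            v["on"] = False          # every matching voice is stopped

voices = [
    {"chan": 0, "pitch": 60, "on": True},
    {"chan": 0, "pitch": 60, "on": True},   # second NoteOn, same pitch
    {"chan": 0, "pitch": 64, "on": True},
]
note_off(voices, 0, 60)
# Both pitch-60 voices are now off; only the pitch-64 voice survives.
```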


> I see the use of the VVIDs but for some reason I get an
> uncomfortable feeling seeing it; it just reminds me of over
> engineering and adding unneeded complexity.

So, how do you propose we deal with voice/note addressing? Take the 
MIDI approach, and forget about continuous pitch...? 


> I'm quite glad my MIDI
> devices are smart enough to do their voice allocation

And XAP plugins would be no different in any way. VVIDs are just a 
more powerful, but not really fundamentally different addressing 
method.

This is not about voice allocation, but about voice *addressing*. 
I've stated many times before that I specifically *do not* want 
senders to have anything to do with the details of voice allocation.


> Sorry, couldn't resist it.
> Frank.

Sorry, but I still claim that MIDI note pitch is equivalent to VVIDs 
when it comes to voice management. VVIDs are just more powerful. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Frank van de Pol
On Wed, Jan 15, 2003 at 01:07:30PM +0100, David Olofson wrote:

>
> With MIDI, this is obvious, since VVID == note pitch. It's not that 
> easy with our protocol, and I don't think it's a good idea to turn a 
> vital feature like this into something that synths will have to 
> implement through arbitrary hacks, based on the PITCH control. (Hacks 
> that may not work at all, unless the synth is aware of which scale 
> you're using, BTW.)
> 

Sorry for nitpicking, but why do you presume MIDI presents any relation
between VVID and note pitch? Though valid in many cases, it is definitely
not always true.

In MIDI the 'note on' event starts a note, and 'note off' (which can be
note on with velocity == 0) ends it. The note pitch is actually just
an attribute of that note.

If VVID == note pitch were true, the note would be a singleton, and
triggering the same note multiple times would never be possible. Note that
I'm not saying _re_triggering :-)

If you doubt, feel free to come over to my studio and hear my AKAI sampler
play multiple times the same sample at the same pitch :-)

I see the use of the VVIDs but for some reason I get an uncomfortable
feeling seeing it; it just reminds me of over engineering and adding
unneeded complexity. I'm quite glad my MIDI devices are smart enough to do
their voice allocation

Sorry, couldn't resist it.
Frank.

-- 
+ --- -- -  -   -- 
| Frank van de Pol  -o)A-L-S-A
| [EMAIL PROTECTED]/\\  Sounds good!
| http://www.alsa-project.org  _\_v
| Linux - Why use Windows if we have doors available?



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 18.10, Steve Harris wrote:
> On Wed, Jan 15, 2003 at 03:43:52 +0100, David Olofson wrote:
> > Another observation:
> >
> > There are two ways you could start notes on a monophonic synth:
> > 1. Use the same VVID for all notes
> > 2. Use a new VVID for each note.
>
> I don't think that a (typical) monosynth should have or use VVIDs at
> all.

No, and it doesn't have to. It can still respond to VVID allocation 
events by just assuming they're about this single, physically 
non-existent, fake VVID. All you need to know is when the sender 
wants you to start a new context, and that's what the VVID allocation 
event tells you.
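[Editor's note: a minimal sketch of what such a mono synth's event handling could look like. XAP was never finalized, so the event and type names below are invented for illustration only.]

```c
#include <assert.h>

/* Hypothetical XAP-style voice events (names invented for illustration). */
typedef enum { EV_VVID_ALLOC, EV_VOICE_ON, EV_VOICE_OFF, EV_PITCH } EvType;

typedef struct { EvType type; int vvid; float value; } Event;

/* A mono synth: one voice, the VVID field is ignored entirely. */
typedef struct { int gate; float pitch; } MonoSynth;

static void mono_handle(MonoSynth *s, const Event *e)
{
    switch (e->type) {
    case EV_VVID_ALLOC:
        /* Sender wants a fresh context; the synth just resets its
         * single, implicit "fake" voice - e->vvid is never examined. */
        s->gate = 0;
        break;
    case EV_VOICE_ON:  s->gate = 1;         break;
    case EV_VOICE_OFF: s->gate = 0;         break;
    case EV_PITCH:     s->pitch = e->value; break;
    }
}
```

The point being that the synth needs no VVID table at all; every allocation event simply means "start over".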


> However I do now see your piano example; if we can implement this
> cleanly it's a natural way for the gong example to work too.

Can't see any real issues with it so far. You basically just have 
each voice of a poly synth act as a mono synth.

The closest thing to an issue is that you need to implement *some* 
form of "continue" feature for the synth to do something sensible 
when a sender uses this feature. This could actually just be a 
recommendation to synth authors, but I think it should be easy 
enough to do something sensible in practically any synth.

The nice way for a sampler or similar to handle it would be to set 
the current voice to a quick declick release, while grabbing a new 
voice for the new note. The old voice will be released as soon 
as the declick envelope has finished. Obviously, you can implement 
that as part of each voice or something like that as well. Point is 
that reusing a VVID effectively makes the context steal its own 
voice. *How* this is done is an implementation issue - but "nice" 
voice stealing is something every serious synth must have anyway. 
(This even goes for some kinds of mono synths.)
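[Editor's note: the "nice" voice-stealing-on-VVID-reuse idea above could be sketched roughly as follows. All names, the declick length, and the allocation policy are invented; the real mechanism would be entirely synth-specific.]

```c
#include <assert.h>

#define DECLICK_FRAMES 64  /* assumed length of the quick fade-out */
#define NVOICES 8

typedef struct Voice {
    int   active;
    int   declick;  /* frames left of declick fade; 0 = playing normally */
    int   vvid;     /* context this voice currently belongs to */
    float pitch;
} Voice;

static Voice voices[NVOICES];

static Voice *alloc_voice(void)
{
    for (int i = 0; i < NVOICES; ++i)
        if (!voices[i].active)
            return &voices[i];
    return &voices[0];  /* a real synth would steal more cleverly */
}

/* New note on an already-playing context: put the old voice into a
 * quick declick release, then grab a fresh voice for the new note. */
static Voice *note_on(int vvid, float pitch)
{
    for (int i = 0; i < NVOICES; ++i)
        if (voices[i].active && voices[i].vvid == vvid)
            voices[i].declick = DECLICK_FRAMES;
    Voice *v = alloc_voice();
    v->active  = 1;
    v->declick = 0;
    v->vvid    = vvid;
    v->pitch   = pitch;
    return v;
}
```

Once the declick envelope runs out, the audio loop (not shown) would mark the old voice inactive and return it to the pool.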

If you're *real* lazy, you can just treat new notes while the voice 
is playing by restarting the envelopes. Given that you handle pitch 
in the usual way (i.e. as continuous pitch - like MIDI pitch + pitch 
bend, basically), you get primitive legato when reusing VVIDs. Pretty 
much a free bonus feature.

A virtual analog synth would just retrig envelopes and stuff, but 
(depending on the patch) perhaps not reset the oscillator phase. 
(Rather similar to the "free legato", that is.) Entirely 
implementation and patch dependent, though - this is just an example. 

Glissando and similar pitch effects would be handled in similar ways.

The common logic here is that the current state of the context 
effectively becomes parameters for the new note - and this applies to 
both mono and poly synths.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Steve Harris
On Wed, Jan 15, 2003 at 03:43:52 +0100, David Olofson wrote:
> Another observation:
> 
> There are two ways you could start notes on a monophonic synth:
>   1. Use the same VVID for all notes
>   2. Use a new VVID for each note.

I don't think that a (typical) monosynth should have or use VVIDs at all.

However I do now see your piano example; if we can implement this cleanly
it's a natural way for the gong example to work too. 

- Steve



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 16.54, Fons Adriaensen wrote:
> Hi all. I joined the list today, and have been following the
> vivid discussion on VVIDs with interest.

Welcome! :-)


[...polyphony, glisando, reusing VVIDs etc...]
> One thing you can't express with (polyphonic) MIDI is the following
> : Suppose you have a note A playing, and you want to start another
> one (B) 'in the context of A'. What this means is that B will
> probably (but not necessarily) terminate A, and that the attack
> phase of B will be influenced by the fact that it 'replaces' A.
> This may be a glissando, but it could be something much more
> complicated than that.

This is *exactly* the kind of stuff I'm thinking about. I'm hoping 
that we can do away with these limitations of MIDI, instead of 
sticking with the same old, ugly hacks to do this kind of stuff.


> As an example, consider what happens when a violin player changes
> the note he's playing while remaining on the same string. If he
> would play the second note on another string, the transition would
> sound completely different.

Actually, these kinds of instruments are generally better simulated 
using multiple Channels, mostly because each string has its own 
distinct sound, even when they all play the same pitch.

Anyway, that's a special case. It does not apply to instruments with 
fixed pitch per string (like pianos), and certainly not to all "real" 
synths, since they can choose to do all sorts of weird stuff rather 
than just simulating real instruments.


> Using the terminology of the present discussion, what you want is
> that the VVID that was playing A should get the NOTE ON event and
> all associated parameters for B. What exactly it will do depends on
> the patch, the current state of A, and the parameters for B. The
> essential point is that whatever generates B should be aware of the
> relation with A.

Exactly.


> On a mono synth you can do this sort of thing, even with MIDI,
> because the relation is implicit. With polyphonic MIDI, it's
> impossible, except in some cases - as was pointed out before, when
> you repeat the same note.
>
> So the association between VVIDs and notes should be made by the
> player (caller), not by the instrument (plugin).

Yes.


[...]
>  > There should obviously be some sort of "panic button" feature
>  > *as well*, but I don't see why it makes any sense to hardwire it
>  > from the sequencer "stop button". "All Notes Off", "Stop All
>  > Sound" and the like are emergency features that are not used
>  > normally.
>
> Again I tend to agree with David. Stopping the stream of events
> should not imply that all sounds stop, even if this may be what you
> want most of the time.

Well, most of the time, I want everything to behave like real 
hardware - and that basically means keep running until someone throws 
the power switch. So, no, I rarely want *all* sounds to stop; not 
even all notes, actually, since some of them may be under the control 
of an external controller rather than the sequencer. I certainly 
don't want the sequencer to kill the keyboard just because I happen 
to jam past the end of the sequence! (Seen that "feature" in action, 
and it's nothing but useless and annoying.)


BTW, sending "Reset All Controls" to Audiality 0.1.1 will result in 
all FX plugins being removed, and routing being reset to "factory 
defaults". Definitely not something I'd like it to do every time the 
sequencer is stopped! Since the Audiality "mixer over MIDI" interface 
is inherently non-trivial to deal with, it's nice to be able to reset 
and start over without restarting the synth. (Especially since 
reloading and rerendering all the sounds can take a while...) This is 
what "Reset All Controls" is for, and at least, the sequencers I've 
been using *only* send that when you tell them to.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Fons Adriaensen

Hi all. I joined the list today, and have been following the
vivid discussion on VVIDs with interest. On the whole, I agree
with David Olofson. There are a number of limitations in the 
MIDI protocol and it should not be the model for any new API.


David Olofson writes:
 > On Wednesday 15 January 2003 14.46, Steve Harris wrote:
 > [...]
 > > > > Starting a new note on a VVID when a previous note is still in
 > > > > the release phase would cause a glissando, while if the VVID has
 > > > > no playing voice, one would be activated and started as needed
 > > > > to play a new note. The sender can't reliably know which action
 > > > > will be taken for each new note, so it really *has* to be left
 > > > > to the synth to decide. And for this, the lifetime of
 > > > > VVIDs/contexts need to span zero or more notes, with no upper
 > > > > limit.
 > > >
 > > > I don't follow you at all - a new note is a new note.  If your

It's not always that simple (see below).

 > > > instrument has a glissando control, use it.  It does the right
 > > > thing.  Each new note gets a new VVID.
 > >
 > > I agree with Tim about this.
 > 
 > What I'm not getting is how this is supposed to work with polyphonic 
 > synths. Or isn't it, just because it doesn't really work with MIDI 
 > synths...?
 > 
 > ...

One thing you can't express with (polyphonic) MIDI is the following:
Suppose you have a note A playing, and you want to start another one
(B) 'in the context of A'. What this means is that B will probably
(but not necessarily) terminate A, and that the attack phase of B
will be influenced by the fact that it 'replaces' A. This may be
a glissando, but it could be something much more complicated than
that.

As an example, consider what happens when a violin player changes 
the note he's playing while remaining on the same string. If he would
play the second note on another string, the transition would sound
completely different.

Using the terminology of the present discussion, what you want is that
the VVID that was playing A should get the NOTE ON event and all
associated parameters for B. What exactly it will do depends on the
patch, the current state of A, and the parameters for B. The essential
point is that whatever generates B should be aware of the relation
with A.

On a mono synth you can do this sort of thing, even with MIDI,
because the relation is implicit. With polyphonic MIDI, it's
impossible, except in some cases - as was pointed out before, when
you repeat the same note.

So the association between VVIDs and notes should be made by the 
player (caller), not by the instrument (plugin).

 
 > > > Stop button is different than not sending a note-off.  Stop
 > > > should automatically send a note-off to any VVIDs.  Or perhaps
 > > > more accurately, it should send a stop-all sound event.
 > >
 > > Yes, I think you want all sound generating things to shut up if you
 > > send stop all, not just note based things.
 > 
 > That depends on what kind of setup you have. If you're running 
 > monitor sound through the software, stopping all effects means all 
 > monitor sound dies when you stop the sequencer. This applies whether 
 > you're monitoring external audio or output from synths, and it just 
 > doesn't seem logical to me. Stopping the sequencer is in no way 
 > equivalent to killing the whole net, IMNSHO, and whether or not some 
 > hosts will still do this, the API should definitely not enforce it.
 > 
 > There should obviously be some sort of "panic button" feature *as 
 > well*, but I don't see why it makes any sense to hardwire it from the 
 > sequencer "stop button". "All Notes Off", "Stop All Sound" and the 
 > like are emergency features that are not used normally.


Again I tend to agree with David. Stopping the stream of events should
not imply that all sounds stop, even if this may be what you want most
of the time.


-- 
Fons Adriaensen
Alcatel Space




Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 15.28, David Olofson wrote:
> On Wednesday 15 January 2003 14.46, Steve Harris wrote:
> > > I don't follow you at all - a new note is a new note.  If your
> > > instrument has a glissando control, use it.  It does the right
> > > thing.  Each new note gets a new VVID.
> >
> > I agree with Tim about this.

Another observation:

There are two ways you could start notes on a monophonic synth:
1. Use the same VVID for all notes
2. Use a new VVID for each note.

If the synth actually looks at VVIDs, it could interpret these 
differently. For example:

Same VVID:  Retrig envelope, slide pitch etc.
New VVID:   Full reset; restart oscillators etc.


Now, looking at the default behavior of mono synths ignoring VVIDs, 
what does that imply? Well, since a mono synth generally has only one 
voice, it seems more or less obvious that the "same VVID" case is 
what's implied when VVIDs are ignored. This is also obvious when you 
look at the sender side; you need only one VVID to drive a mono 
synth, so why should you keep reassigning it to the same voice all 
the time?
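[Editor's note: the "same VVID vs. new VVID" distinction above might boil down to something like the following per-voice logic. Names are invented; this is only a sketch of the idea, not XAP API.]

```c
#include <assert.h>

enum { ENV_ATTACK = 0 };  /* first envelope stage (illustrative) */

typedef struct {
    int   vvid;       /* context this voice is bound to */
    float pitch;
    int   env_stage;
    int   osc_phase;  /* oscillator phase accumulator */
} Voice;

static void start_note(Voice *v, int vvid, float pitch)
{
    if (vvid == v->vvid) {
        /* Same VVID: retrig the envelope, slide to the new pitch,
         * but leave the oscillator running. */
        v->env_stage = ENV_ATTACK;
        v->pitch = pitch;  /* a real synth would glide here */
    } else {
        /* New VVID: full reset; restart the oscillator too. */
        v->vvid = vvid;
        v->env_stage = ENV_ATTACK;
        v->osc_phase = 0;
        v->pitch = pitch;
    }
}
```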


Anyway, it would make more sense to me if mono synths were basically 
just poly synths that support at most one voice, than if they were 
completely special-cased WRT voice control.

If you can control the voice of a mono synth with a single VVID (or 
an implicit, fixed VVID, if the synth doesn't even check the VVID 
field), why not control each voice of a poly synth in the same manner?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 14.46, Steve Harris wrote:
[...]
> > > Starting a new note on a VVID when a previous note is still in
 > > > the release phase would cause a glissando, while if the VVID has
> > > no playing voice, one would be activated and started as needed
> > > to play a new note. The sender can't reliably know which action
> > > will be taken for each new note, so it really *has* to be left
> > > to the synth to decide. And for this, the lifetime of
> > > VVIDs/contexts need to span zero or more notes, with no upper
> > > limit.
> >
> > I don't follow you at all - a new note is a new note.  If your
> > instrument has a glissando control, use it.  It does the right
> > thing.  Each new note gets a new VVID.
>
> I agree with Tim about this.

What I'm not getting is how this is supposed to work with polyphonic 
synths. Or isn't it, just because it doesn't really work with MIDI 
synths...?

Anyway, another example: It's rather common for mono synths to just 
retrig the envelope when a new note is starting before the current 
note has ended. Polyphonic synths often do this on a per-key basis, 
but due to the restrictions of MIDI, they obviously cannot take it 
further than that.

Why enforce that notes on polyphonic synths are completely unrelated, 
when they're not on monophonic synths, and don't have to be on 
polyphonic synths either? The *only* reason why MIDI can't do this is 
that it abuses note pitch for VVID, and we don't have that 
restriction - unless we for some reason decide it's just not allowed 
to make use of this fact.


> > Stop button is different than not sending a note-off.  Stop
> > should automatically send a note-off to any VVIDs.  Or perhaps
> > more accurately, it should send a stop-all sound event.
>
> Yes, I think you want all sound generating things to shut up if you
> send stop all, not just note based things.

That depends on what kind of setup you have. If you're running 
monitor sound through the software, stopping all effects means all 
monitor sound dies when you stop the sequencer. This applies whether 
you're monitoring external audio or output from synths, and it just 
doesn't seem logical to me. Stopping the sequencer is in no way 
equivalent to killing the whole net, IMNSHO, and whether or not some 
hosts will still do this, the API should definitely not enforce it.

There should obviously be some sort of "panic button" feature *as 
well*, but I don't see why it makes any sense to hardwire it from the 
sequencer "stop button". "All Notes Off", "Stop All Sound" and the 
like are emergency features that are not used normally.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Steve Harris
On Wed, Jan 15, 2003 at 01:42:27 -0800, Tim Hockin wrote:
> > This is very "anti modular synth". NOTE/VOICE/GATE is a control type 
> > hint. I see no reason to imply that it can only be used for a certain 
> > kind of controls, since it's really just a "name" used by users 
> > and/or hosts to match ins and outs.
> 
> This is not at all what I see as intuitive.  VOICE is a separate control
> used ONLY for voice control.  Instruments have it.  Effects do not.

I would say only polyphonic instruments have VOICE control. Modular synths
are not polyphonic (at the module level).
 
> > Starting a new note on a VVID when a previous note is still in the 
> > release phase would cause a glissando, while if the VVID has no 
> > playing voice, one would be activated and started as needed to play a 
> > new note. The sender can't reliably know which action will be taken 
> > for each new note, so it really *has* to be left to the synth to 
> > decide. And for this, the lifetime of VVIDs/contexts need to span 
> > zero or more notes, with no upper limit.
> 
> I don't follow you at all - a new note is a new note.  If your instrument
> has a glissando control, use it.  It does the right thing.  Each new note
> gets a new VVID.

I agree with Tim about this.

> Stop button is different than not sending a note-off.  Stop should
> automatically send a note-off to any VVIDs.  Or perhaps more accurately, it
> should send a stop-all sound event.

Yes, I think you want all sound generating things to shut up if you send stop
all, not just note based things.

- Steve 



Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread David Olofson
On Wednesday 15 January 2003 10.42, Tim Hockin wrote:
> > [Lost touch with the list, so I'm trying to catch up here... I
> > did notice that gardena.net is gone - but I forgot that I was
> > using [EMAIL PROTECTED] for this list! *heh*]
>
> Woops!  Welcome back!

Well, thanks. :-)


[...]
> > The easiest way is to just make one event the "trigger", but I'm
> > not sure it's the right thing to do. What if you have more than
> > one control of this sort, and the "trigger" is actually a product
> > of both? Maybe just assume that synths will use the standardized
>
> The trigger is a virtual control which really just says whether the
> voice is on or not.  You set up all your init-latched controls in
> the init window, THEN you set the voice on.
>
> It is conceptually simple, similar to what people know and it fits
> well enough.  And I can't find any problems with it technically.

The only problem I have with it is that it's completely irrelevant to 
continuous control synths - but they can just ignore it, or not have 
the control at all.


> > > And the NOTE/VOICE starter is a voice-control, so any
> > > Instrument MUST have that.
> >
> > This is very "anti modular synth". NOTE/VOICE/GATE is a control
> > type hint. I see no reason to imply that it can only be used for
> > a certain kind of controls, since it's really just a "name" used
> > by users and/or hosts to match ins and outs.
>
> This is not at all what I see as intuitive.  VOICE is a separate
> control used ONLY for voice control.  Instruments have it.  Effects
> do not.

There's this distinct FX vs instrument separation again. What is the 
actual motivation for enforcing that these are kept totally separate?

I don't see the separation as very intuitive at all. The only 
differences are that voices are (sort of) dynamically allocated, and 
that they have an extra dimension of addressing - and that applies 
*only* to polyphonic synths. For mono synths, a Channel is equivalent 
to a Voice for all practical matters.


> > About VVID management:
> > Since mono synths won't need VVIDs, host shouldn't have to
> > allocate any for them. (That would be a waste of resources.)
> > The last case also indicates a handy shortcut you can take
> > if you *know* that VVIDs won't be considered. Thus, I'd
> > suggest that plugins can indicate that they won't use VVIDs.
>
> This is a possible optimization.  I'll add it to my notes.  It may
> really not be worth it at all.

It's also totally optional. If you don't care to check the hint, just 
always use real VVIDs with Voice Controls, and never connect Channel 
Control outs to Voice Control ins, and everything will work fine.


[...]
 > > What might be confusing things is that I don't consider "voice"
> > and "context" equivalent - and VVIDs refer to *contexts* rather
> > than voices. There will generally be either zero or one voice
> > connected to a context, but the same context may be used to play
> > several notes.
>
> I disagree - a VVID refers to a voice at some point in time.  A
> context can not be re-used.  Once a voice is stopped and the
> release has ended, that VVID has expired.

Why? Is there a good reason why a synth must not be allowed to 
function like the good old SID envelope generator, which can be 
switched on and off as desired?

Also, remember that there is nothing binding two notes at the same 
pitch together with our protocol, since (unlike MIDI) VVID != pitch. 
This means that a synth cannot reliably handle a new note starting 
before the release phase of a previous pitch has ended. It'll just 
have to allocate a new voice, completely independent of the old 
voice, and that's generally *not* what you want if you're trying to 
emulate real instruments.

For example, if you're playing the piano with the sustain pedal down, 
hitting the same key repeatedly doesn't really add new strings for 
that note, does it...?

With MIDI, this is obvious, since VVID == note pitch. It's not that 
easy with our protocol, and I don't think it's a good idea to turn a 
vital feature like this into something that synths will have to 
implement through arbitrary hacks, based on the PITCH control. (Hacks 
that may not work at all, unless the synth is aware of which scale 
you're using, BTW.)


> > > No. It means I want the sound on this voice to stop. It implies
> > > the above, too. After a VOICE_OFF, no more events will be sent
> > > for this VVID.
> >
> > That just won't work. You don't want continuous pitch and stuff to
> > work except when the note is on?
>
> More or less, yes!  If you want sound, you should tell the synth
> that by allocating a VVID for it, and turning it on.

And when you enter the release phase? I have yet to see a MIDI synth 
where voices stop responding to pitch bend and other controls after 
NoteOff, and although we're talking about *voice* controls here, I 
think the same logic applies entirely.

Synths *have* to be able to receive control changes for as long as a 
voi

Re: [linux-audio-dev] Catching up with XAP

2003-01-15 Thread Tim Hockin
> [Lost touch with the list, so I'm trying to catch up here... I did 
> notice that gardena.net is gone - but I forgot that I was using 
> [EMAIL PROTECTED] for this list! *heh*]

Woops!  Welcome back!

> > If flags are standardized, it can. Int32: 0 = unused, +ve = plugin 
> > owned, -ve = special meaning.
> 
> Sure. I just don't see why it would be useful, or why the VVID 
> subsystem should be turned into some kind of synth status API.

You originally suggested it.  More on this later.  Let's drop it for now :)

> VVIDs can't end spontaneously. Only synth voices can, and VVIDs are 
> only temporary references to voices. A voice may detach itself from 
> "its" VVID, but the VVID is still owned by the sender, and it's 
> still effectively bound to the same context.

OK, we're having more term conflicts and some ideological conflicts - read
on.

> > Ok, let me make it more clear. Again, same example. The host wants
> > to send 7 parameters to the Note-on. It sends 3 then VELOCITY. But
> > as soon as VELOCITY is received 'init-time' is over. This is bad.
> 
> Yes, it's event ordering messed up. This will never happen unless the 
> events are *created* out of order, or mixed up by some event 

Imagine a simple host that has a dialog to edit 'init' params for a new
note.  The host can't know what order to send init-latched events unless it
knows there is a safe 'go' that it can send.  That is the VOICE_ON.

> Why? So it can "automatically" reorder events at some point?

It may not have any clue as to what events come first/last.

> The easiest way is to just make one event the "trigger", but I'm not 
> sure it's the right thing to do. What if you have more than one 
> control of this sort, and the "trigger" is actually a product of 
> both? Maybe just assume that synths will use the standardized 

The trigger is a virtual control which really just says whether the voice is
on or not.  You set up all your init-latched controls in the init window,
THEN you set the voice on.

It is conceptually simple, similar to what people know and it fits well
enough.  And I can't find any problems with it technically.
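[Editor's note: Tim's init-latch scheme could be sketched like this. Control names and the latching mechanism are invented; the point is only that init-latched controls may arrive in any order, because nothing takes effect until the explicit VOICE_ON "go".]

```c
#include <assert.h>

enum { CTL_VELOCITY, CTL_BRIGHTNESS, CTL_PITCH, N_CTLS };  /* invented */

typedef struct {
    float latched[N_CTLS];  /* values staged during the init window */
    float live[N_CTLS];     /* values the voice actually runs on */
    int   on;
} VoiceCtx;

/* Init-latched controls can be sent in any order... */
static void set_control(VoiceCtx *v, int ctl, float value)
{
    v->latched[ctl] = value;
}

/* ...because the VOICE_ON "trigger" is what commits them all at once. */
static void voice_on(VoiceCtx *v)
{
    for (int i = 0; i < N_CTLS; ++i)
        v->live[i] = v->latched[i];
    v->on = 1;
}
```

So a simple host with an "edit init params" dialog never needs to know which control comes first; it just ends with VOICE_ON.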

> > And the NOTE/VOICE starter is a voice-control, so any Instrument
> > MUST have that. 
> 
> This is very "anti modular synth". NOTE/VOICE/GATE is a control type 
> hint. I see no reason to imply that it can only be used for a certain 
> kind of controls, since it's really just a "name" used by users 
> and/or hosts to match ins and outs.

This is not at all what I see as intuitive.  VOICE is a separate control
used ONLY for voice control.  Instruments have it.  Effects do not.

> About VVID management:
>   Since mono synths won't need VVIDs, host shouldn't have to
>   allocate any for them. (That would be a waste of resources.)
>   The last case also indicates a handy shortcut you can take
>   if you *know* that VVIDs won't be considered. Thus, I'd
>   suggest that plugins can indicate that they won't use VVIDs.

This is a possible optimization.  I'll add it to my notes.  It may really
not be worth it at all.

> > > Why? What does "end a voice" actually mean?
> > 
> > It means that the host wants this voice to stop. If there is a
> > release phase, go to it. If not, end this voice (in a
> > plugin-specific way).
> > Without it, how do you enter the release phase?
> 
> Right, then we agree on that as well. What I mean is just that "end a 
> voice" doesn't *explicitly* kill the voice instantly.

Ok, we agree on this.

> What might be confusing things is that I don't consider "voice" and 
> "context" equivalent - and VVIDs refer to *contexts* rather than 
> voices. There will generally be either zero or one voice connected to 
> a context, but the same context may be used to play several notes.

I disagree - a VVID refers to a voice at some point in time.  A context
cannot be re-used.  Once a voice is stopped and the release has ended, that
VVID has expired.

> > No. It means I want the sound on this voice to stop. It implies the
> > above, too. After a VOICE_OFF, no more events will be sent for this
> > VVID.
> 
> That just won't work. You don't want continuous pitch and stuff to 
> work except when the note is on?

More or less, yes!  If you want sound, you should tell the synth that by
allocating a VVID for it, and turning it on.

> Another example that demonstrates why this distinction matters would 
> be a polyphonic synth with automatic glissando. (Something you can 
> 
> Starting a new note on a VVID when a previous note is still in the 
> release phase would cause a glissando, while if the VVID has 
> playing voice, one would be activated and started as needed to play a 
> new note. The sender can't reliably know which action will be taken 
> for each new note, so it really *has* to be left to the synth to 
> decide. And for this, the lifetime of VVIDs/contexts need to span 
> zero or more notes, with no upper limit.

I don't follow you at all - a new note is a new note.  If your