Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-12 Thread Tim Hockin
Paging David, David are you there?  I'm bored - send me more arguments!!


Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-09 Thread Tim Hockin
> > I personally find this notion bizarre and counter-intuitive.  The
> > idea that the note is turned on by some random control is just
> > awkward.  I'm willing to concede it, but I just want to be on the
> > record that I find it bizarre.

> The way I see it, this "random control" that triggers a note is 
> equivalent to MIDI NoteOn. It can even be a standardized NOTE control 
> that all synths must respond to, one way or another.

OK - in my own notes I had been referring to it as the VOICE control.  You
send the VOICE control a VOICE_ON message and a VOICE_OFF message, or more
simply, send 0 and 1.

> > comprehend, and more consistent. I want to toss MIDI, but not
> > where the convention makes things easy to understand.  I think that
> > explaining the idea that a voice is created but not 'on' until the
> > instrument decides is going to confuse people.  Over engineered. 
> 
> I think the alternative would render continuous control synths even 
> more confusing. "Why do I have to send a VOICE_ON to make the synth 
> work at all?"

> Anyway, it's really an implementation issue. Just don't mention it in 
> the API docs. Just say that the NOTE control corresponds to MIDI 
> NoteOn/Off. Problem solved!

So stroking the NOTE control in your mind is *identical* to sending a 1 to
the VOICE control in my mind.

> > The plugin CAN use the VVID table to store flags about the voice,
> > as you suggested.  I just want to point out that this is
> > essentially the same as the plugin communicating to the host about
> > voices, just more passively.
> 
> Only the host can't really make any sense of the data.

If flags are standardized, it can.  Int32:  0 = unused, +ve = plugin owned,
-ve = special meaning.
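A minimal sketch of what such a standardized int32 convention could look like in C (the names here are hypothetical, not part of any agreed API):

```c
#include <stdint.h>

/* Hypothetical reading of a standardized int32 VVID table entry:
 * 0 = unused, positive = plugin-owned (e.g. a voice index),
 * negative = special meaning reserved by the API. */
typedef enum { VVID_UNUSED, VVID_PLUGIN_OWNED, VVID_SPECIAL } vvid_state;

static vvid_state vvid_classify(int32_t entry)
{
    if (entry == 0)
        return VVID_UNUSED;
    return (entry > 0) ? VVID_PLUGIN_OWNED : VVID_SPECIAL;
}
```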

> > It seems useful.
> 
> Not really, because of the latency, the polling requirement and the
> coarse timing.

When does the host allocate from the VVID list?  Between blocks.  As long as
a synth flags or releases a VVID during its block, the host benefits from
it.  The host has to keep a list of which VVIDs it still is working with,
right?

> > If the plugin can flag VVID table entries as released, the host can
> > have a better idea of which VVIDs it can reuse.
> 
> Why would this matter? Again, the host does *not* do physical voice 
> management.
> 
> You can reuse a VVID at any time, because *you* know whether or not 
> you'll need it again. The synth just doesn't care, as all it will 

right, but if you hit the end of the list and loop back to the start, you
need to find the next VVID that is not in use by the HOST.  That can include
VVIDs that have ended spontaneously (again, hihat sample or whatever).  The
host just needs to discard any currently queued events for that (expired)
VVID.  The plugin is already ignoring them.

> > > This is where the confusion/disagreement is, I think: I don't
> > > think of this event as "INIT_START", but rather as
> > > "CONTEXT_START". I don't see the need for a specific "init" part
> > > of the lifetime of a context. Initialization ends whenever the
> > > synth decides to start playing instead of just tracking controls.
> >
> > Right - this is the bit I find insane.  From the user perspective: 
> > I want to start a note.  Not whenever you feel like it.  Now.  Here
> > are the non-default control values for this voice.  Anything I did
> > not send you, assume the default.  Go.
> 
> So, bowed string instruments, wind instruments and the like are 
> insane designs? :-)

No, they just might not have init params.  The voice is started when the bow
contacts the string.

> A bowed string instrument is "triggered" by the bow pressure and 
> speed exceeding certain levels; not directly by the player thinking 

Disagree.  SOUND is triggered by pressure/velocity.  The instrument is ready
as soon as bow contacts the string.

> > The difference comes when the host sends the 'magic' start-voice
> > control too soon.
> >
> > Assume a synth with a bunch of init-latched controls.
> > Assume velocity is the 'magic trigger'.
> > time0: Host sends VOICE_START/ALLOC/whatever
> > time0: Host sends controls A, B, C  (latched, but no effect from
> > the synth) time0: Host sends control VELOCITY (host activates
> > voice)
> > time0: Host sends controls D, E, F (ignored - they are
> > init-latched, and init is over!)
> >
> > Do you see the problem?
> 
> No, I see a host sending continuous control data to an init-latched 
> synth. This is nothing that an API can fix automatically.

OK, let me make it clearer.  Again, same example.  The host wants to send
7 parameters to the Note-on.  It sends 3 then VELOCITY.  But as soon as
VELOCITY is received 'init-time' is over.  This is bad.  The host has to
know which control ends init time.  Thus the NOTE/VOICE control we seem to
be agreeing on.

> Yes, it has to be triggered by a standardized control, so hosts 
> and/or users will know how to hook synths up with sequencers, 
> controllers and other senders.

Precisely.

> If it has no voice controls, there 

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-09 Thread David Olofson
On Thursday 09 January 2003 10.17, Tim Hockin wrote:
> > > that it has N voices on at all times.  Hmm.  Still, VOICE_ALLOC
> > > is akin to note_on.
> >
> > Well, if a voice can be "started" and still actually both silent
> > and physically deactivated (that is, acting as or being a dump
> > control tracker) - then yes.
>
> I personally find this notion bizarre and counter-intuitive.  The
> idea that the note is turned on by some random control is just
> awkward.  I'm willing to concede it, but I just want to be on the
> record that I find it bizarre.

Well, maybe it's just that continuous control synths are bizarre by 
definition? They just work this way, and there's nothing an API can 
do about it.

The way I see it, this "random control" that triggers a note is 
equivalent to MIDI NoteOn. It can even be a standardized NOTE control 
that all synths must respond to, one way or another.

The event that binds a voice to a VVID however, doesn't really have a 
counterpart in MIDI, and I believe that's where the confusion is. 
Initializing a VVID is actually rather similar to picking a MIDI 
channel for future notes, in that it doesn't trigger a real action in 
the synth.

Well, MIDI *does* actually have virtual voice allocation. It's 
implicit: voice ID == MIDI pitch. The only difference with our VVIDs 
is that they have no fixed relation to note pitch, so you have to 
allocate them in some other way. (Unless you just simulate the MIDI 
approach, that is.)


> I still believe that the VOICE_ON approach is simpler to
> comprehend, and more consistent. I want to toss MIDI, but not
> where the convention makes things easy to understand.  I think that
> explaining the idea that a voice is created but not 'on' until the
> instrument decides is going to confuse people.  Over engineered. 

I think the alternative would render continuous control synths even 
more confusing. "Why do I have to send a VOICE_ON to make the synth 
work at all?"

I think VOICE_ON is like telling people that sequencers know better 
than synths when to start and stop physical voices, and that's very 
far from the truth, especially with continuous control synths.


Anyway, it's really an implementation issue. Just don't mention it in 
the API docs. Just say that the NOTE control corresponds to MIDI 
NoteOn/Off. Problem solved!


[...]
> > having hosts provide VVID entries (which they never access) is
> > stretching it, but it's the cleanest way of avoiding "voice
> > searching" proposed so far, and it's a synth<->local host thing
> > only.
>
> If the host never accesses the VVID table, why is it in the host
> domain and not the plugins?  Simpler linearity?  I don't buy that.

No, the real reason is that having synths allocate the entries would 
force senders to indirectly communicate with synths when making 
connections. And it would move the management work from the host into 
plugins, obviously.

I just don't see a good reason not to have the host do it if it can.

* Less code in plugins.
* Less risk of plugins leaking memory.
* Hosts don't have to ask synths for extra VVIDs
  when making connections.
* VVID entry allocation can be made RT safe by the host,
  instead of requiring fully RT safe generic memory
  management for RT safe connections.


> The plugin CAN use the VVID table to store flags about the voice,
> as you suggested.  I just want to point out that this is
> essentially the same as the plugin communicating to the host about
> voices, just more passively.

Only the host can't really make any sense of the data.


> It seems useful.

Not really, because of the latency, the polling requirement and the 
coarse timing.


> If the plugin can flag VVID table entries as released, the host can
> have a better idea of which VVIDs it can reuse.

Why would this matter? Again, the host does *not* do physical voice 
management.

You can reuse a VVID at any time, because *you* know whether or not 
you'll need it again. The synth just doesn't care, as all it will 
ever notice is that you stopped talking about whatever voice was 
previously attached to that VVID.


[...]
> Am I losing my mind or are we back at a prior scenario?
>
> Init controls:
>  time X: ALLOC_VOICE
>  time X: CONTROL A SET
>  time X: CONTROL B SET
>
> This tastes just like VOICE_ON, SET, SET.



Except that

1) VOICE_ON implies that something is actually "started"
   instantly, whereas ALLOC_VOICE just says "I'm going to
   talk about a new voice, referring to it using this VVID."

2) There is no requirement that the controls are set at
   time X.
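Point 2 can be made concrete with a sketch (event names and struct layout are mine, for illustration): after ALLOC_VOICE binds a VVID to a new context, the sender is free to spread its control sets over later timestamps, which VOICE_ON's "starts now" connotation would not suggest.

```c
/* Hypothetical event records: ALLOC_VOICE only opens a context for the
 * VVID; CONTROL_SET events may follow at any later timestamp. */
typedef enum { EV_ALLOC_VOICE, EV_CONTROL_SET } ev_type;

typedef struct {
    ev_type  type;
    unsigned timestamp;   /* in frames, relative to block start */
    int      vvid;
    double   value;
} xev;

/* A sender spreading initialization over time: */
static const xev example[] = {
    { EV_ALLOC_VOICE, 0,  42, 0.0 },  /* "new context on VVID 42"    */
    { EV_CONTROL_SET, 0,  42, 0.7 },  /* one control set immediately */
    { EV_CONTROL_SET, 64, 42, 0.3 },  /* another, 64 frames later    */
};
```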


>  If controls A and B are
> both required to start a voice, the synth has to expect both.

Actually, *values* would trigger the synth; not the events 
themselves. (The difference is important when dealing with ramp 
events!)

The synth may not bother to start playing until A > 0.5 and B > 0, 
for example. If B is > 0 by default, you only need

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-09 Thread Tim Hockin
> > that it has N voices on at all times.  Hmm.  Still, VOICE_ALLOC is
> > akin to note_on.
> 
> Well, if a voice can be "started" and still actually both silent and 
> physically deactivated (that is, acting as or being a dump control 
> tracker) - then yes.

I personally find this notion bizarre and counter-intuitive.  The idea that
the note is turned on by some random control is just awkward.  I'm willing
to concede it, but I just want to be on the record that I find it bizarre.

I still believe that the VOICE_ON approach is simpler to comprehend, and
more consistent.  I want to toss MIDI, but not where the convention makes
things easy to understand.  I think that explaining the idea that a voice is
created but not 'on' until the instrument decides is going to confuse
people.  Over engineered.  I've said my piece.  If everyone (speak up ppl!)
wants to pursue this notion, I'll go along.

On to more topics..

> Apart from that, I just think it's ugly having both hosts and senders 
> mess with "keys" that really belong in the synth internals. Even 

I agree to some extent.  I'm still just tossing ideas around.

> having hosts provide VVID entries (which they never access) is 
> stretching it, but it's the cleanest way of avoiding "voice 
> searching" proposed so far, and it's a synth<->local host thing only.

If the host never accesses the VVID table, why is it in the host domain and
not the plugins?  Simpler linearity?  I don't buy that.

The plugin CAN use the VVID table to store flags about the voice, as you
suggested.  I just want to point out that this is essentially the same as
the plugin communicating to the host about voices, just more passively.  It
seems useful.

If the plugin can flag VVID table entries as released, the host can have a
better idea of which VVIDs it can reuse.

> Well, what *actually* made me comment on that, is that I thought 
> "vvid_is_active()" had something to do with whether or not the 
> *synth* is using the VVID.
> 
> Was that the idea? If so; again; VVID entries are not for feedback; 
> they simply do not exist to senders. They're a host provided VVID 
> mapping service for synths; nothing else.

That wasn't the idea until you suggested using int32 for error status :)

> They can handle it by actually doing what the hint suggests; sample 
> the "initializer" control values only when a note is started. That 
> way, they'll "sort of" do the right thing even when driven by data 
> generated/recorded for synths/sounds that use these controls as 
> continuous. And more interestingly; continuous control synth/sounds can 
> be driven properly by initializer oriented data.

Am I losing my mind or are we back at a prior scenario?

Init controls:
 time X: ALLOC_VOICE
 time X: CONTROL A SET
 time X: CONTROL B SET

This tastes just like VOICE_ON, SET, SET.  If controls A and B are both
required to start a voice, the synth has to expect both.

> > Agreed - they are semantically the same.  The question is whether
> > or not it has a counterpart to say that init-time controls are
> > done.
> 
> This is where the confusion/disagreement is, I think: I don't think 
> of this event as "INIT_START", but rather as "CONTEXT_START". I don't 
> see the need for a specific "init" part of the lifetime of a context. 
> Initialization ends whenever the synth decides to start playing 
> instead of just tracking controls.

Right - this is the bit I find insane.  From the user perspective:  I want
to start a note.  Not whenever you feel like it.  Now.  Here are the
non-default control values for this voice.  Anything I did not send you,
assume the default.  Go.

I can map this onto your same behavior, but I don't like the way you
characterize it.  The init period is instantaneous.  If I did not provide
enough information (e.g.: no velocity, and default is 0.0) then the voice is
silent.  Same behavior different characterization.

The difference comes when the host sends the 'magic' start-voice control too
soon.

Assume a synth with a bunch of init-latched controls.
Assume velocity is the 'magic trigger'.
time0: Host sends VOICE_START/ALLOC/whatever
time0: Host sends controls A, B, C  (latched, but no effect from the synth)
time0: Host sends control VELOCITY (host activates voice)
time0: Host sends controls D, E, F (ignored - they are init-latched, and
init is over!)

Do you see the problem?  It is easily solved by declaring a context, then
setting init controls, then activating the voice.  But the activation of the
voice has to be consistent for all Instruments, or the host can't get it
right.

If that means that the whitenoise generator with no per-voice controls has
to receive VOICE_ALLOC(vvid) and VOICE_ON(vvid), then that is OK.
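One way to picture the consistency argument is as a tiny checker for the ordering "allocate, then init controls, then activate" (all names hypothetical; this sketches the rule being argued for, not any agreed API):

```c
#include <stddef.h>

typedef enum { EV_VOICE_ALLOC, EV_INIT_CONTROL, EV_VOICE_ON } voice_ev;

/* Returns 1 if the event sequence follows the consistent protocol:
 * one VOICE_ALLOC, then any init controls, then one VOICE_ON.
 * Init controls sent after VOICE_ON would be silently lost by an
 * init-latched synth, so they make the sequence invalid here. */
static int valid_voice_on_sequence(const voice_ev *ev, size_t n)
{
    int allocated = 0, active = 0;
    for (size_t i = 0; i < n; i++) {
        switch (ev[i]) {
        case EV_VOICE_ALLOC:
            if (allocated) return 0;
            allocated = 1;
            break;
        case EV_INIT_CONTROL:
            if (!allocated || active) return 0;
            break;
        case EV_VOICE_ON:
            if (!allocated || active) return 0;
            active = 1;
            break;
        }
    }
    return allocated && active;
}
```

Note that the white-noise case (VOICE_ALLOC immediately followed by VOICE_ON, no controls) is valid under this rule.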

> physically when the sender has reassigned the VVID and the synth has 
> killed the voice. Thus, no need for a "VOICE_END" or similar event 
> either.

The host still has to be able to end a voice, without starting a new one.

> For a continuous vel

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
[Replying to some posts by Tim Hockin.]

On Wednesday 08 January 2003 23.55, Tim Hockin wrote:
[...]
> a voice has to be started.  Maybe not.  Maybe the synth can report
> that it has N voices on at all times.  Hmm.  Still, VOICE_ALLOC is
> akin to note_on.

Well, if a voice can be "started" and still actually both silent and 
physically deactivated (that is, acting as or being a dump control 
tracker) - then yes.

(A sensible API shouldn't make this illegal, and I can't really see 
how it could prevent it.)


> > Also, keep in mind that any feedback of this kind requires a real
> > connection in the reverse direction. This makes the API and hosts
> > more complex - and I still can't see any benefit, really.
>
> We already have a rudimentary reverse path.

Where? (The VVID entries? No, those are "private" to synths!)


[...]
> > > The only problem is that it requires dialog.
> >
> > That's a rather serious problem, OTOH...
>
> And that is what I am not convinced of - I'm not against VVIDs, I
> just want to play it out..

Well, unless I'm forgetting something, it's basically "just" the 
latency issue, which seems to make controlling remote synths 
impractical, if at all possible. (I think it needs further API logic 
to work. See below.)

Apart from that, I just think it's ugly having both hosts and senders 
mess with "keys" that really belong in the synth internals. Even 
having hosts provide VVID entries (which they never access) is 
stretching it, but it's the cleanest way of avoiding "voice 
searching" proposed so far, and it's a synth<->local host thing only.


BTW, we've already agreed on doing this for control addressing. (The 
cookies that replaced control indices.) The big difference is that 
that's not a sample accurate real time protocol.


> > > * no carving of a VVID namespace for controller plugins
> >
> > No, plugins have to do that in real time instead.
>
> No, plugins have a fixed namespace - they dole out real VIDs to
> whomever asks for them.

Well, yes. It's just that VIDs become illegal as voices are stolen, 
so synths have to remember which voices should not accept direct 
addressing. (Those should respond only to the temporary negative 
virtual VIDs, until the host starts using the real VID - whenever it 
finds out about that. That's where >1 block latency becomes seriously 
troublesome - the *real* latency issue.)

I think that's really rather counter-intuitive. Isn't part of the 
point with handing out a VID that you generate yourself that you 
shouldn't have to check incoming VIDs all the time?

Further, I don't like the idea of forcing senders to virtualize VIDs 
internally, just to be able to use *both* VIDs they invent themselves 
(the negative ones) and "real" VIDs returned from the synths.


>  Are VVIDs global or per-plugin?  Sorry, I
> forget what we had decided on that..

For practical reasons, I think they're best handled as per-host. That 
is, they're "global" as long as you're only talking to local synths, 
but you really should think of them as *per-plugin* when you want to 
talk to synths. (That way, you can connect to remote synths 
transparently WRT VVIDs.)

So, officially:
VVIDs are per plugin.

For host authors:
VVIDs *can* be managed globally for the local host.


> > >/* find a vvid that is not currently playing */
> > >do {
> > > this_vvid = vvid_next++;
> >} while (vvid_is_active(this_vvid));
> >
> > Again, this is voice allocation. Leave this to the synth.
>
> You have a pool of VVIDs.  Some of them are long-lasting.  Some are
> short lasting.  You always want to find the LRU VVID.  If there are
> available VVIDs, take the LRU free one.  If there are not, you
> either need to voice-steal at the host or alloc more VVIDs.  Right?

Well, what *actually* made me comment on that, is that I thought 
"vvid_is_active()" had something to do with whether or not the 
*synth* is using the VVID.

Was that the idea? If so; again; VVID entries are not for feedback; 
they simply do not exist to senders. They're a host provided VVID 
mapping service for synths; nothing else.

So, as to "allocating" VVIDs, it's nothing more than a matter of 
keeping "note contexts" apart. In many cases, you don't even need a 
VVID manager for that. All you have to guarantee is that there is one 
VVID for each note you want to control at any time.
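A sketch of the "no manager needed" case: a sender that never controls more than a fixed number of simultaneous notes can just cycle through a fixed VVID range (the size and names are mine, for illustration):

```c
/* If at most MAX_NOTES note contexts are ever live at once, cycling
 * through MAX_NOTES VVIDs guarantees each live note has its own VVID,
 * with no allocation dialog and no VVID manager. */
#define MAX_NOTES 64

static int next_vvid = 0;

static int new_note_vvid(void)
{
    int v = next_vvid;
    next_vvid = (next_vvid + 1) % MAX_NOTES;
    return v;
}
```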


On Thursday 09 January 2003 00.06, Tim Hockin wrote:
> > Yes, and that's my problem with it. Or rather; it's ok for synths
> > to be able to hint that they use controls this way, but designing
> > the voice addressing/allocation scheme around it has serious
> > implications.
>
> Yes, definitely hint.  Anal-retentive hosts will show it as an
> init-option only if it is flagged as such.  Synths still need to
> handle spurious and incorrect events (fall through I'd guess), but
> it is a nice hint to the UI.

They can handle it by actually doing what the hint suggests; sample 
the "initializer" control values only when a no

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> Another alternative (which is looking more and more attractive to me) 
> is to use 'int' instead of 'void *' for VVID entries. Then you can 
> just use one value (-1) for "no voice" and another (-2) for "dead 
> VVID". Other values would be physical voice indices or whatever fits 
> the implementation.

It's attractive to use special values for meanings.  The plugin could tell
the host that a VVID is done by setting it to a value.  Same for errors.

How is this different from a plugin sending event back to the host?  And
what of remote plugins?  Do they have shared-memory access to the VVID table
or is it function-call based?

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> That said, one VVID == one physical voice, or possibly no voice at 
> all. (And if you care, the latter only happens when you're 
> overloading your polyphony.) That is, you can still do whatever you 

Or when the hihat sound is done, but the user has programmed events for
later on that same VVID.

> That's great, but with VVIDs + table, you can have that as well - 
> without two-way communication or senders having to know anything 
> about it. (Senders deal only with indices, and the host manages the 
> table. Only synths will ever access the entries in the table)

The synth needs to know where the table is - we'd need double indirection
so that the table can move if needed.

struct voice *v = myvoices[(*(host->vvid_table))[vvid]];
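Expanded a little (all types here are hypothetical), the double indirection would let the host relocate the table with a single pointer swap, as long as the synth always re-reads the table pointer through the host on each lookup:

```c
#include <stdint.h>
#include <stddef.h>

struct voice { int active; };

/* The host owns a pointer-to-table; relocating the table is just a
 * pointer swap, invisible to synths that always go through the host. */
struct host_info { int32_t **vvid_table; };

static struct voice voices[16];

static struct voice *lookup_voice(const struct host_info *host, int vvid)
{
    int32_t idx = (*host->vvid_table)[vvid];  /* double indirection */
    return (idx < 0) ? NULL : &voices[idx];
}
```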




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> Seriously, though; there has to be *one* of DETACH_VVID and 
> VOICE_ALLOCATE. From the implementation POV, it seems to me that they 
> are essentially equivalent, as both are effectively used when you 
> want to say "I'll use this VVID for a new context from now on." 
> VOICE_ALLOCATE is obviously closer to the right name for that action.

Agreed - they are semantically the same.  The question is whether or not it
has a counterpart to say that init-time controls are done.  As for
redundancy - I see it as minimum requirement.  Suppose I want to turn a
voice on with no control changes from the default (no velocity, nothing).  I
need to send SOMETHING to say "connect this VVID to a voice".  The minimum
required is a VOICE_ON or similar.  For something that has no per-voice
controls (e.g. a white-noise machine) you still need to send some event.
And I'd rather see the voice-on protocol be consistent for all instruments.
If that means you have to send two events (VOICE_ALLOC, VOICE_ON) for the
white-noise maker, then I can live with that.

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> Yes, and that's my problem with it. Or rather; it's ok for synths to 
> be able to hint that they use controls this way, but designing the 
> voice addressing/allocation scheme around it has serious implications.

Yes, definitely hint.  Anal-retentive hosts will show it as an init-option
only if it is flagged as such.  Synths still need to handle spurious and
incorrect events (fall through I'd guess), but it is a nice hint to the UI.



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> It doesn't because the host/sender doesn't really care. A controller 
> or sequencer is just supposed to deliver events. It shouldn't care 
> more about voice activation than it does about the exact relation 
> between events and audio output.

OK, and we seem to agree that a VOICE_ALLOC or PREP_VOICE or something is
needed to mark that voice as re-usable.  

VOICE_INIT_START
VOICE_CONTROL_SET
VOICE_INIT_STOP

It's not NAMED VOICE_ON, but it is semantically the same, yes?

> > If VOICE_ON doesn't make sense for some synth, then it still makes
> > sense for the user.
> 
> Why? (Unless the user is a tracker die-hard.)

Consistency between instruments.  You always need to create a voice, with
some (0-n) initial params.

> VOICE_ON, assuming that there can be continuous velocity synths, has 

a voice has to be started.  Maybe not.  Maybe the synth can report that it
has N voices on at all times.  Hmm.  Still, VOICE_ALLOC is akin to note_on.

> Also, keep in mind that any feedback of this kind requires a real 
> connection in the reverse direction. This makes the API and hosts 
> more complex - and I still can't see any benefit, really.

We already have a rudimentary reverse path.

> So, how do you perform voice stealing?
> 
> You have to tell the host/sender when a voice index becomes invalid, 

yep - as you point out, it requires 1-block latency.

> > It has none of the problems of VVIDs.
> 
> Probably not, if I understand the above correctly.
> 
> > The only problem is that it requires dialog.
> 
> That's a rather serious problem, OTOH...

And that is what I am not convinced of - I'm not against VVIDs, I just want
to play it out..

> > * no carving of a VVID namespace for controller plugins
> 
> No, plugins have to do that in real time instead.

No, plugins have a fixed namespace - they dole out real VIDs to whomever
asks for them.  Are VVIDs global or per-plugin?  Sorry, I forget what we had
decided on that..

> >/* find a vvid that is not currently playing */
> >do {
> > this_vvid = vvid_next++;
> >} while (vvid_is_active(this_vvid));
> 
> Again, this is voice allocation. Leave this to the synth.

You have a pool of VVIDs.  Some of them are long-lasting.  Some are short
lasting.  You always want to find the LRU VVID.  If there are available
VVIDs, take the LRU free one.  If there are not, you either need to
voice-steal at the host or alloc more VVIDs.  Right?
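The LRU idea can be sketched as a circular free queue: released VVIDs go to the tail and allocation takes from the head, so the least recently used free VVID always comes back first (names and size are mine; -1 signals the pool is exhausted and the host must steal or grow):

```c
/* Circular queue of free VVIDs: vvid_alloc() pops the head (the LRU
 * free VVID); vvid_release() pushes to the tail. */
#define NVVIDS 8

static int pool[NVVIDS];
static int head, count;

static void vvid_pool_init(void)
{
    for (int i = 0; i < NVVIDS; i++)
        pool[i] = i;
    head = 0;
    count = NVVIDS;
}

static int vvid_alloc(void)
{
    if (count == 0)
        return -1;              /* empty: steal or grow the pool */
    int v = pool[head];
    head = (head + 1) % NVVIDS;
    count--;
    return v;
}

static void vvid_release(int v)
{
    pool[(head + count) % NVVIDS] = v;
    count++;
}
```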

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 23.28, Steve Harris wrote:
> On Wed, Jan 08, 2003 at 10:38:44 +0100, David Olofson wrote:
> > > That doesn't properly represent how it works though, I would
> > > expect VOICE_ON to map to a new gong instance.
> >
> > That means you can't strike a vibrating gong again...? Not quite
> > following here.
>
> Well there would be some other control for hitting the thing.

Which means VOICE_ON actually means... what? (Now it seems redundant 
even for the synths I thought could use it! ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 22.50, Tim Hockin wrote:
> > > Here are the ones we all agree on:
> >
> > [...]
> >
> > Can I grab that for the site?
>
> That's why I posted it :)

(Well, I kind of suspected that. ;-)
Thanks! Will get to it as soon as I've caught up with the mail...


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 22.25, Steve Harris wrote:
> On Wed, Jan 08, 2003 at 09:36:08 +0100, David Olofson wrote:
> > For implementational reasons, I'm claiming that it makes a lot of
> > sense to assume that synths know what to do when they receive the
> > first event to a VVID that doesn't have a voice. I don't see why
> > the case "this VVID has no voice" should be something that the
> > host/sender *has* to worry about. (Especially since the whole
> > concept is irrelevant to monophonic synths. These will ignore
> > VVIDs anyway.)
>
> But the instrument has to be able to tell control changes to stolen
> voices from control changes for a new voice, or for a voice that
> should replace the new voice under the same ID (like with MIDI's
> pitch-as-ID).
>
> It's easier if it doesn't have to keep track of which are which.

Well, instead of NULLing VVID entries for stolen voices, connect them 
to a dummy voice.

Another alternative (which is looking more and more attractive to me) 
is to use 'int' instead of 'void *' for VVID entries. Then you can 
just use one value (-1) for "no voice" and another (-2) for "dead 
VVID". Other values would be physical voice indices or whatever fits 
the implementation.

(Well, you *can* do that by pointing VVID entries at various objects 
that are not voices, and check the pointers, but that's not too 
sexy... :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 10:38:44 +0100, David Olofson wrote:
> > That doesn't properly represent how it works though, I would expect
> > VOICE_ON to map to a new gong instance.
> 
> That means you can't strike a vibrating gong again...? Not quite 
> following here.

Well there would be some other control for hitting the thing.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 20.09, Tim Hockin wrote:
> > > Hrrm, I can't argue that a temp ID is pretty, but I _like_ that
> > > the plugin allocates the VID.
> >
> > As a plugin author I like that the host does it ;) One less thing
> > to take care of.
>
> As a host author, I like that I can account for your voices by REAL
> voice IDs. :)

Well, I still can't see why. Trackers used to do it this way, but h/w 
synths stopped doing it about when CV started to lose popularity, 
AFAIK.

That said, one VVID == one physical voice, or possibly no voice at 
all. (And if you care, the latter only happens when you're 
overloading your polyphony.) That is, you can still do whatever you 
like with a single voice for as long as you like. The problems that 
MIDI has with addressing voices does not apply here, whether voices 
are addressed directly or virtually.


> Besides that, don't you have an array of voice
> structs or something similar in EVERY plugin?

Nope - so why care what the synth actually does?


> Just return me your
> key.

Why? What do you want it for?

Why not just give *me* (the synth) somewhere to put my key instead, 
so it comes with the VVID without anyone but me ever having to look 
at it?


> They don't have to be sequential or even small, just positive
> (or if you prefer, non-zero).

How about 32 bit, or void *; no additional restrictions?


> In the end, it makes your life easy, too, since your key is my key
> - you don't need to hash or map anything.

That's great, but with VVIDs + table, you can have that as well - 
without two-way communication or senders having to know anything 
about it. (Senders deal only with indices, and the host manages the 
table. Only synths will ever access the entries in the table.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 19.56, Steve Harris wrote:
> On Wed, Jan 08, 2003 at 10:32:20AM -0800, Tim Hockin wrote:
> > > I don't like the fact that the ID changes after the first block
> > > it's used in and that the instrument has to communicate the
> > > internal VID back to the host.
> >
> > Hrrm, I can't argue that a temp ID is pretty, but I _like_ that
> > the plugin allocates the VID.
>
> As a plugin author I like that the host does it ;) One less thing
> to take care of.

Well, you still have to keep track of your voices, but indeed; let 
the host do as much as possible of the work.

(And BTW: Again, the sender is *not* always the host - or you can't 
have event processors in the net.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 19.32, Tim Hockin wrote:
> > I don't like the fact that the ID changes after the first block
> > it's used in and that the instrument has to communicate the
> > internal VID back to the host.
>
> Hrrm, I can't argue that a temp ID is pretty, but I _like_ that the
> plugin allocates the VID.

So do I, but I think there are too many side effects.

And each VVID actually being the index of a void * to do what you 
want with isn't *that* much worse, is it? Not as efficient as "VVID 
*is* my value", but "VVID is an index to my value" isn't many cycles 
away from it.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 19.23, Steve Harris wrote:
> On Wed, Jan 08, 2003 at 09:18:38AM -0800, Tim Hockin wrote:
> > > On Wed, Jan 08, 2003 at 12:09:56 -0800, Tim Hockin wrote:
> > > > What is LUT?  What is voice-marking?  The negative VVIDs are
> > > > valid for the duration of the block, after which they use
> > > > their new names.  It seems simple to me.
> > >
> > > It doesn't to me!
> >
> > Can you elucidate what your objections are?  I've stated my
> > problems with VVIDs :)
>
> I don't like the fact that the ID changes after the first block it's
> used in and that the instrument has to communicate the internal VID
> back to the host.

And (somewhat related to that), even though VVIDs come with a 
host-managed table of void *, int32 or whatever, the scheme doesn't 
have any of the issues with inter-process gateways, wire connections 
and the like that any two-way communication system has. It's not 
latency sensitive (obviously), and the table isn't an issue, since 
it's only used by the *synth*. (The sender only uses the VVIDs, which 
are just indices into the table.)

That is, if you want to talk to a remote synth, just allocate VVIDs 
from the remote *host*, and everything will Just Work(TM).

(Which reminds me; the alloc_vvids() call needs to know which 
receiver the VVIDs are for, obviously. The target queue as an 
argument should be sufficient, as the host should be able to keep 
track of which queues are gateways to remote hosts.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> > Here are the ones we all agree on:
> [...]
> 
> Can I grab that for the site?

That's why I posted it :)



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 14.27, Kjetil S. Matheussen wrote:
[...]
> > Now if you map CC1 (say) to a map then changing that will change
> > the sample; it's just that, in MIDI, velocity can't change after
> > the voice has started.
>
> Not quite right; _I guess_ that in some synths you can change the
> velocity of a note after it has started by sending polyphonic
> aftertouch messages. At least, you can use polyphonic aftertouch
> messages for that purpose.

Yes, but Poly Aftertouch is not the same thing as NoteOn velocity in 
the MIDI protocol. Of course you can map multiple controls to the 
same function, but that's a completely different matter.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 11.50, Steve Harris wrote:
> On Tue, Jan 07, 2003 at 10:40:15 -0800, Tim Hockin wrote:
> > > One great thing about this scheme is that it encourages people
> > > not to think of certain, arbitrary parameters as instantiation
> > > parameters, which are special in some way, 'cos they're not.
> >
> > The way I've seen velocity-mapped samplers is not to change the
> > sample later - you get the sample that maps to the initial
> > velocity, and further changes are just volume/filter
> > manipulation.
>
> Now if you map CC1 (say) to a map then changing that will change the
> sample; it's just that, in MIDI, velocity can't change after the
> voice has started.

Right. But the synth can still latch CC1 only on NoteOn - and in that 
case, you have to send CC1 *before* the NoteOn you want it to affect.

However, note that CC1 is a channel control, and thus doesn't have 
the problem of "what if there's no voice allocated to trace the 
controls!?"


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 11.49, Steve Harris wrote:
> On Wed, Jan 08, 2003 at 12:23:23 -0800, Tim Hockin wrote:
> > > I don't think that matters, eg. my favourite example, the gong
> > > synth. If you issue a voice on it will initialise the gong
> > > graph in its stable state, and when you send a velocity signal
> > > (or whatever) it will simulate a beater striking it.
> >
> > I'd expect it to work quite differently.  I'd expect it to
> > initialize a stable state, and whenever a VOICE_ON comes in,
> > latch the velocity, beater-hardness, and strike coordinates. 
> > Perhaps damping would be a continuous control.
> >
> > Each new strike would be a VOICE_ON and each new strike would
> > affect the global graph.  Really it is monophonic.  Each new
> > voice inherits state from the prior voice.
>
> That doesn't properly represent how it works though, I would expect
> VOICE_ON to map to a new gong instance.

That means you can't strike a vibrating gong again...? Not quite 
following here.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 10.01, Tim Hockin wrote:
[...]
> All this is ok in concept.  I still think it is too implicit and it
> feels 'sneaky'. I'd MUCH rather see a rigid, well-defined protocol
> that forces the few bizarre instruments to do a bit more work
> (really, just a BIT) than a loose, implicit one that is going to be
> easy to screw up.

What's implicit about it, and how can you screw it up?

If a MIDI synth gets Poly Pressure or NoteOn/Off events, it has to 
handle them properly. The MIDI pitch is the "VVID".

This is exactly the same thing, only a VVID does not imply a note 
pitch value.

As to VVID management, that's a separate issue, and I'm just looking 
for the cleanest and simplest approach. I still think VVIDs with a 
host global table of VVID entries is the simplest way that can 
actually work, but there are other ways - maybe even simpler ways.


> VOICE_ALLOCATE: declare a VID as in-use

Ok, this eliminates the need for ever checking VVID entries in a 
synth. Good.

(Although the name is probably not right. You don't really care 
whether you get a physical voice or not, and it doesn't matter to the 
sender. It can confuse things, though - see below.)


> VOICE_CONTROL_SET: set values for a per-voice control for a VID

Yes - and this is where your average synth will have to allocate a 
voice (or a fake "control tracer" voice) - unless it's done in 
VOICE_ALLOCATE.

Anyway, it's optional for synths. My only argument against 
VOICE_ALLOCATE is that it is redundant. Sending any other event to a 
Voice could have the same effect, and would also bring more 
information. (The only information VOICE_ALLOCATE brings is "the 
sender is about to use this voice ID at some point".)


> VOICE_ON: declare that a VID is ready for play

I still don't really see the point with this one. Synths will have to 
allocate a voice as soon as the first voice control event arrives 
anyway, so all this says is "start a note!" - and that doesn't even 
apply to continuous velocity instruments.

Sure, I can see where VOICE_ON can be a handy feature (the gong, for 
example), but it's not strictly related to voice allocation or 
anything like that; it's just a form of control data. I don't see 
what justifies making this more special than a "standardized" NOTE 
control.


> Any VOICE_CONTROL_SET for a VID that does not exist is discarded.

Or just never talk about VVIDs you haven't "initialized". Seems dead 
simple to me, since all you have to do is VOICE_ALLOCATE(vvid) at 
least once before you start using 'vvid'.

(This is where I find "VOICE_ALLOCATE" confusing. To the synth, is an 
"allocated VVID" one that currently has a physical voice, or does any 
initialized VVID count? I can see sequencer and synth coders using 
the same term for two rather different things, if the term "ALLOCATE" 
is used for this.)


> I don't think there is any instrument for which this can't work. 

Right - it's just slightly too much redundancy for my taste. ;-) 

Seriously, though; there has to be *one* of DETACH_VVID and 
VOICE_ALLOCATE. From the implementation POV, it seems to me that they 
are essentially equivalent, as both are effectively used when you 
want to say "I'll use this VVID for a new context from now on." 
VOICE_ALLOCATE is obviously closer to the right name for that action.


> It may not be a perfect fit for some, but I think they are the vast
> minority.
>
> This model fits and doesn't taste too bad.

Well, some synths may ignore VOICE_ON entirely, so I don't see why it 
deserves to be special, but that's my only real problem with this. 
Can't see a technical problem with VOICE_ON, though.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 09:36:08 +0100, David Olofson wrote:
> For implementational reasons, I'm claiming that it makes a lot of 
> sense to assume that synths know what to do when they receive the 
> first event to a VVID that doesn't have a voice. I don't see why the 
> case "this VVID has no voice" should be something that the 
> host/sender *has* to worry about. (Especially since the whole concept 
> is irrelevant to monophonic synths. These will ignore VVIDs anyway.)

But the instrument has to be able to tell control changes to stolen
voices from control changes for a new voice, or for a voice that
should replace the new voice under the same ID (like with MIDI's
pitch-as-ID).

It's easier if it doesn't have to keep track of which are which.
 
> Again, this is voice allocation. Leave this to the synth.

Absolutely.

- Steve 



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 09.48, Tim Hockin wrote:
> > > I agree entirely. If each VVID=a voice then we should just call
> > > them Voice ID's, and let the event-sender make decisions about
> > > voice reappropriation.
> >
> > Actually, they're still virtual, unless we have zero latency
> > feedback from the synths. (Which is not possible, unless
> > everything is function call based, and processing is blockless.)
> > The sender never knows when a VVID loses it's voice, and can't
> > even be sure a VVID *gets* a voice in the first place. Thus, it
> > can't rely on anything that has a fixed relation to physical
> > synth voices.
>
> 
>
> I think it is fair to say that for a block, the sender can assume a
> voice-allocation succeeds.  The only time a VID is ever virtual is
> during the creation block.

No. It'll become "virtual" if the voice gets stolen. Then the synth 
has to remember to listen only to the new virtual ID for that voice, 
and ignore the direct references, since those are for the old context.


> The sender can assume that the negative
> VID exists for that block, and at the end of the block's run() it
> will know whether it can send any further events to that VID.

Provided we don't allow connections with more than one block of 
latency; yes.


> I think this protocol is not so insane.  At least no more insane
> than the VVID allocation scheme.

Well, I'm still seeing a lot more issues and more complexity with this 
scheme than with VVIDs. Could be missing something, though.

Anyway, I'm just about to put VVIDs, the way I think of them, to work 
in Audiality. Let's see how that works out...


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 09.42, Tim Hockin wrote:
> > >I made a post a while back defining all the XAP terminology to
[...]
> Here are the ones we all agree on:
[...]

Can I grab that for the site?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 09.15, Tim Hockin wrote:
> > It's just that there's a *big* difference between latching
> > control values when starting a note and being able to "morph"
> > while the note is played... I think it makes a lot of sense to
> > allow synths to do it either way.
>
> Should controls have a flag that indicates whether they are
> continuous vs note-on ?

They would have to be different *types*, as the semantics are totally 
different.


> They can certainly be both or either one. 

Yes, and that's my problem with it. Or rather; it's ok for synths to 
be able to hint that they use controls this way, but designing the 
voice addressing/allocation scheme around it has serious implications.


> It is a hint to allow the host to send init-params at init time
> only (and a hint to the user). Obviously the plugin has to ignore
> it no matter what.

Right, I think... As long as they're really just normal controls, and 
work *exactly* like normal controls (apart from the way the synth 
"samples" the values), this is fine.

Hosts may or may not care, though; if the user *really* wants to put 
initializers some time before the actual start of notes, it's not 
really a problem, although it may indeed cause some synths to waste 
real voices on tracking voice controls. (That's why I suggested some 
synths might want to use Virtual Voices until sound is to be 
produced.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 09.09, Tim Hockin wrote:
[...]
> > Ok, but I don't see the advantage of this, vs explicitly
> > assigning preallocated VVIDs to new voices. All I see is a rather
> > significant performance hit when looking up voices.
>
> Where a perf hit?

You get a 32 bit integer value, and you have to look up the voice 
object it refers to. No matter how you do it, it's going to be more 
expensive than grabbing the pointer from an array indexed by the 
value.


> > Just grab a new VVID and start playing. The synth will decide
> > when a physical voice should be used, just as it decides what
> > exactly to do with that physical voice.
>
> So how does a synth tell the host how it gets activated?

It doesn't, because the host/sender doesn't really care. A controller 
or sequencer is just supposed to deliver events. It shouldn't care 
more about voice activation than it does about the exact relation 
between events and audio output.


> A
> VOICE_ON event tells the host and the user 'we are allocating a
> VVID to use'. It also tells the synth.

Yes, but why call it "VOICE_ON" when that's not what it means?


> If the synth wants to not
> play anything for Velocity < 0.5, then it should just not play
> anything.  Just because a Voice is silent, doesn't mean it is not
> active.

Right.

For implementational reasons, I'm claiming that it makes a lot of 
sense to assume that synths know what to do when they receive the 
first event to a VVID that doesn't have a voice. I don't see why the 
case "this VVID has no voice" should be something that the 
host/sender *has* to worry about. (Especially since the whole concept 
is irrelevant to monophonic synths. These will ignore VVIDs anyway.)


> This is a separate discussion entirely from VVIDs.

Yes; VVIDs are just a means of addressing voices. Voice allocation is 
a synth implementation issue. That's *exactly* why I don't like the 
idea of mixing these two things up on the API level.


> > With continuous velocity, it is no longer obvious when the synth
> > should actually start playing. Consequently, it seems like wasted
> > code to have the host/sender "guess" when the synth might want
> > to allocate or free voices, since the synth may ignore that
> > information anyway. This is why the explicit note on/off logic
> > seems broken to me.
>
> _Your_ logic seems broken to me :)  If you have a continuous
> controller for Velocity, you have one voice.  So you want a new
> voice, you use a new VVID. How do you standardize this interface so
> a host can present a UI that makes sense?

This VVID thing is just the same thing as MIDI pitch - except that 
VVIDs don't double as note pitch. When you want to control a specific 
note in MIDI, you address it using the MIDI pitch of that note. The 
only difference with VVIDs is that the VVID you use for a particular 
note does not imply a specific pitch.

As to the UI, that's entirely up to the application designer. If you 
want it to look and act like a traditional MIDI sequencer, just use 
one VVID for each pitch in the MIDI scale, and address Voice Controls 
by MIDI pitch.
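A toy illustration of that "traditional sequencer" mapping: reserve one VVID per MIDI pitch once, then address voice controls "by pitch" exactly as a MIDI sequencer would. The block of VVIDs is assumed to come from a hypothetical host allocator returning the first ID of a contiguous range; nothing here is real XAP API.

```c
/* One VVID per MIDI pitch, for a MIDI-style sequencer UI.
 * Illustrative only; 'first_vvid' is assumed to start a contiguous
 * 128-wide block obtained from a hypothetical host allocator. */
static int vvid_for_pitch[128];

void map_midi_pitches(int first_vvid)
{
    int pitch;
    for (pitch = 0; pitch < 128; ++pitch)
        vvid_for_pitch[pitch] = first_vvid + pitch;
}
```

With this table in place, "send a control to the note at pitch 60" becomes an event addressed to `vvid_for_pitch[60]`, mirroring MIDI's pitch-as-ID addressing without the VVID actually implying a pitch.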


> If VOICE_ON doesn't make sense for some synth, then it still makes
> sense for the user.

Why? (Unless the user is a tracker die-hard.)

VOICE_ON, assuming that there can be continuous velocity synths, has 
no corresponding MIDI event, and doesn't really mean anything to the 
user, so I don't see why it would make sense to any user. It's an API 
thing entirely.


> > > Block start:
> > >   time X: voice(-1, ALLOC)/* a new voice is coming */
> > >   time X: velocity(-1, 100)   /* set init controls */
> > >   time X: voice(-1, ON)   /* start the voice */
> > >   time X: (plugin sends host 'voice -1 = 16')
> > >   time Y: voice(-2, ALLOC)
> > >   time Y: velocity(-2, 66)
> > >   time Y: voice(-2, ON)
> > >   time Y: (plugin sends host 'voice -2 = 17')
> > >
> > > From then out the host uses the plugin-allocated voice-ids.  We
> > > get a large (all negative numbers) namespace for new notes per
> > > block.
> >
> > Short term VVIDs, basically. (Which means there will be voice
> > marking, LUTs or similar internally in synths.)
>
> What is LUT?

Look-Up Table. (So you can find objects without searching.)


>  What is voice-marking?

What I'm doing in Audiality; sender hands the synth a Voice ID, and 
the synth puts that in the voice it allocates. When further events 
referring to that Voice ID are received, the synth searches the 
voices for the Voice ID, and then (if a voice is found) performs the 
requested action on that voice.
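The voice-marking scheme described above can be sketched like this. This is my reading of the description, not Audiality's actual code; all names are illustrative.

```c
#include <stddef.h>

/* Sketch of "voice marking": the synth stamps each allocated voice
 * with the sender's Voice ID and searches for that mark on every
 * later event. Illustrative names, not Audiality source. */
#define NUM_VOICES 32
#define VID_FREE   (-1)

typedef struct {
    int vid;     /* sender's Voice ID, or VID_FREE */
    float pitch; /* example per-voice control */
} voice;

static voice voices[NUM_VOICES];

void init_voices(void)
{
    int i;
    for (i = 0; i < NUM_VOICES; ++i)
        voices[i].vid = VID_FREE;
}

/* Linear search for the voice marked 'vid'. NULL means the voice was
 * never allocated, or has since been stolen and re-marked -- in which
 * case the requested action is simply dropped. */
voice *find_voice(int vid)
{
    int i;
    if (vid == VID_FREE)
        return NULL; /* never match unmarked voices */
    for (i = 0; i < NUM_VOICES; ++i)
        if (voices[i].vid == vid)
            return &voices[i];
    return NULL;
}
```

The search makes stolen voices a non-issue for the sender (events to a stale ID just miss), at the cost of an O(voices) lookup per event, which is exactly the trade-off against the table-indexed VVID scheme discussed elsewhere in the thread.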


> The negative VVIDs are valid
> for the duration of the block, after which they use their new
> names.  It seems simple to me.

It's less simple than what I'm doing in Audiality, and pretty much 
only succeeds in providing half a solution to the main problem with 
that system ("When can I safely reuse a voice ID?"), while 
introducing another, more serious problem: The host/sender 
eventually(*) gets

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread David Olofson
On Wednesday 08 January 2003 08.37, Tim Hockin wrote:
> > > Either you need to NEVER re-use a VVID, or you need to tell the
> > > host when an ended VVID is actually re-usable.  Or you need to
> > > have voice-ids allocated by the plugin, and NOT the host, which
> > > I like more.
> >
> > Having the plugins allocate them is a pain; it's much easier if
> > the host allocates them, and just does so from a sufficiently
> > large pool. If you have 2^32 host VVIDs per instrument you can
> > just round-robin them.
>
> Why is it a pain?  I think it is clean.  I've never cared for the
> idea of Virtual Voices.  Either a voice is on, or it is not.  The
> plugin and the host need to agree on that.

I simply don't see why. This is tracker philosophy. MIDI sequencers 
never have a clue about what synths are actually doing, but it all 
works just fine anyway. (Better than traditional trackers ever could, 
for most things, I'd say.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> On Wed, Jan 08, 2003 at 11:09:50 -0800, Tim Hockin wrote:
> > As a host author, I like that I can account for your voices by REAL voice
> > IDs. :)  Besides that, don't you have an array of voice structs or something
> > similar in EVERY plugin?  Just return me your key.  They don't have to be
> > sequential or even small, just positive (or if you prefer, non-zero).
> 
> There needs to be some kind of abstraction because you might be talking
> about a previous incarnation of voice that I've stolen from.

You will have told me that already, so I will know better.  Complexity
belongs in the host or SDK.

(I like these short messages - easier to follow :)



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 11:09:50 -0800, Tim Hockin wrote:
> As a host author, I like that I can account for your voices by REAL voice
> IDs. :)  Besides that, don't you have an array of voice structs or something
> similar in EVERY plugin?  Just return me your key.  They don't have to be
> sequential or even small, just positive (or if you prefer, non-zero).

There needs to be some kind of abstraction because you might be talking
about a previous incarnation of voice that I've stolen from.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> > Hrrm, I can't argue that a temp ID is pretty, but I _like_ that the plugin
> > allocates the VID.
> 
> As a plugin author I like that the host does it ;) One less thing to take
> care of.

As a host author, I like that I can account for your voices by REAL voice
IDs. :)  Besides that, don't you have an array of voice structs or something
similar in EVERY plugin?  Just return me your key.  They don't have to be
sequential or even small, just positive (or if you prefer, non-zero).

In the end, it makes your life easy, too, since your key is my key - you
don't need to hash or map anything.

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 10:32:20AM -0800, Tim Hockin wrote:
> > I don't like the fact that the ID changes after the first block it's used in
> > and that the instrument has to communicate the internal VID back to the
> > host.
> 
> Hrrm, I can't argue that a temp ID is pretty, but I _like_ that the plugin
> allocates the VID.

As a plugin author I like that the host does it ;) One less thing to take
care of.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> I don't like the fact that the ID changes after the first block it's used in
> and that the instrument has to communicate the internal VID back to the
> host.

Hrrm, I can't argue that a temp ID is pretty, but I _like_ that the plugin
allocates the VID.




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 09:18:38AM -0800, Tim Hockin wrote:
> > On Wed, Jan 08, 2003 at 12:09:56 -0800, Tim Hockin wrote:
> > > What is LUT?  What is voice-marking?  The negative VVIDs are valid for the
> > > duration of the block, after which they use their new names.  It seems
> > > simple to me.
> > 
> > It doesn't to me!
> 
> Can you elucidate what your objections are?  I've stated my problems with
> VVIDs :)

I don't like the fact that the id changes after the first block it's used in
and that the instrument has to communicate the internal VID back to the
host.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> On Wed, Jan 08, 2003 at 12:09:56 -0800, Tim Hockin wrote:
> > What is LUT?  What is voice-marking?  The negative VVIDs are valid for the
> > duration of the block, after which they use their new names.  It seems
> > simple to me.
> 
> It doesn't to me!


Can you elucidate what your objections are?  I've stated my problems with
VVIDs :)

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 02:27:51PM +0100, Kjetil S. Matheussen wrote:
> > Now, if you map CC1 (say) to a map then changing that will change the
> > sample; it's just that, in MIDI, velocity can't change after the voice
> > has started.
> >
> Not quite right. _I guess_ that in some synths you can change the velocity
> of a note after it has started by sending polyphonic aftertouch messages.
> At least, you can use polyphonic aftertouch messages for that purpose.

You can use polyphonic aftertouch to change the amplitude of a note, but
MIDI aftertouch is not the same thing as MIDI velocity; attack velocity can
only be sent with NOTE ON messages.

The case we are discussing here is when velocity is mapped to a sample
map, so different samples are selected depending on the velocity.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Kjetil S. Matheussen


On Wed, 8 Jan 2003, Steve Harris wrote:

> On Tue, Jan 07, 2003 at 10:40:15 -0800, Tim Hockin wrote:
> > > One great thing about this scheme is that it encourages people not to
> > > think of certain, arbitrary parameters as instantiation parameters,
> > > which are special in some way, 'cos they're not.
> >
> > The way I've seen velocity-mapped samplers is not to change the sample later
> > - you get the sample that maps to the initial velocity, and further changes
> > are just volume/filter manipulation.
>
> Now, if you map CC1 (say) to a map then changing that will change the
> sample; it's just that, in MIDI, velocity can't change after the voice has
> started.
>
Not quite right. _I guess_ that in some synths you can change the velocity
of a note after it has started by sending polyphonic aftertouch messages.
At least, you can use polyphonic aftertouch messages for that purpose.







Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 12:09:56 -0800, Tim Hockin wrote:
> What is LUT?  What is voice-marking?  The negative VVIDs are valid for the
> duration of the block, after which they use their new names.  It seems
> simple to me.

It doesn't to me!
 
- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Tue, Jan 07, 2003 at 11:35:20 -0800, Tim Hockin wrote:
> All the further discussion leads me to understand we like this?  Eek.  I
> proposed it as a straw man.
> 
> So we send 1 VOICE_ALLOC, n control SETs, and 1 VOICE_ON event?
> 
> is that what we're converging on?

Yes, that now seems reasonable to me. My suspicion is that it's not
necessary, but I don't think it will hurt, and it makes MIDI->XAP
interfaces simple.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Tue, Jan 07, 2003 at 10:40:15 -0800, Tim Hockin wrote:
> One great thing about this scheme is that it encourages people not to
> think of certain, arbitrary parameters as instantiation parameters, which
> are special in some way, 'cos they're not.
> 
> The way I've seen velocity-mapped samplers is not to change the sample later
> - you get the sample that maps to the initial velocity, and further changes
> are just volume/filter manipulation.

Now, if you map CC1 (say) to a map then changing that will change the
sample; it's just that, in MIDI, velocity can't change after the voice has
started.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Tue, Jan 07, 2003 at 11:37:51 -0800, Tim Hockin wrote:
> > > Either you need to NEVER re-use a VVID, or you need to tell the host when an
> > > ended VVID is actually re-usable.  Or you need to have voice-ids allocated
> > > by the plugin, and NOT the host, which I like more.
> > 
> > Having the plugins allocate them is a pain; it's much easier if the host
> > allocates them, and just does so from a sufficiently large pool. If you
> > have 2^32 host VVIDs per instrument you can just round-robin them.
> 
> Why is it a pain?  I think it is clean.  I've never cared for the idea of
> Virtual Voices.  Either a voice is on, or it is not.  The plugin and the
> host need to agree on that.

I don't think the host has to care. Only the instrument can do
voice assignment and priority, and I think it's better if assigning voices
doesn't require a two-way conversation.

If the host assigns the VVIDs then it can refer to that voice instantly,
without having to ask the instrument what ID it assigned to the voice it
just created.
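The "sufficiently large pool" scheme quoted above amounts to a per-instrument 32-bit counter. A minimal sketch, with hypothetical names (this is not from any XAP header):

```c
#include <stdint.h>

/* Hypothetical host-side VVID allocator: one 32-bit counter per
 * instrument, round-robined. With 2^32 IDs per instrument, an ID is
 * not reused until 2^32 voices later. VVID 0 is reserved as invalid
 * so that valid IDs are always non-zero. */
typedef struct {
    uint32_t next_vvid;
} vvid_pool;

static uint32_t vvid_alloc(vvid_pool *p)
{
    if (++p->next_vvid == 0)  /* skip 0 on wrap-around */
        ++p->next_vvid;
    return p->next_vvid;
}
```

The host can then refer to a voice by its VVID immediately, without waiting for the instrument to report back an internal ID.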

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Steve Harris
On Wed, Jan 08, 2003 at 12:23:23 -0800, Tim Hockin wrote:
> > I don't think that matters, e.g. my favourite example, the gong synth.
> > If you issue a voice on, it will initialise the gong graph in its stable
> > state, and when you send a velocity signal (or whatever) it will
> > simulate a beater striking it.
> 
> I'd expect it to work quite differently.  I'd expect it to initialize a
> stable state, and whenever a VOICE_ON comes in, latch the velocity,
> beater-hardness, and strike coordinates.  Perhaps damping would be a
> continuous control.
> 
> Each new strike would be a VOICE_ON and each new strike would affect the
> global graph.  Really it is monophonic.  Each new voice inherits state from
> the prior voice.

That doesn't properly represent how it works, though; I would expect
VOICE_ON to map to a new gong instance.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> VOICE_ALLOCATE is really just a way of saying "I want this VVID 
> hooked up to a new voice - forget about whatever it was connected 
> to." You don't *have* to send one for every note, although you'll 
> probably want to, most of the time. It's a separate feature, and 
> doesn't imply anything about voice allocation; that's what I'm 
> actually trying to say.
> 
> So... Maybe we should actually just go back to the original idea of 
> "DETACH_VVID", but instead use it right before we want a VVID to be 
> attached to a new voice? (The original idea was to send a DETACH_VVID 
> "some time" after you're done with a voice.)
> 
> That way, the first Voice Control Change for a VVID implicitly and 
> "indirectly" causes voice allocation, as a result of no voice (or 
> some sort of marked fake voice - synth implementation dependent) 
> being attached to the VVID.

All this is ok in concept.  I still think it is too implicit and it feels
'sneaky'.  I'd MUCH rather see a rigid, well-defined protocol that forces the
few bizarre instruments to do a bit more work (really, just a BIT) than a 
loose, implicit one that is going to be easy to screw up.

VOICE_ALLOCATE: declare a VID as in-use
VOICE_CONTROL_SET: set values for a per-voice control for a VID
VOICE_ON: declare that a VID is ready for play

Any VOICE_CONTROL_SET for a VID that does not exist is discarded.
I don't think there is any instrument for which this can't work.  It may not
be a perfect fit for some, but I think those are a vast minority.

This model fits and doesn't taste too bad.
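A minimal sketch of what these three events might look like on the wire; the names and struct layout are purely illustrative, not a settled XAP design:

```c
#include <stdint.h>

/* Hypothetical encoding of the rigid three-step protocol above. */
typedef enum {
    VOICE_ALLOCATE,      /* declare a VID as in-use           */
    VOICE_CONTROL_SET,   /* set a per-voice control for a VID */
    VOICE_ON             /* declare the VID ready for play    */
} voice_event_type;

typedef struct {
    voice_event_type type;
    uint32_t timestamp;  /* sample frame within the block        */
    int32_t  vid;
    int32_t  control;    /* only valid for VOICE_CONTROL_SET     */
    float    value;      /* only valid for VOICE_CONTROL_SET     */
} voice_event;

/* "Any VOICE_CONTROL_SET for a VID that does not exist is discarded." */
static int event_is_discarded(const voice_event *ev, int vid_exists)
{
    return ev->type == VOICE_CONTROL_SET && !vid_exists;
}
```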



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> > I agree entirely. If each VVID=a voice then we should just call
> > them Voice ID's, and let the event-sender make decisions about
> > voice reappropriation.
> 
> Actually, they're still virtual, unless we have zero latency feedback 
> from the synths. (Which is not possible, unless everything is 
> function call based, and processing is blockless.) The sender never 
> knows when a VVID loses its voice, and can't even be sure a VVID 
> *gets* a voice in the first place. Thus, it can't rely on anything 
> that has a fixed relation to physical synth voices.



I think it is fair to say that for a block, the sender can assume a
voice-allocation succeeds.  The only time a VID is ever virtual is during
the creation block.  The sender can assume that the negative VID exists for
that block, and at the end of the block's run() it will know whether it can
send any further events to that VID.

I think this protocol is not so insane.  At least no more insane than the
VVID allocation scheme.




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> >I made a post a while back defining all the XAP terminology to date. Read 
> >it if you haven't - it is useful :)
> 
> I was hoping something of this sort existed. It would be very helpful if you 
> could put the list of XAP terminology on the webpage. It would help keep 


Here are the ones we all agree on:

* Plugin:
A chunk of code, loaded or not, that implements this API (e.g. a .so
file or a running instance).
* Host
The program responsible for loading and controlling Plugins.
* Instrument/Source:
An instance of a Plugin that supports the instrument API and is used
to generate audio signals.  Many Instruments will implement audio
output but not input, though they may support both and be used as an
Effect, too.
* Effect:
An instance of a Plugin that supports both audio input and output.
* Output/Sink:
An instance of a Plugin that can act as a terminator for a chain of
Plugins.  Many Outputs will support audio input but not output,
though they may support both and be used as an Effect, too.
* Voice:
A playing sound within an Instrument.  Instruments may have multiple
Voices, or only one Voice.  A Voice may be silent but still active.
* Event:
A time-stamped notification of some change of something.


And some definitions that depend on other unfinished definitions:

* Control:
A knob, button, slider, or virtual thing that modifies behavior of
the Plugin.  Controls can be master (e.g. master volume),
per-Bay (e.g. channel pressure) or per-Voice (e.g. aftertouch).
* Port:
An audio input or output. Ports are on AUDIO Bays.

And some outstanding questions to be answered later (too many discussions
get me too confused!):

* Preset:
A stored or loaded set of Control values.
//FIXME: presets can be multi-channel or single-channel
* EventQueue:
A control input or output.  Plugins may internally have as many
EventQueues as they deem necessary.  The Host will ask the Plugin
for the EventQueue for each Control.
//FIXME: what is the full list of things that have a queue?
// Controls, Plugin(master), each Channel?

And lastly - undecided (again, we'll get there; let's work out
voice-allocation first :)

VVID, Bay, Channel, Templates, etc.



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> I don't think that matters, e.g. my favourite example, the gong synth. If
> you issue a voice on, it will initialise the gong graph in its stable
> state, and when you send a velocity signal (or whatever) it will simulate
> a beater striking it.

I'd expect it to work quite differently.  I'd expect it to initialize a
stable state, and whenever a VOICE_ON comes in, latch the velocity,
beater-hardness, and strike coordinates.  Perhaps damping would be a
continuous control.

Each new strike would be a VOICE_ON and each new strike would affect the
global graph.  Really it is monophonic.  Each new voice inherits state from
the prior voice.




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> It's just that there's a *big* difference between latching control 
> values when starting a note and being able to "morph" while the note 
> is played... I think it makes a lot of sense to allow synths to do it 
> either way.

Should controls have a flag that indicates whether they are continuous vs
note-on?  They can certainly be both or either one.  It is a hint to allow
the host to send init-params at init time only (and a hint to the user).  
Obviously the plugin has to ignore it no matter what.
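As an illustration, such a hint could be a simple bitmask; the flag names here are made up, not a proposed API:

```c
/* Hypothetical control hints: a control advertises whether it is
 * meaningful only at note-on (latched), continuously, or both. */
enum {
    CTRL_HINT_NOTE_ON    = 1 << 0,  /* latched at VOICE_ON        */
    CTRL_HINT_CONTINUOUS = 1 << 1   /* may change during the note */
};

/* The host only needs to keep sending updates after note-on when the
 * control is hinted continuous; plugins must cope either way. */
static int host_sends_updates(int hints)
{
    return (hints & CTRL_HINT_CONTINUOUS) != 0;
}
```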



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-08 Thread Tim Hockin
> > The plugin sees a stream of new VVIDs (maybe wrapping every 2^32
> > notes - probably OK).  It has its own internal rules about voice
> > allocation, and probably has less polyphony than 128 (or whatever
> > the host sets).  It can do smart voice stealing (though the LRU
> > algorithm the host uses is probably good enough).  It hashes VVIDs
> > in the 0-2^32 namespace on its real voices internally.  You only
> > re-use VVIDs every 2^32 notes.
> 
> Ok, but I don't see the advantage of this, vs explicitly assigning 
> preallocated VVIDs to new voices. All I see is a rather significant 
> performance hit when looking up voices.

Where's the perf hit?

> Just grab a new VVID and start playing. The synth will decide when a 
> physical voice should be used, just as it decides what exactly to do 
> with that physical voice.

So how does a synth tell the host how it gets activated?  A VOICE_ON event
tells the host and the user 'we are allocating a VVID to use'.  It also
tells the synth.  If the synth wants to not play anything for Velocity <
0.5, then it should just not play anything.  Just because a Voice is silent,
doesn't mean it is not active.  This is a separate discussion entirely from
VVIDs.

> With continuous velocity, it is no longer obvious when the synth 
> should actually start playing. Consequently, it seems like wasted 
> code to have the host/sender "guess" when the synth might want to 
> allocate or free voices, since the synth may ignore that information 
> anyway. This is why the explicit note on/off logic seems broken to me.

_Your_ logic seems broken to me :)  If you have a continuous controller for
Velocity, you have one voice.  If you want a new voice, you use a new VVID.
How do you standardize this interface so a host can present a UI that makes
sense?

If VOICE_ON doesn't make sense for some synth, then it still makes sense for
the user.

> > Block start:
> >   time X:   voice(-1, ALLOC)/* a new voice is coming */
> >   time X:   velocity(-1, 100)   /* set init controls */
> >   time X:   voice(-1, ON)   /* start the voice */
> >   time X:   (plugin sends host 'voice -1 = 16')
> >   time Y:   voice(-2, ALLOC)
> >   time Y:   velocity(-2, 66)
> >   time Y:   voice(-2, ON)
> >   time Y:   (plugin sends host 'voice -2 = 17')
> >
> > From then out the host uses the plugin-allocated voice-ids.  We get
> > a large (all negative numbers) namespace for new notes per block.
> 
> Short term VVIDs, basically. (Which means there will be voice 
> marking, LUTs or similar internally in synths.)

What is LUT?  What is voice-marking?  The negative VVIDs are valid for the
duration of the block, after which they use their new names.  It seems
simple to me.

> > We get plugin-specific voice-ids (no hashing/translating).
> 
> Actually, you *always* need to do some sort of translation if you 
> have anything but actual voice indices. Also note that there must be 

Because the plugin can allocate them, the plugin need not hash or translate.
It can be a direct index.

> a way to assign voice IDs to non-voices (ie NULL voices) or similar, 
> when running out of physical voices.

If voice-ids are allocated by the plugin, there is no NULL voice.  If you
run out of physical voices you steal a voice or you send back a failure for
the positive voice id.

> You can never return an in-use voice ID, unless the sender is 
> supposed to check every returned voice ID. Better return an invalid 
> voice ID or something...

Host:
    send VOICE_ON for temporary vid -1
    run()
    read returned events
    find the event mapping vid -1 => new_vid
    if (new_vid < 0) {
        /* crap, that voice failed - handle it */
    } else {
        if (hash_lookup(plug->voices, new_vid)) {
            /* whoops, plugin stole that voice - handle it */
        }
        hash_insert(plug->voices, new_vid, something);
    }

If the plugin wants to steal a voice, do so.  If it wants to reject new
voices, do so.  It is simple, easy to code and to understand.
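For illustration, the plugin side of this exchange could look like the sketch below. The structure and names are hypothetical, and a real synth would likely steal a voice by priority rather than simply refuse:

```c
#include <stdint.h>

#define MAX_VOICES 4  /* hypothetical polyphony limit */

typedef struct {
    int active[MAX_VOICES];  /* 1 if the physical voice is in use */
} synth;

/* Allocate a physical voice and return its plugin-chosen VID (kept
 * positive, per the convention above - a direct index plus one, so no
 * hashing is needed plugin-side), or a negative value if the plugin
 * rejects the new voice. */
static int32_t synth_voice_on(synth *s)
{
    for (int i = 0; i < MAX_VOICES; i++) {
        if (!s->active[i]) {
            s->active[i] = 1;
            return i + 1;
        }
    }
    return -1;  /* out of voices: the host sees the failure and handles it */
}
```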

> Well, it's an interesting idea, but it has exactly the same problem 
> as VVIDs, and doesn't solve any of the problems with VVIDs. The fact 

It has none of the problems of VVIDs.  The only problem is that it requires
dialog.

* no carving of a VVID namespace for controller plugins
* the plugin and the host always agree on the active list of voices
  * host sends voice_off with no release
- plugin puts the VID in the free-list immediately
  - host never sends voice_off
- plugin puts the VID in the free-list whenever it finishes
- plugin can alert the host or not
  - host sends events or voice_off too late
- plugin recognizes that the voice is off and ignores events
  - host sends voice_off with a long release
- plugin puts the VID in the free-list as soon as possible
  - host overruns plugin's max poly
- plugin chooses a VID and stops it, returns that VID (steals the voice)
  or plugin re

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> > Either you need to NEVER re-use a VVID, or you need to tell the host when an
> > ended VVID is actually re-usable.  Or you need to have voice-ids allocated
> > by the plugin, and NOT the host, which I like more.
> 
> Having the plugins allocate them is a pain; it's much easier if the host
> allocates them, and just does so from a sufficiently large pool. If you
> have 2^32 host VVIDs per instrument you can just round-robin them.

Why is it a pain?  I think it is clean.  I've never cared for the idea of
Virtual Voices.  Either a voice is on, or it is not.  The plugin and the
host need to agree on that.

More later - lots of email to catch up on



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> > > So maybe VOICE creation needs to be a three-step process.
> > > * Allocate voice
> > > * Set initial voice-controls
> > > * Voice on
> 
> I think this is harder to handle.

All the further discussion leads me to understand we like this?  Eek.  I
proposed it as a straw man.

So we send 1 VOICE_ALLOC, n control SETs, and 1 VOICE_ON event?

is that what we're converging on?



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 20.42, robbins jacob wrote:
[...]
> > OK... I was thinking that the initial mention of the VVID would
> > cause its creation (be that implicit or explicit, though I prefer
> > explicit, I think); thereafter control changes would be applied to
> > the instantiated voice (or the NULL voice if you've run out /
> > declined it).
>
> The initial mention of the VVID is the issue here; certain types of
> voice events are assumed not to allocate a voice (parameter-set
> events). THis is because there is no difference between a tweak on
> a VVID that has had its voice stolen and a tweak intended to
> initialize a voice that arrives before voice-on. We must conclude
> that the plugin will discard both of them. There must be a signal
> to the plugin that a VVID is targeted for activation. we have a few
> options:
>
> ---a voice-activation event is sent, then any initializing events,
> then a voice-on event

That should work, and allows normal Voice Controls to be used as 
initializers.

Also note that nothing says that the synth must allocate a *physical* 
voice for this; it could start with a non-playing virtual voice and 
allocate a physical voice only when it decides to actually make some 
noise. (Or decides to ditch the voice, based on the state of the 
initializers when "voice on" is triggered.)


> ---a voice-on event is sent, with any following events on the same
> timestamp assumed to be initializers

That breaks control/initializer compatibility totally. Initializers 
cannot be Voice Controls if they're special-cased like this.


> ---a voice-activation event is sent and there is no notion of
> voice-on, one or more of the parameters must be changed to produce
> sound but it is a mystery to the sequencer which those are. (I
> don't like this because it makes sequences not portable between
> different instruments)

Sequences are still "portable" if controls are hinted in a sensible 
way. Obviously, you can't make a "note on velocity" synth play 
continous velocity sequences properly, but there isn't much that can 
be done about that, really. The note on velocity data just isn't 
there.


> ---events sent to voiceless VVID's are attached to a temporary
> voice by the plugin, which may later use that to initialize an
> actual voice. This negates the assumption that voiceless VVID
> events are discarded.

Sort of. It's hard to avoid this if initializers really are supposed 
to have any similarities with controls, though.

Also, I don't think it's much of a problem. When you run out of 
voices, you're basically abusing the synth, and problems are to be 
expected. (Though, we obviously want to minimize the side effects. 
Thus voice stealing - which still works with this scheme.)


> #2 is just an abbreviated form of #1, as I argue below. (Unless
> you allow the activate-to-voice_on cycle to span multiple
> timestamps, which seems undesirable.)

Well, you can't really prevent it, if initializers are supposed to be 
control values, can you?


> > > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > > what triggers it in a normal synth, but with this scheme, you
> > > > have to wait for some vaguely defined "all parameters
> > > > available" point.
>
> We can precisely define initialization parameters to be all the
> events sharing the same VVID and timestamp as the VOICE_ON event.

Yes - but that's not compatible with standard Control semantics in any 
way...


> This means that the "all parameters available" point is at the same
> timestamp as the VOICE_ON event, but after the last event with that
> timestamp.

And that point is a bit harder for synths to keep track of than 
"whenever voice on occurs." This is more like script parsing than 
event decoding.

No major issue, but the special casing of initializers is, IMO.


> If we want to include a VOICE_ALLOCATE event then the sequence
> goes: timestamp-X:voice-allocate,
> timestamp-X:voice-parameter-set(considered an initializer if
> appropriate), timestamp-X:voice-on, timestamp-X+1:more
> voice-parameter-sets (same as any other parameter-set)
>
> But this sequence can be shortened by assuming that the voice-on
> event at the last position for timestamp-X is implicit:
> timestamp-X:voice-on(signifying the same thing as voice-allocate
> above) timestamp-X:voice-parameter-set(considered an initializer if
> appropriate), (synth actually activates voice here),
> timestamp-X+1:other-events

The latter isn't very different from assuming that the first control 
event that references a VVID results in a (virtual or physical) voice 
being allocated. In fact, the synth will have to do *something* to 
keep track of controls as soon as you start talking to a voice, and, 
when appropriate, play some sound. Whether or not the synth allocates 
a *physical* voice right away is really an implementation issue.

VOICE_ALLOCATE is really just a way of saying "I want this VVID 
hooked up to a new voice - forget about whate

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 20.40, robbins jacob wrote:
> >I made a post a while back defining all the XAP terminology to
> > date. Read it if you haven't - it is useful :)
>
> I was hoping something of this sort existed. It would be very
> helpful if you could put the list of XAP terminology on the
> webpage. It would help keep everybody on the same page when
> discussing.;)  And it would help people to join the discussion
> without spending the 10-15 hours it takes to read December's posts.

Yep. (I'm maintaining the web page.) I'll try to find the post(s) 
and put something together ASAP. (Oh, and I have another logo draft I 
made the other day.)


> >VVID allocation and voice allocation are still two different
> > issues. VVID is about allocating *references*, while voice
> > allocation is about actual voices and/or temporary voice control
> > storage.
>
> I agree entirely. If each VVID=a voice then we should just call
> them Voice ID's, and let the event-sender make decisions about
> voice reappropriation.

Actually, they're still virtual, unless we have zero latency feedback 
from the synths. (Which is not possible, unless everything is 
function call based, and processing is blockless.) The sender never 
knows when a VVID loses its voice, and can't even be sure a VVID 
*gets* a voice in the first place. Thus, it can't rely on anything 
that has a fixed relation to physical synth voices.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread robbins jacob
I made a post a while back defining all the XAP terminology to date. Read 
it if you haven't - it is useful :)

I was hoping something of this sort existed. It would be very helpful if you 
could put the list of XAP terminology on the webpage. It would help keep 
everybody on the same page when discussing.;)  And it would help people to 
join the discussion without spending the 10-15 hours it takes to read 
December's posts.


VVID allocation and voice allocation are still two different issues. VVID 
is about allocating *references*, while voice allocation is about actual 
voices and/or temporary voice control storage.
I agree entirely. If each VVID=a voice then we should just call them Voice 
ID's, and let the event-sender make decisions about voice reappropriation.


---jacob robbins...








Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread robbins jacob
It's obvious when you consider that "VVID has no voice" can happen 
*before* the synth decides to start the voice; not just after a voice has 
detached from the VVID as a result of voice stealing. At that point, only 
the value of the control that triggered "voice on" will be present; all 
other controls have been lost. Unless the host/sender is somehow forced to 
resend the values, the synth will have to use default values or something.


OK... I was thinking that the initial mention of the VVID would cause its 
creation (be that implicit or explicit, though I prefer explicit, I think); 
thereafter control changes would be applied to the instantiated voice (or 
the NULL voice if you've run out / declined it).

The initial mention of the VVID is the issue here; certain types of voice 
events are assumed not to allocate a voice (parameter-set events). This is 
because there is no difference between a tweak on a VVID that has had its 
voice stolen and a tweak intended to initialize a voice that arrives before 
voice-on. We must conclude that the plugin will discard both of them. There 
must be a signal to the plugin that a VVID is targeted for activation. We 
have a few options:

---a voice-activation event is sent, then any initializing events, then a 
voice-on event

---a voice-on event is sent, with any following events on the same timestamp 
assumed to be initializers

---a voice-activation event is sent and there is no notion of voice-on; one 
or more of the parameters must be changed to produce sound, but it is a 
mystery to the sequencer which those are. (I don't like this because it 
makes sequences not portable between different instruments.)

---events sent to voiceless VVID's are attached to a temporary voice by the 
plugin, which may later use that to initialize an actual voice. This 
negates the assumption that voiceless VVID events are discarded.


  #2 is just an abbreviated form of #1, as I argue below. (Unless you allow 
the activate-to-voice_on cycle to span multiple timestamps, which seems 
undesirable.)

> > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > what triggers it in a normal synth, but with this scheme, you
> > > have to wait for some vaguely defined "all parameters
> > > available" point.
We can precisely define initialization parameters to be all the events 
sharing the same VVID and timestamp as the VOICE_ON event. This means that 
the "all parameters available" point is at the same timestamp as the 
VOICE_ON event, but after the last event with that timestamp.

If we want to include a VOICE_ALLOCATE event then the sequence goes: 
timestamp-X:voice-allocate, timestamp-X:voice-parameter-set(considered an 
initializer if appropriate), timestamp-X:voice-on, timestamp-X+1:more 
voice-parameter-sets (same as any other parameter-set)

But this sequence can be shortened by assuming that the voice-on event at 
the last position for timestamp-X is implicit:  
timestamp-X:voice-on(signifying the same thing as voice-allocate above) 
timestamp-X:voice-parameter-set(considered an initializer if appropriate), 
(synth actually activates voice here), timestamp-X+1:other-events
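The shortened sequence above can be written out as a flat event list to make the timestamp rule concrete (purely illustrative names):

```c
/* All events at timestamp X on the same VVID act as initializers; the
 * synth activates the voice only after the last timestamp-X event. */
typedef struct {
    int timestamp;
    const char *event;
} seq_event;

static const seq_event sequence[] = {
    { 0, "voice-on (doubles as voice-allocate)" },
    { 0, "voice-parameter-set (initializer)" },
    /* synth actually activates the voice here */
    { 1, "voice-parameter-set (ordinary control change)" },
};
```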


---Jacob Robbins.











Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> One great thing about this scheme is that it encourages people not to
> think of certain, arbitrary parameters as instantiation parameters, which
> are special in some way, 'cos they're not.

The way I've seen velocity-mapped samplers is not to change the sample later
- you get the sample that maps to the initial velocity, and further changes
are just volume/filter manipulation.




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 17.10, Steve Harris wrote:
[...]
> OK... I was thinking that the initial mention of the VVID would
> cause its creation (be that implicit or explicit, though I prefer
> explicit, I think); thereafter control changes would be applied to
> the instantiated voice (or the NULL voice if you've run out /
> declined it).

Well, it is sort of explicit in that the synth will have to do 
*something* about it - or Voice Controls just won't work as intended. 

Now, whether it's a physical voice or a virtual voice (basically a dummy 
control tracer) that's being instantiated is another matter. You can 
still actually keep track of more VVIDs than there are physical 
channels, as described earlier. (Two levels of "out of voices".)





Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:50:26 +0100, David Olofson wrote:
> > > Right - so you can't play a polyphonic synth with a continous
> > > velocity controller, unless you track and re-send the controls
> > > that the synth happens to treat as note parameters.
> >
> > I don't understand why.
> 
> It's obvious when you consider that "VVID has no voice" can happen 
> *before* the synth decides to start the voice; not just after a voice 
> has detached from the VVID as a result of voice stealing. At that 
> point, only the value of the control that triggered "voice on" will 
> be present; all other controls have been lost. Unless the host/sender 
> is somehow forced to resend the values, the synth will have to use 
> default values or something.

OK... I was thinking that the initial mention of the VVID would cause its
creation (be that implicit or explicit, though I prefer explicit, I think);
thereafter control changes would be applied to the instantiated voice (or
the NULL voice if you've run out / declined it).
 
> > There is a difference between "turning down" (implies
> > communicating) and ignoring (silent).
> 
> "Turning down" was meant as seen from the synth implementation POV. 
> That is, if a synth "turns down" a voice allocation for a VVID, that 
> VVID just gets routed to "the NULL Voice". Future events with that 
> VVID are ignored.

Fine then.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:38:05 +0100, David Olofson wrote:
> It might seem handy to allow synths to explicitly say that some 
> controls *must* be rewritten at the instant of a VOICE_ON, but I 
> don't think it's useful (it's useless for continuous velocity 
> instruments, at least) enough to motivate the cost.

Right, we are in agreement then :)

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 15.08, Steve Harris wrote:
> On Tue, Jan 07, 2003 at 01:49:23 +0100, David Olofson wrote:
> > > > The issue here is this: Where does control data go when there
> > > > is no voice assigned to the VVID?
> > >
> > > They get thrown away.
> >
> > Right - so you can't play a polyphonic synth with a continuous
> > velocity controller, unless you track and re-send the controls
> > that the synth happens to treat as note parameters.
>
> I don't understand why.

It's obvious when you consider that "VVID has no voice" can happen 
*before* the synth decides to start the voice; not just after a voice 
has detached from the VVID as a result of voice stealing. At that 
point, only the value of the control that triggered "voice on" will 
be present; all other controls have been lost. Unless the host/sender 
is somehow forced to resend the values, the synth will have to use 
default values or something.


> > > Well, I think it's OK, because the note will not be used to
> > > render any samples until after all the events have been
> > > processed.
> >
> > But who guarantees that whatever the "trigger" event is, also
> > comes with events for the "parameter" controls? It's trivial (but
> > still has to be done!) with explicit NOTE_ON events when assuming
> > that NOTE_ON means "allocate and start voice NOW", but it's not
> > at all possible if the synth triggers on voice control changes.
>
> I don't think that matters, e.g. my favourite example, the gong
> synth. If you issue a voice on, it will initialise the gong graph in
> its stable state, and when you send a velocity signal (or whatever)
> it will simulate a beater striking it.

Yeah - that's a perfect example of "allocate voice on first use of 
VVID". This breaks down when you run out of physical voices, unless 
you do instant voice stealing upon "allocate new voice for VVID". (It 
might make sense to just do it that way, but it definitely is a waste 
of resources if allocating a voice is frequently done before the 
synth actually decides to start playing.)


> > > > If you really want your voice allocator to be able to turn
> > > > down requests based on "parameters"
> > >
> > > I think this would be complex.
> >
> > Not more complex than any "note-on latched" controls - and I
> > don't think it's realistic to eliminate those.
>
> There is a difference between "turning down" (implies
> communicating) and ignoring (silent).

"Turning down" was meant as seen from the synth implementation POV. 
That is, if a synth "turns down" a voice allocation for a VVID, that 
VVID just gets routed to "the NULL Voice". Future events with that 
VVID are ignored.
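For concreteness, the "NULL Voice" idea described above might be sketched roughly like this (a minimal illustration under assumed names - none of this is a real XAP API): a synth that turns down a voice allocation points the VVID at a shared dummy voice, so later events for that VVID are absorbed with no per-event special-casing.

```c
#include <stddef.h>

/* Hypothetical sketch of routing declined VVIDs to a shared NULL voice. */

#define MAX_VOICES 4
#define MAX_VVIDS  16

typedef struct Voice { float velocity; int in_use; } Voice;

static Voice voices[MAX_VOICES];
static Voice null_voice;            /* events routed here are ignored   */
static Voice *vvid_map[MAX_VVIDS];  /* VVID -> physical or NULL voice   */

/* Try to attach a physical voice to a VVID; fall back to the NULL voice. */
static void vvid_alloc(int vvid)
{
    for (int i = 0; i < MAX_VOICES; i++) {
        if (!voices[i].in_use) {
            voices[i].in_use = 1;
            vvid_map[vvid] = &voices[i];
            return;
        }
    }
    vvid_map[vvid] = &null_voice;   /* "turned down": absorb all events */
}

/* Voice control events just write through the map - no special cases. */
static void vvid_set_velocity(int vvid, float v)
{
    vvid_map[vvid]->velocity = v;
}
```

The point of the pattern is that the event-handling fast path never has to ask "is this VVID live?"; declined VVIDs cost one pointer write and nothing more.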


//David Olofson - Programmer, Composer, Open Source Advocate




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 15.03, Steve Harris wrote:
[...]
> > It's just that there's a *big* difference between latching
> > control values when starting a note and being able to "morph"
> > while the note is played... I think it makes a lot of sense to
> > allow synths to do it either way.
>
> I'm not convinced there are many things that should be latched. I
> guess if you're trying to emulate MIDI hardware, but there you can
> just ignore velocity that arrives after the voice on.

I don't think velocity mapping qualifies as "emulating MIDI 
hardware", though. Likewise with "impact position" for drums. 
Selecting a waveform at voice on is very different from switching 
between waveforms during playback in a useful way. Changing the 
position dependent parameters for playing sounds just because "the 
drummer moves his aim" is simply incorrect.

Either way, the problem with ignoring velocity after voice on is that 
you have to consider event *timestamps* rather than event ordering. 
This breaks the logic of timestamped event processing, IMHO.


> I guess I have no real problem with two-stage voice initialisation.
> It certainly beats having two classes of event.

Yes, and that's the more important side of this. Treating 
"parameters" as different from controls has implications for both 
hosts/senders and synths, whereas defining "voice on latched 
controls" as a synth implementation thing has no implications for 
hosts/senders.

It might seem handy to allow synths to explicitly say that some 
controls *must* be rewritten at the instant of a VOICE_ON, but I 
don't think it's useful (it's useless for continuous velocity 
instruments, at least) enough to motivate the cost.


//David Olofson - Programmer, Composer, Open Source Advocate




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 01:49:23 +0100, David Olofson wrote:
> > > The issue here is this: Where does control data go when there is
> > > no voice assigned to the VVID?
> >
> > They get thrown away.
> 
> Right - so you can't play a polyphonic synth with a continuous 
> velocity controller, unless you track and re-send the controls that 
> the synth happens to treat as note parameters.

I don't understand why.
 
> > Well, I think it's OK, because the note will not be used to render
> > any samples until after all the events have been processed.
> 
> But who guarantees that whatever the "trigger" event is, also comes 
> with events for the "parameter" controls? It's trivial (but still has 
> to be done!) with explicit NOTE_ON events when assuming that NOTE_ON 
> means "allocate and start voice NOW", but it's not at all possible if 
> the synth triggers on voice control changes.

I don't think that matters, e.g. my favourite example, the gong synth. If
you issue a voice on, it will initialise the gong graph in its stable
state, and when you send a velocity signal (or whatever) it will simulate
a beater striking it.

> > > If you really want your voice allocator to be able to turn
> > > down requests based on "parameters"
> >
> > I think this would be complex.
> 
> Not more complex than any "note-on latched" controls - and I don't 
> think it's realistic to eliminate those.

There is a difference between "turning down" (implies communicating) and
ignoring (silent).

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 01:41:43 +0100, David Olofson wrote:
> Yeah, but you may not want control values to be latched except when a 
> note is actually triggered (be it explicitly, or as a result of a 
> control change). Also, this voice.set_voice_map() may have significant 
> cost, and it seems like a bad idea to have the API practically 
> enforce that such things are done twice for every note.

Right, but the cost is not doubled.
 
> > > > So maybe VOICE creation needs to be a three-step process.
> > > > * Allocate voice
> > > > * Set initial voice-controls
> > > > * Voice on
> >
> > I think this is harder to handle.
> 
> Why?

More events. I guess it's not important, now I think about it.

> It's just that there's a *big* difference between latching control 
> values when starting a note and being able to "morph" while the note 
> is played... I think it makes a lot of sense to allow synths to do it 
> either way.

I'm not convinced there are many things that should be latched. I guess if
you're trying to emulate MIDI hardware, but there you can just ignore
velocity that arrives after the voice on.

I guess I have no real problem with two-stage voice initialisation. It
certainly beats having two classes of event.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.28, Steve Harris wrote:
[...]
> > This is also for debate - David dislikes (and I agree) the notion
> > that you have to send a note-on but the plugin does not have
> > enough info to handle (for example) a velocity-mapped sampler
> > until later.  Process events in order.  So a few ideas are on the
> > table.
>
> You do have enough information, it's just that it may be superseded
> later. Velocity can have a default.
>
> One great thing about this scheme is that it encourages people not
> to think of certain, arbitrary parameters as instantiation
> parameters, which are special in some way, 'cos they're not.

Well, they *are* special in that they're latched only at certain 
points. The problem is that if synths cannot effectively *implement* 
it that way, it becomes the host's/sender's responsibility to know 
the difference, and make sure that these controls are handled the 
right way. And unless the host/sender can tell the synth exactly when 
to latch the values, there is no way to get this right.

What I'm saying is that synths should preferably behave as if *all* 
voice controls ever received are tracked on a per-VVID basis, so they 
can be latched as intended when the synth decides to start a physical 
voice. That way, you can play continuous control data on latched 
control synths and vice versa, without nasty side effects or "smart" 
event processing in the host/sender.
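The per-VVID tracking suggested above can be sketched in a few lines (hypothetical names, not a real XAP API): every incoming voice control just updates a per-VVID shadow record, and whenever the synth decides to trigger, it latches a complete snapshot - regardless of the order, or how long ago, the host sent the values.

```c
#include <string.h>

/* Sketch: track the latest value of every voice control per VVID,
 * so a "note-on latched" synth sees consistent values even when fed
 * by a continuous controller. Names are illustrative only. */

#define N_CONTROLS 3        /* e.g. VELOCITY, PITCH, IMPACT_POS */
#define MAX_VVIDS  16

typedef struct {
    float ctl[N_CONTROLS];  /* latest received value of each control */
} VVIDState;

static VVIDState vvids[MAX_VVIDS];

/* Every incoming voice control event just updates the shadow state. */
static void vvid_control(int vvid, int control, float value)
{
    vvids[vvid].ctl[control] = value;
}

/* When the synth decides to start a physical voice, latch a complete
 * snapshot of all controls for that VVID. */
static void voice_start(int vvid, float latched[N_CONTROLS])
{
    memcpy(latched, vvids[vvid].ctl, sizeof vvids[vvid].ctl);
}
```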

Obviously, this just isn't possible if the number of VVIDs used to 
control a synth is unknown or very large. However, I don't see a 
reason why you would use many more VVIDs than there are physical 
voices, so I don't see this as a real problem.

Synths that don't allocate physical voices as soon as a VVID gets 
its first control may have to allocate virtual voices upon 
connection, but I think that's acceptable, considering the 
alternatives.


//David Olofson - Programmer, Composer, Open Source Advocate




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.23, Steve Harris wrote:
> On Mon, Jan 06, 2003 at 11:17:07 +0100, David Olofson wrote:
> > These "instantiation parameters" are in fact just control events,
> > and they relate to "whatever voice is assigned to the provided
> > VVID".
> >
> > The issue here is this: Where does control data go when there is
> > no voice assigned to the VVID?
>
> They get thrown away.

Right - so you can't play a polyphonic synth with a continuous 
velocity controller, unless you track and re-send the controls that 
the synth happens to treat as note parameters.


> > What I'm saying is that if you send the "trigger" event first,
> > followed by the "parameters", you require synths to process a
> > number of control events *before* actually performing the trigger
> > action. That simply does not mix with the way events are supposed
> > to be handled.
>
> Well, I think it's OK, because the note will not be used to render
> any samples until after all the events have been processed.

But who guarantees that whatever the "trigger" event is, also comes 
with events for the "parameter" controls? It's trivial (but still has 
to be done!) with explicit NOTE_ON events when assuming that NOTE_ON 
means "allocate and start voice NOW", but it's not at all possible if 
the synth triggers on voice control changes.


> > If you really want your voice allocator to be able to turn
> > down requests based on "parameters"
>
> I think this would be complex.

Not more complex than any "note-on latched" controls - and I don't 
think it's realistic to eliminate those.


//David Olofson - Programmer, Composer, Open Source Advocate




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.19, Steve Harris wrote:
> On Tue, Jan 07, 2003 at 03:21:22 +0100, David Olofson wrote:
> > > > Problem is step 1. If the voice allocator looks at velocity,
> > > > it won't work, since that information is not available when
> > > > you do the allocation. Likewise for setting up waveforms with
> > > > velocity maps and the like.
>
> But in the general case this is just mapping to parameters, you
> have to be able to handle parameter changes during the run of the
> instrument, so why not at creation time too?
[...]

Yeah, but you may not want control values to be latched except when a 
note is actually triggered (be it explicitly, or as a result of a 
control change). Also, this voice.set_voice_map() may have significant 
cost, and it seems like a bad idea to have the API practically 
enforce that such things are done twice for every note.


> > > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > > what triggers it in a normal synth, but with this scheme, you
> > > > have to wait for some vaguely defined "all parameters
> > > > available" point.
> > >
> > > So maybe VOICE creation needs to be a three-step process.
> > > * Allocate voice
> > > * Set initial voice-controls
> > > * Voice on
>
> I think this is harder to handle.

Why?


> > >  This is essentially saying that initial parameters are
> > > 'special', and they are in many ways (I'm sure velocity maps
> > > are just one case).
> >
> > Yes; there can be a whole lot of such parameters for percussion
> > instruments, for example. (Drums, cymbals, marimba etc...)
>
> I still don't think they're special. Velocity maps only behave this
> way in MIDI because you can't change velocity during the note in
> MIDI; you still need to be able to call up the map instantly, so it
> doesn't matter if you don't know the map at the point the note is
> 'created'.

It's just that there's a *big* difference between latching control 
values when starting a note and being able to "morph" while the note 
is played... I think it makes a lot of sense to allow synths to do it 
either way.


> > > Or we can make the rule that you do not choose an entry in a
> > > velocity map until you start PROCESSING a voice, not when you
> > > create it.  VOICE_ON is a placeholder.  The plugin should see
> > > that a voice is on that has no velocity-map entry and deal with
> > > it when processing starts.  Maybe not.
> >
> > No, I think that's just moving the problem deeper into synth
> > implementations.
>
> Why? You can create it with the map for velocity=0.0 or whatever,
> and change it if needed. This seems like it will lead to cleaner
> instrument code.

Cleaner, maybe, but slower and more importantly, incorrect.

Bad Things(TM) will happen if you happen to play a "note-on latched 
velocity" synth with data that was recorded from a continuous velocity 
controller, for example. What are you supposed to do with the 
sequenced data (or real time input, for that matter!) to have correct 
playback? I don't like the idea of having two different *types* of 
controls (continuous and "event latched") for everything, just to deal 
with this.
 

//David Olofson - Programmer, Composer, Open Source Advocate




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 09.15, Tim Hockin wrote:
> > continuous note on a particular VVID. The sequencer only reuses a
> > VVID once it has ended any previous notes on that VVID. The
> > sequencer can allocate a
>
> This is the current issue at hand - just because the sequencer has
> ended a VVID with a voice-off, doesn't mean the voice is off.  It
> just begins the release phase of the envelope, on some synths. 
> This could take a while. Either you need to NEVER re-use a VVID, or
> you need to tell the host when an ended VVID is actually re-usable.
>  Or you need to have voice-ids allocated by the plugin, and NOT the
> host, which I like more.

Or you can tell the plugin when you want a particular VVID to be 
assigned to a new voice.


[...]
> > -HOWEVER- there is no rule that a note has any pitch or velocity
> > or any other particular parameter, it is just that the Voice-On
> > tells the voice to start making sound and the Voice-Off tells the
> > voice to stop making sound.
>
> Correct.  Though David is half-heartedly arguing that maybe
> Velocity is the true note-on.  I disagree.

I'm actually arguing that *no* specific event or control is the "note 
on". It could be triggered by VELOCITY changes, or by any other Voice 
Control or combination of Voice Controls. It's entirely synth 
specific.


> > -ALSO HOWEVER- the entity which sends voice-on and off messages
> > may not directly refer to the object's voices. Instead, the event
> > sender puts
>
> ..in the currently understood VVID model.
>
> > voices to do. This differential is in place because sequencers
> > typically send more concurrent notes than the plugin has actual
> > voices for AND the
>
> On the contrary, hopefully you will rarely exceed the max polyphony
> for each instrument.
>
> > other words, it is the role of the plugin to decide whether or
> > not to steal
>
> I believe you should always steal notes, but I suppose there will
> be some instrument plugin some lunatic on here will devise that
> does not follow that. Newer notes are always more important than
> older notes,

Well, it's not quite that simple with continuous velocity 
instruments... You may want to steal a voice when the velocity is 
high enough, and possibly even have some other "context" steal it 
back later on. (It would be the right thing to do - but if someone 
actually cares to implement it is another matter. It is, after all, 
little more than an emergency solution.)


> but if you exceed max poly, a red light should go off!

Yes! Bad things *will* happen in 99% of cases. (Whether you can 
actually hear the difference in a full mix is another matter.)


> > (1)send voice-on event at timestamp X. This indicates a note is
> > to start.
> >
> > (2)send parameter-set events also at timestamp X, these are
> > guaranteed to
>
> This is also for debate - David dislikes (and I agree) the notion
> that you have to send a note-on but the plugin does not have enough
> info to handle (for example) a velocity-mapped sampler until later.
>  Process events in order.  So a few ideas are on the table.

Yeah, it's basically a matter of allocation - either of a physical 
voice, or a temporary "fake voice" or something else that can track 
the incoming events until a physical voice is allocated.

I think different synths will want to do this in different ways, but 
we need to come up with an API that's clean and works well for all 
sensible methods.


> > (4)send voice-off event at later time to end the note and free
> > the voice.
>
> And what of step-sequencing, where you send a note-on and never a
> note-off? Finite duration voices (samples, drum hits, etc) end
> spontaneously.  Does the plugin tell the host about it, or just let
> the VVID leak?

Either tell the host/sender when the VVID is free (which means you 
need a VVID reserve that has some sort of hard to define relation to 
latency), or let the host/sender tell the synth when it no longer 
cares about a specific "voice context". I strongly prefer the latter.


> > When the plugin reads the voice-on event at timestamp X it
> > decides whether to allocate a voice or not. If it has an
> > initialization routine for voice-on events, then the plugin must
> > read through the remaining events with timestamp X to get
> > initialization arguments. The plugin must delay actually
>
> I guess it may not be too bad.  Plugins which need some init info
> (such as velocity for the velo-map) know they need this, and can
> look for that info. Other plugins for whom an init event is no
> different than a continuous event just go about their merry way.
>
> But before we all decide this, I want to explore other notions. 
> I'm not saying my negative Voice-ID thing is great, but I rather
> like the idea that Voice-Ids mean something and are almost purely
> the domain of the plugin.

The problem is that voice IDs still cannot have a direct relation to 
voices. If they have, you can't steal voices, since that would result 
in the host/sender s

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 04.24, Tim Hockin wrote:
> > Two problems solved! (Well, almost... Still need temporary space
> > for the "parameters", unless voice allocation is
> > non-conditional.)
>
> I think if you get an allocate voice event, you MUST get a voice on
> event.

Why? Isn't a "voice on event" actually just "something" that makes 
the synth activate a voice?


[...]
> > Why would the host (or rather, sender) care about the VVIDs that
> > are *not* active? (Remember; "Kill All Notes" is a Channel
> > Control, and if you want per-voice note killing, you simply keep
> > your VVIDs until you're done with them - as always.)
>
> It wouldn't - if the host has a limit of 128 voice polyphony, it
> keeps a hash or array of 128 VVIDs.  There is a per-instrument (or
> per-channel) next_vvid variable.  Whenever host wants a new voice
> on an instrument it finds an empty slot on the VVID table (or the
> oldest VVID if full) and sets it to next_vvid++.  That is then the
> VVID for the new voice.  If we had to steal one, it's because the
> user went too far.  In that case, VOICE(oldest_vvid, 0) is probably
> acceptable.
>
> The plugin sees a stream of new VVIDs (maybe wrapping every 2^32
> notes - probably OK).  It has it's own internal rules about voice
> allocation, and probably has less polyphony than 128 (or whatever
> the host sets).  It can do smart voice stealing (though the LRU
> algorithm the host uses is probably good enough).  It hashes VVIDs
> in the 0-2^32 namespace on its real voices internally.  You only
> re-use VVIDs every 2^32 notes.

Ok, but I don't see the advantage of this, vs explicitly assigning 
preallocated VVIDs to new voices. All I see is a rather significant 
performance hit when looking up voices.
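For concreteness, the host-side table scheme quoted above might look roughly like this (all names hypothetical; 4 slots instead of 128 to keep it short): a fixed table of active VVIDs plus a monotonically increasing 32-bit counter. A new note takes a free slot, or evicts the oldest entry when full, and gets the next VVID - so the synth sees a stream of fresh IDs that only wrap every 2^32 notes.

```c
#include <stdint.h>

/* Sketch of the "next_vvid++ with LRU reuse" host allocator. */

#define SLOTS 4                   /* illustrative; Tim suggested 128 */

static uint32_t table[SLOTS];     /* 0 = empty slot                  */
static uint32_t next_vvid = 1;

static uint32_t new_note_vvid(void)
{
    int victim = 0;
    for (int i = 0; i < SLOTS; i++) {
        if (table[i] == 0) { victim = i; break; }   /* free slot      */
        if (table[i] < table[victim]) victim = i;   /* else evict LRU */
    }
    table[victim] = next_vvid++;  /* lowest VVID == oldest note       */
    return table[victim];
}
```

Because VVIDs are assigned in increasing order, the slot holding the smallest VVID is always the oldest note, so "evict LRU" is just a minimum scan - though, as noted in the reply above, the synth still pays a lookup cost to map these sparse IDs onto its real voices.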


> > > VELOCITY can be continuous - as you pointed out with strings
> > > and such.  The creation of a voice must be separate in the API,
> > > I think.
> >
> > Why? It's up to the *instrument* to decide when the string (or
> > whatever) actually starts to vibrate, isn't it? (Could be
> > VELOCITY >= 0.5, or whatever!) Then, what use is it for
> > hosts/senders to try to figure out where notes start and end?
>
> And for a continuous velocity instrument, how do you make a new
> voice?

Just grab a new VVID and start playing. The synth will decide when a 
physical voice should be used, just as it decides what exactly to do 
with that physical voice.


> And why is velocity becoming special again?

It isn't. VELOCITY was just an example; any control - or a 
combination of controls - would do.


> I think voice-on/off is well understood and applies pretty well to
> everything.  I am all for inventing new concepts, but this will be
> more confusing than useful, I think.  Open to convincing, but
> dubious.

I think the voice on/off concept is little more than a MIDIism. Since 
basic MIDI does not support continuous velocity at all, it makes sense 
to merge "note on" and "velocity" into one, and assume that a note 
starts when the resulting message is received.

With continuous velocity, it is no longer obvious when the synth 
should actually start playing. Consequently, it seems like wasted 
code to have the host/sender "guess" when the synth might want to 
allocate or free voices, since the synth may ignore that information 
anyway. This is why the explicit note on/off logic seems broken to me.

Note that using an event for VVID allocation has very little to do 
with this. VVID allocation is just a way for the host/sender to 
explicitly tell the synth when it's talking about a new "voice 
context" without somehow grabbing a new VVID. It doesn't imply 
anything about physical voice allocation.


[...]
> Block start:
>   time X: voice(-1, ALLOC)/* a new voice is coming */
>   time X: velocity(-1, 100)   /* set init controls */
>   time X: voice(-1, ON)   /* start the voice */
>   time X: (plugin sends host 'voice -1 = 16')
>   time Y: voice(-2, ALLOC)
>   time Y: velocity(-2, 66)
>   time Y: voice(-2, ON)
>   time Y: (plugin sends host 'voice -2 = 17')
>
> From then out the host uses the plugin-allocated voice-ids.  We get
> a large (all negative numbers) namespace for new notes per block.

Short term VVIDs, basically. (Which means there will be voice 
marking, LUTs or similar internally in synths.)
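The negative-ID handshake Tim sketches in the quoted trace could be modelled roughly as follows (purely illustrative names and behaviour - this scheme was never adopted as-is): within a block the host refers to new notes by temporary negative IDs, the plugin answers with its own positive voice ID, and the host records the mapping for later blocks.

```c
/* Sketch: host-side temporary negative IDs resolved to plugin voice IDs. */

#define MAX_PENDING 8

static int pending[MAX_PENDING];  /* pending[n] = plugin ID for temp -(n+1) */

/* Plugin side: hand out real voice IDs (here just a counter, starting
 * at 16 to mirror the trace above). */
static int plugin_next_id = 16;
static int plugin_alloc_voice(void) { return plugin_next_id++; }

/* Host side: voice-on with a temporary negative ID, resolved at once. */
static int host_voice_on(int temp_id)
{
    int real = plugin_alloc_voice();
    pending[-temp_id - 1] = real; /* remember -1 -> 16, -2 -> 17, ...  */
    return real;
}
```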


> We get plugin-specific voice-ids (no hashing/translating).

Actually, you *always* need to do some sort of translation if you 
have anything but actual voice indices. Also note that there must be 
a way to assign voice IDs to non-voices (ie NULL voices) or similar, 
when running out of physical voices.


>  Plugin
> handles voice stealing in a plugin specific way (ask for a voice,
> it's all full, it returns a voice-id you already have and
> internally sends voice_off).

You can never return an in-use voice ID, unless the sender is 
supposed to check every returned voice ID. Better return an invalid 
voice ID or something...


> I found it ugl

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 12:15:40 -0800, Tim Hockin wrote:
> > continuous note on a particular VVID. The sequencer only reuses a VVID once 
> > it has ended any previous notes on that VVID. The sequencer can allocate a 
> 
> This is the current issue at hand - just because the sequencer has ended a
> VVID with a voice-off, doesn't mean the voice is off.  It just begins the
> release phase of the envelope, on some synths.  This could take a while.
> Either you need to NEVER re-use a VVID, or you need to tell the host when an
> ended VVID is actually re-usable.  Or you need to have voice-ids allocated
> by the plugin, and NOT the host, which I like more.

Having the plugins allocate them is a pain; it's much easier if the host
allocates them, and just does so from a sufficiently large pool. If you
have 2^32 host VVIDs per instrument you can just round-robin them.
 
> This is also for debate - David dislikes (and I agree) the notion that you
> have to send a note-on but the plugin does not have enough info to handle
> (for example) a velocity-mapped sampler until later.  Process events in
> order.  So a few ideas are on the table.

You do have enough information, it's just that it may be superseded later.
Velocity can have a default.

One great thing about this scheme is that it encourages people not to
think of certain, arbitrary parameters as instantiation parameters, which
are special in some way, 'cos they're not.

- Steve 



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Mon, Jan 06, 2003 at 11:17:07 +0100, David Olofson wrote:
> These "instantiation parameters" are in fact just control events, and 
> they relate to "whatever voice is assigned to the provided VVID".
> 
> The issue here is this: Where does control data go when there is no 
> voice assigned to the VVID?

They get thrown away.
 
> What I'm saying is that if you send the "trigger" event first, 
> followed by the "parameters", you require synths to process a number 
> of control events *before* actually performing the trigger action. 
> That simply does not mix with the way events are supposed to be 
> handled.

Well, I think it's OK, because the note will not be used to render any
samples until after all the events have been processed.
 
> If you really want your voice allocator to be able to turn down 
> requests based on "parameters"

I think this would be complex.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:21:22 +0100, David Olofson wrote:
> > > Problem is step 1. If the voice allocator looks at velocity, it
> > > won't work, since that information is not available when you do
> > > the allocation. Likewise for setting up waveforms with velocity
> > > maps and the like.

But in the general case this is just mapping to parameters, you have to be
able to handle parameter changes during the run of the instrument, so why
not at creation time too?

for samples
    if (event)
        if (event == new_voice)
            voice = allocate_voice()
        if (event == velocity)
            voice.set_voice_map(event)
    out = voice.run()

> > > When are you supposed to do that sort of stuff? VOICE_ON is what
> > > triggers it in a normal synth, but with this scheme, you have to
> > > wait for some vaguely defined "all parameters available" point.
> >
> > So maybe VOICE creation needs to be a three-step process.
> > * Allocate voice
> > * Set initial voice-controls
> > * Voice on

I think this is harder to handle.

> >  This is essentially saying that initial parameters are
> > 'special', and they are in many ways (I'm sure velocity maps are
> > just one case).
> 
> Yes; there can be a whole lot of such parameters for percussion 
> instruments, for example. (Drums, cymbals, marimba etc...)

I still don't think they're special. Velocity maps only behave this way in
MIDI because you can't change velocity during the note in MIDI; you still
need to be able to call up the map instantly, so it doesn't matter if you
don't know the map at the point the note is 'created'.
 
> > Or we can make the rule that you do not choose an entry in a
> > velocity map until you start PROCESSING a voice, not when you
> > create it.  VOICE_ON is a placeholder.  The plugin should see that
> > a voice is on that has no velocity-map entry and deal with it when
> > processing starts.  Maybe not.
> 
> No, I think that's just moving the problem deeper into synth 
> implementations.

Why? You can create it with the map for velocity=0.0 or whatever, and
change it if needed. This seems like it will lead to cleaner instrument
code.
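
Steve's suggestion might be sketched like this in Python (everything here -
`pick_map`, the map thresholds, the `Voice` class - is hypothetical
illustration, not proposed XAP API): the voice comes up on the velocity=0.0
map, and a later velocity event simply re-selects the map like any other
control change.

```python
# Hypothetical sketch of "create with the velocity=0.0 map, swap later".
# The map table and all names are invented; only the control flow matters.

VELOCITY_MAPS = {0.0: "soft_sample", 0.5: "medium_sample", 1.0: "hard_sample"}

def pick_map(velocity):
    """Pick the entry with the highest threshold not above the velocity."""
    return VELOCITY_MAPS[max(k for k in VELOCITY_MAPS if k <= velocity)]

class Voice:
    def __init__(self):
        # Created before velocity is known: start from the velocity=0.0 map.
        self.sample = pick_map(0.0)

    def on_velocity(self, velocity):
        # A velocity event is just another control change; re-select the map.
        self.sample = pick_map(velocity)

voice = Voice()          # allocated on the "new voice" event
voice.on_velocity(0.7)   # velocity arrives later, even mid-note
```

With this shape the instrument code has no special "initializer" path at
all, which seems to be Steve's point.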

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> continuous note on a particular VVID. The sequencer only reuses a VVID once 
> it has ended any previous notes on that VVID. The sequencer can allocate a 

This is the current issue at hand - just because the sequencer has ended a
VVID with a voice-off, doesn't mean the voice is off.  It just begins the
release phase of the envelope, on some synths.  This could take a while.
Either you need to NEVER re-use a VVID, or you need to tell the host when an
ended VVID is actually re-usable.  Or you need to have voice-ids allocated
by the plugin, and NOT the host, which I like more.

> My underlying assumptions are:

I made a post a while back defining all the XAP terminology to date.  Read
it if you haven't - it is useful :)

> -DEFINITION: the individual voices produce finite periods of sound which we 
> call notes. A note is the sound that a voice makes between a Voice-On event 
> and a Voice-Off event (provided that the voice is not reappropriated in the 
> middle to make a different note)

We've been avoiding the word 'note' because it is too specific.  The
lifetime of a voice is either finite or open-ended.  A synth would be
open-ended.  A gong hit would be finite.

> -HOWEVER- there is no rule that a note has any pitch or velocity or any 
> other particular parameter, it is just that the Voice-On tells the voice to 
> start making sound and the Voice-Off tells the voice to stop making sound.

Correct.  Though David is half-heartedly arguing that maybe Velocity is the
true note-on.  I disagree.

> -ALSO HOWEVER- the entity which sends voice-on and off messages may not 
> directly refer to the object's voices. Instead, the event sender puts 

...in the currently understood VVID model.

> voices to do. This differential is in place because sequencers typically 
> send more concurrent notes than the plugin has actual voices for AND the 

On the contrary, hopefully you will rarely exceed the max polyphony for each
instrument.

> other words, it is the role of the plugin to decide whether or not to steal 

I believe you should always steal notes, but I suppose there will be some
instrument plugin some lunatic on here will devise that does not follow
that.  Newer notes are always more important than older notes, but if you
exceed max poly, a red light should go off!

> (1)send voice-on event at timestamp X. This indicates a note is to start.
> 
> (2)send parameter-set events also at timestamp X, these are guaranteed to 

This is also for debate - David dislikes (and I agree) the notion that you
have to send a note-on but the plugin does not have enough info to handle
(for example) a velocity-mapped sampler until later.  Process events in
order.  So a few ideas are on the table.

> (4)send voice-off event at later time to end the note and free the voice.

And what of step-sequencing, where you send a note-on and never a note-off?
Finite-duration voices (samples, drum hits, etc.) end spontaneously.  Does
the plugin tell the host about it, or just let the VVID leak?

> When the plugin reads the voice-on event at timestamp X it decides whether 
> to allocate a voice or not. If it has an initialization routine for voice-on 
> events, then the plugin must read through the remaining events with 
> timestamp X to get initialization arguments. The plugin must delay actually 

I guess it may not be too bad.  Plugins which need some init info (such as
velocity for the velo-map) know they need this, and can look for that info.
Other plugins for whom an init event is no different than a continuous event
just go about their merry way.

But before we all decide this, I want to explore other notions.  I'm not
saying my negative Voice-ID thing is great, but I rather like the idea that
Voice-Ids mean something and are almost purely the domain of the plugin.

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread robbins jacob
My understanding of VVID's is that the sequencer puts one complete, 
continuous note on a particular VVID. The sequencer only reuses a VVID once 
it has ended any previous notes on that VVID. The sequencer can allocate a 
large number of VVIDs so that it never has to make a voice stealing decision 
on its end (and so we don't have to make roundtrips). This large allocation 
means that the plugin should never try to allocate a significantly sized 
structure for each VVID. Instead, the plugin should match VVIDs to actual 
voices as incoming voice-on messages are received until all actual voices 
are used. After all voices are in use, the plugin has to decide whether to 
steal voices from ongoing notes or deny voice-on events.  Voice stealing 
decisions are properly made by the plugin. This is the case even if the 
sequencer knows how many actual voices there are because the plugin has much 
more intimate knowledge of the nature of the voices: their amplitude, timbre 
etc.

My underlying assumptions are:

-a single object resides in each channel, be it a piano, a gong, or 
whatever; there is one object in the channel

-HOWEVER, that single object may be polyphonic; the piano may be able to 
sound multiple notes concurrently, the gong may be able to sound two quick 
strokes in succession which overlap in their duration.

-DEFINITION: We call the facility for making ONE of those sounds a voice.

-DEFINITION: the individual voices produce finite periods of sound which we 
call notes. A note is the sound that a voice makes between a Voice-On event 
and a Voice-Off event (provided that the voice is not reappropriated in the 
middle to make a different note)

-HOWEVER- there is no rule that a note has any pitch or velocity or any 
other particular parameter, it is just that the Voice-On tells the voice to 
start making sound and the Voice-Off tells the voice to stop making sound.

-ALSO HOWEVER- the entity which sends voice-on and off messages may not 
directly refer to the object's voices. Instead, the event sender puts 
separate notes on separate Virtual Voice IDs to indicate what it desires the 
voices to do. This differential is in place because sequencers typically 
send more concurrent notes than the plugin has actual voices for AND the 
plugin is better suited to decide how to allocate those scarce resources. In 
other words, it is the role of the plugin to decide whether or not to steal 
a voice for a new note and which voice to steal. So the sequencer sends out 
notes in fantasy-land VVID notation where they never ever have to overlap, 
and the plugin decides how best to play those notes using the limited number 
of voices it has.


  As I see it, the procedure for using a voice via a particular VVID is as 
follows (note that all events mentioned are assumed to have a particular 
VVID):

(1)send voice-on event at timestamp X. This indicates a note is to start.

(2)send parameter-set events also at timestamp X, these are guaranteed to 
follow the voice-on event even though they have the same timestamp because 
the event ordering specifies it. These parameter-set events are to be 
considered voice initializers should the plugin support such a concept, 
otherwise they are the first regular events to affect this note.

(3)send parameter-set events at later times to modify the note as it 
progresses.

(4)send voice-off event at later time to end the note and free the voice.
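
The four steps above, seen from the sender's side, might be sketched like
this in Python (the event tuples and field names are invented for
illustration, not a proposed wire format):

```python
# Hypothetical sender-side sketch of steps (1)-(4); the (timestamp, kind,
# vvid, payload) tuples are illustrative only.

def note_events(vvid, start, end, pitch, velocity):
    """Yield the event stream for one complete note on one VVID."""
    yield (start, "voice_on", vvid, None)                 # (1) start the note
    yield (start, "param", vvid, ("pitch", pitch))        # (2) initializers at
    yield (start, "param", vvid, ("velocity", velocity))  #     the same stamp
    yield (end, "voice_off", vvid, None)                  # (4) end, free voice

events = list(note_events(vvid=7, start=0, end=480, pitch=60.0, velocity=0.8))
```

Event ordering gives (2) for free: the param events follow the voice-on
because they are emitted after it at the same timestamp.  Step (3) would
just be more "param" tuples at later timestamps.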


When the plugin reads the voice-on event at timestamp X it decides whether 
to allocate a voice or not. If it has an initialization routine for voice-on 
events, then the plugin must read through the remaining events with 
timestamp X to get initialization arguments. The plugin must delay actually 
initializing the voice until it has read the other events at the same 
timestamp as the voice-on event. If the plugin doesn't do any special 
initialization procedures then it doesn't have to worry about this because 
the events concurrent with the voice-on event can just be applied in the 
same manner as later param-set events.


--jacob robbins:. soundtank..








Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread Tim Hockin
> Two problems solved! (Well, almost... Still need temporary space for 
> the "parameters", unless voice allocation is non-conditional.)

I think if you get an allocate voice event, you MUST get a voice on event.

> That doesn't really work for normal polyphonic instruments, unless 
> the host *fully* understands the synth's voice allocation rules, 
> release envelopes and whatnot. The polyphonic synth is effectively 
> reduced to a tracker style synth with N monophonic channels.



> > two-way dialogue, or you have use-once VVIDs.  Maybe this is OK -
> > 2^32 VVIDs per synth. The host only really needs to store a small
> > number - the active list.
> 
> Why would the host (or rather, sender) care about the VVIDs that are 
> *not* active? (Remember; "Kill All Notes" is a Channel Control, and 
> if you want per-voice note killing, you simply keep your VVIDs until 
> you're done with them - as always.)

It wouldn't - if the host has a limit of 128-voice polyphony, it keeps a
hash or array of 128 VVIDs.  There is a per-instrument (or per-channel)
next_vvid variable.  Whenever the host wants a new voice on an instrument,
it finds an empty slot in the VVID table (or the oldest VVID if full) and
sets it to next_vvid++.  That is then the VVID for the new voice.  If we had
to steal one, it's because the user went too far.  In that case,
VOICE(oldest_vvid, 0) is probably acceptable.

The plugin sees a stream of new VVIDs (maybe wrapping every 2^32 notes -
probably OK).  It has its own internal rules about voice allocation, and
probably has less polyphony than 128 (or whatever the host sets).  It can do
smart voice stealing (though the LRU algorithm the host uses is probably
good enough).  It hashes VVIDs in the 0-2^32 namespace onto its real voices
internally.  You only re-use VVIDs every 2^32 notes.
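
A hypothetical Python sketch of that host-side bookkeeping (the fixed-size
table, the next_vvid counter and the oldest-first stealing come from the
paragraph above; the class and method names are made up):

```python
class HostVVIDTable:
    """Host-side VVID bookkeeping: a fixed-size table of active VVIDs plus
    a monotonically increasing next_vvid counter, as described above."""

    def __init__(self, max_poly=128):
        self.slots = [None] * max_poly   # active VVIDs, one per slot
        self.order = []                  # slot indices, oldest first
        self.next_vvid = 0

    def new_voice(self):
        """Return (vvid, stolen_vvid).  stolen_vvid is the oldest VVID if
        the table was full, in which case the host would also send
        VOICE(stolen_vvid, 0) before using the new VVID."""
        stolen = None
        if None in self.slots:
            slot = self.slots.index(None)
        else:
            slot = self.order.pop(0)     # table full: steal the oldest slot
            stolen = self.slots[slot]
        vvid = self.next_vvid
        self.next_vvid += 1              # wraps every 2^32 notes in practice
        self.slots[slot] = vvid
        self.order.append(slot)
        return vvid, stolen

table = HostVVIDTable(max_poly=2)
vvid, stolen = table.new_voice()         # first voice: VVID 0, nothing stolen
```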

> > VELOCITY can be continuous - as you pointed out with strings and
> > such.  The creation of a voice must be separate in the API, I
> > think.
> 
> Why? It's up to the *instrument* to decide when the string (or 
> whatever) actually starts to vibrate, isn't it? (Could be VELOCITY >= 
> 0.5, or whatever!) Then, what use is it for hosts/senders to try to 
> figure out where notes start and end?

And for a continuous velocity instrument, how do you make a new voice?  And
why is velocity becoming special again?

I think voice-on/off is well understood and applies pretty well to
everything.  I am all for inventing new concepts, but this will be more
confusing than useful, I think.  Open to convincing, but dubious.

> Of course - but I don't think it has to be a two-way dialog for this 
> reason. And I don't think a two-way dialog can work very well in this 
> context anyway. Either you have to bypass the event system, or you 
> have to allow for quite substantial latency in the feedback direction.

I once had the idea that you send all your events in a given block with
negative voice-ids.  The plugin responds with the proper per-plugin
voice-id.

Block start:
  time X:   voice(-1, ALLOC)/* a new voice is coming */
  time X:   velocity(-1, 100)   /* set init controls */
  time X:   voice(-1, ON)   /* start the voice */
  time X:   (plugin sends host 'voice -1 = 16')
  time Y:   voice(-2, ALLOC)
  time Y:   velocity(-2, 66)
  time Y:   voice(-2, ON)
  time Y:   (plugin sends host 'voice -2 = 17')

From then on, the host uses the plugin-allocated voice-ids.  We get a large
(all negative numbers) namespace for new notes per block.  We get
plugin-specific voice-ids (no hashing/translating).  Plugin handles voice
stealing in a plugin specific way (ask for a voice, it's all full, it returns
a voice-id you already have and internally sends voice_off).
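
The plugin side of that negative-id handshake might look like this (a
Python sketch; the allocator and its oldest-first stealing policy are
invented for illustration):

```python
# Hypothetical plugin-side sketch of the negative voice-id handshake above;
# nothing here is real XAP API.

class Plugin:
    def __init__(self, max_poly):
        self.free = list(range(max_poly))
        self.active = []                  # plugin voice-ids, oldest first

    def voice_alloc(self, temp_id):
        """Host sent voice(temp_id, ALLOC) with temp_id < 0; reply with the
        real plugin voice-id.  If all voices are in use, return an id the
        host already has (the oldest), stealing it internally."""
        assert temp_id < 0
        if self.free:
            vid = self.free.pop()
        else:
            vid = self.active.pop(0)      # steal oldest; plugin voice_offs it
        self.active.append(vid)
        return vid                        # "plugin sends host 'voice tid = vid'"

plugin = Plugin(max_poly=2)
mapping = {tid: plugin.voice_alloc(tid) for tid in (-1, -2)}
```

Note how the stealing case falls out naturally: the returned id collides
with one the host already holds, which is exactly the signal described
above.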

I found it ugly at first, but I like that it is all plugin-specific.  Poke
holes in it?

> Note that when a synth starts stealing voices, that's actually *error 
> handling* going on. If a soft synth with say, 32 physical and 32 

Not necessarily.  I've played with soft-synths with no envelope control,
which I wanted to be mono.  I set maxpoly to 1, and let it cut off tails
automatically.  It isn't _necessarily_ an error.

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread David Olofson
On Tuesday 07 January 2003 01.55, Tim Hockin wrote:
> > > 1. Recv VOICE_ON, allocate a voice struct
> > > 2. Recv VELOCITY, set velocity in voice struct
> > > 3. No more events for timestamp X, next is timestamp Y
> > > 4. Process audio until stamp Y
> > > 4.a. Start playing the voice with given velocity
> >
> > Problem is step 1. If the voice allocator looks at velocity, it
> > won't work, since that information is not available when you do
> > the allocation. Likewise for setting up waveforms with velocity
> > maps and the like.
> >
> > When are you supposed to do that sort of stuff? VOICE_ON is what
> > triggers it in a normal synth, but with this scheme, you have to
> > wait for some vaguely defined "all parameters available" point.
>
> So maybe VOICE creation needs to be a three-step process.
> * Allocate voice
> * Set initial voice-controls
> * Voice on
>
> This way the instrument is alerted to the fact that a new voice is
> being created without deciding which entry in the velocity map to
> use.

...and then there's no need for a DETACH_VVID event either, as the 
"allocate voice" event implies that whatever VVID it uses will be 
used for a new note from now on, regardless of what that VVID might 
have been used for before.

Two problems solved! (Well, almost... Still need temporary space for 
the "parameters", unless voice allocation is non-conditional.)


>  This is essentially saying that initial parameters are
> 'special', and they are in many-ways (I'm sure velocity maps are
> just one case).

Yes; there can be a whole lot of such parameters for percussion 
instruments, for example. (Drums, cymbals, marimba etc...)


> Or we can make the rule that you do not choose an entry in a
> velocity map until you start PROCESSING a voice, not when you
> create it.  VOICE_ON is a placeholder.  The plugin should see that
> a voice is on that has no velocity-map entry and deal with it when
> processing starts.  Maybe not.

No, I think that's just moving the problem deeper into synth 
implementations.


[...]
> > Actually, it doesn't know anything about that. The physical
> > VVID->voice mapping is a synth implementation thing, and is
> > entirely dependent on how the synth manages voices.
> > s/voice/VVID/, and you get closer to what VVIDs are about.
>
> But it COULD.  This could become more exported.  The plugin tells
> the host what it's max polyphony is (a POLYPHONY control?).  The
> host manages voices 0 to (MAX_POLY-1) for each synth.

That doesn't really work for normal polyphonic instruments, unless 
the host *fully* understands the synth's voice allocation rules, 
release envelopes and whatnot. The polyphonic synth is effectively 
reduced to a tracker style synth with N monophonic channels.

The POLYPHONY could be interesting, though, but I think most things 
you can do with it should really be done with monophonic channels...


> > > 0-VVID is just so you can have one control for voice on and
> > > off. Positive means ON, negative means OFF.  abs(event->vvid)
> > > is the VVID.
> >
> > Ok. Why not just use the "value" field instead, like normal Voice
> > Controls? :-)
>
> Because VOICE is actually a channel control?

But it's really just a per-Voice on/off switch, right? (It's just 
that you don't address voices directly, but you never do that anyway.)


>  I dunno, being thick,
> probably.  :)  VOICE(vid, 1) and VOICE(vid, 0) are the notation I
> will use to indicate that a voice 'vid' has been turned on or off.
> :)

Ok. Well, those look like Voice Controls to me. :-)


> > Not really - but whoever *sends* to the synth will care, when
> > running out of VVIDs. (Unless it's a MIDI based sequencer, VVID
> > management isn't as easy as "one VVID per MIDI pitch value".)
>
> Ahh, this does get interesting.

Yep. I ran into it when I started messing with VVIDs in Audiality. 
For the MIDI sequencer, one can just grab one VVID for each MIDI 
pitch for each channel and be done with it, but for the future 
"native" sequencer (no MIDI crap), there won't be any fixed limit on 
the number of voices you can control on one channel, and no fixed 
relation between "pitch" and "ID"...


> > way you can know when it's safe to reuse a VVID. (Release
> > envelopes...) Polling the synth for voice status, or having
> > synths return voice status events doesn't seem very nice to me.
> > The very
>
> It seems to me that voice allocation and de-allocation HAS to be a
> two-way dialogue, or you have use-once VVIDs.  Maybe this is OK -
> 2^32 VVIDs per synth. The host only really needs to store a small
> number - the active list.

Why would the host (or rather, sender) care about the VVIDs that are 
*not* active? (Remember; "Kill All Notes" is a Channel Control, and 
if you want per-voice note killing, you simply keep your VVIDs until 
you're done with them - as always.)

My point is that when the host/sender no longer cares about a voice, 
it can grab the VVID and tell the synth to allocate a new voice for 
it. Whether or not

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread Tim Hockin
> > 1. Recv VOICE_ON, allocate a voice struct
> > 2. Recv VELOCITY, set velocity in voice struct
> > 3. No more events for timestamp X, next is timestamp Y
> > 4. Process audio until stamp Y
> > 4.a. Start playing the voice with given velocity
> 
> Problem is step 1. If the voice allocator looks at velocity, it won't 
> work, since that information is not available when you do the 
> allocation. Likewise for setting up waveforms with velocity maps and 
> the like.
> 
> When are you supposed to do that sort of stuff? VOICE_ON is what 
> triggers it in a normal synth, but with this scheme, you have to wait 
> for some vaguely defined "all parameters available" point.

So maybe VOICE creation needs to be a three-step process.
* Allocate voice
* Set initial voice-controls
* Voice on

This way the instrument is alerted to the fact that a new voice is being
created without deciding which entry in the velocity map to use.  This is
essentially saying that initial parameters are 'special', and they are in
many-ways (I'm sure velocity maps are just one case).

Or we can make the rule that you do not choose an entry in a velocity map
until you start PROCESSING a voice, not when you create it.  VOICE_ON is a
placeholder.  The plugin should see that a voice is on that has no
velocity-map entry and deal with it when processing starts.  Maybe not.

Needs thinking


> > Host: I know SYNTH has voices 1, 2, 3 active.  So I send params for
> > voice 4.
> 
> Actually, it doesn't know anything about that. The physical 
> VVID->voice mapping is a synth implementation thing, and is entirely 
> dependent on how the synth manages voices. s/voice/VVID/, and you get 
> closer to what VVIDs are about.

But it COULD.  This could become more exported.  The plugin tells the host
what it's max polyphony is (a POLYPHONY control?).  The host manages voices
0 to (MAX_POLY-1) for each synth.

> > 0-VVID is just so you can have one control for voice on and off. 
> > Positive means ON, negative means OFF.  abs(event->vvid) is the
> > VVID.
> 
> Ok. Why not just use the "value" field instead, like normal Voice 
> Controls? :-)

Because VOICE is actually a channel control?  I dunno, being thick,
probably.  :)  VOICE(vid, 1) and VOICE(vid, 0) are the notation I will use
to indicate that a voice 'vid' has been turned on or off. :)

> Not really - but whoever *sends* to the synth will care, when running 
> out of VVIDs. (Unless it's a MIDI based sequencer, VVID management 
> isn't as easy as "one VVID per MIDI pitch value".)

Ahh, this does get interesting.

> way you can know when it's safe to reuse a VVID. (Release 
> envelopes...) Polling the synth for voice status, or having synths 
> return voice status events doesn't seem very nice to me. The very 

It seems to me that voice allocation and de-allocation HAS to be a two-way
dialogue, or you have use-once VVIDs.  Maybe this is OK - 2^32 VVIDs per
synth.  The host only really needs to store a small number - the active
list.  Obviously it can't be a linear index EVER, but it makes a fine hash
key or index modulo N.

>   1) Voice Control. (Keep the VVID for as long as you need it!)
> 
>   2) Channel Control. ("Kill All Notes" type of controls.)

My header already has a 'stop all sound' event.  :)

> Although the VOICE control might actually be the VELOCITY control, 
> where anything non-0 means "on"... A specific, non-optional VOICE 
> control doesn't make sense for all types of instruments, but there 
> may be implementational reasons to have it anyway; not sure yet.

VELOCITY can be continuous - as you pointed out with strings and such.  The
creation of a voice must be separate in the API, I think.

This all needs more thinking.  I haven't had too much time to think on these
hard subjects the past two weeks, and I might not for a few more.  I'll try
to usurp work-time when I can :)

This all leads me back to my original thoughts, that the voice-management
MUST be a two-way dialog.  I don't like the idea of use-once VVIDs because
eventually SOMEONE will hit the limit.  I hate limits :)

more thought needed

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread David Olofson
On Tuesday 07 January 2003 00.31, Tim Hockin wrote:
> > Problem is that that requires special case event processing in
> > synths. You'll need an inner event decoding loop just to get the
> > parameters, before you can go on with voice instantiation.
>
> You always process all events for timestamp X before you generate
> X's audio right?

Yes, that's true. (You loop until you see a different timestamp.)


> 1. Recv VOICE_ON, allocate a voice struct
> 2. Recv VELOCITY, set velocity in voice struct
> 3. No more events for timestamp X, next is timestamp Y
> 4. Process audio until stamp Y
> 4.a. Start playing the voice with given velocity

Problem is step 1. If the voice allocator looks at velocity, it won't 
work, since that information is not available when you do the 
allocation. Likewise for setting up waveforms with velocity maps and 
the like.

When are you supposed to do that sort of stuff? VOICE_ON is what 
triggers it in a normal synth, but with this scheme, you have to wait 
for some vaguely defined "all parameters available" point.


> > The alternative indeed means that you have to find somewhere to
> > store the parameters until the "VOICE_ON" arrives - but that
> > doesn't screw up the API, and in fact, it's not an issue at all
> > until you get into voice stealing. (If you just grab a voice when
> > you get a "new" VVID, parameters will work like any controls - no
> > extra work or special cases.)
>
> All this is much easier if synths pre-allocate their internal voice
> structs.

I'm assuming that they do this at all times. There is no other 
reliable way of doing it.

The problem is when the sender uses more VVIDs than there are voices, 
or when voices linger in the release phase. That is, the problem is 
voice stealing occurs - or rather, that there may not always be a 
physical voice "object" for each in-use VVID.


> Host: I know SYNTH has voices 1, 2, 3 active.  So I send params for
> voice 4.

Actually, it doesn't know anything about that. The physical 
VVID->voice mapping is a synth implementation thing, and is entirely 
dependent on how the synth manages voices. s/voice/VVID/, and you get 
closer to what VVIDs are about.


> How does VST handle this?

Same way as MIDI synths; MIDI pitch == note ID.


> > > Send 0-VVID to the VOICE control with timestamp Y for note-off
> >
> > I'm not quite following with the 0-VVID thing here... (The VVID
> > *is*
>
> 0-VVID is just so you can have one control for voice on and off. 
> Positive means ON, negative means OFF.  abs(event->vvid) is the
> VVID.

Ok. Why not just use the "value" field instead, like normal Voice 
Controls? :-)


> > can tell the synth that you have nothing further to say to this
> > Voice by implying that NOTE_OFF means "I will no longer use this
> > VVID to address whatever voice it's connected to now." Is that
> > the idea?
>
> The synth doesn't care if you have nothing further to say.

Not really - but whoever *sends* to the synth will care, when running 
out of VVIDs. (Unless it's a MIDI based sequencer, VVID management 
isn't as easy as "one VVID per MIDI pitch value".)

My point is that if you don't have a way of doing this, there's no 
way you can know when it's safe to reuse a VVID. (Release 
envelopes...) Polling the synth for voice status, or having synths 
return voice status events doesn't seem very nice to me. The very 
idea with VVIDs was to keep communication one way, so why not keep it 
that way as far as possible?


> Either
> it will hold the note forever (synth, violin, woodwind, etc) or it
> will end eventually on it's own (sample, drum, gong).

Yeah - and if you can't detach VVIDs, you have to find out when this 
happens, which requires synth->sender feedback. (Or you basically 
cannot safely reuse a VVID, ever, once you've used it to play a note.)


> You do,
> however want to be able to shut off continuous notes and to
> terminate self-ending voices (hi-hat being hit again, or a crash
> cymbal being grabbed).

Yes - but that falls in one of two categories:

1) Voice Control. (Keep the VVID for as long as you need it!)

2) Channel Control. ("Kill All Notes" type of controls.)


> > If so, I would suggest that a special "DETACH_VVID" control/event
> > is used for this. There's no distinct relation between a note
> > being "on" and whether or not Voice Controls can be used. At
> > least with MIDI synths, it's really rather common that you want
> > Channel controls to affect *all* voices, even after NoteOff, and
> > I see no reason why our Channel Controls should be different.
>
> I don't get what this has to do with things - of course channel
> controls affect everything.  That is their nature.

Sorry, I meant to say "*Voice* Controls", of course...


> You're right
> that a note can end, and the host might still send events for it. 

Yep - and the note might *not* end, while the host (or rather, 
"whatever sends the events" - doesn't have to be the host, IMO) 
doesn't care, and just needs a "new

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread Tim Hockin
> Problem is that that requires special case event processing in 
> synths. You'll need an inner event decoding loop just to get the 
> parameters, before you can go on with voice instantiation.

You always process all events for timestamp X before you generate X's audio
right?

1. Recv VOICE_ON, allocate a voice struct
2. Recv VELOCITY, set velocity in voice struct
3. No more events for timestamp X, next is timestamp Y
4. Process audio until stamp Y
4.a. Start playing the voice with given velocity
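
The ordering rule behind steps 1-4a - drain every event that shares a
timestamp, then render audio up to the next one - can be sketched in Python
(the event tuples and names are hypothetical):

```python
# Hypothetical sketch of "process all events for timestamp X, then render".
# Events are (timestamp, kind, vvid, value) tuples, sorted by timestamp.

def run_block(events, render):
    """Apply every event that shares a timestamp before rendering any
    audio for that timestamp (steps 1-4a above)."""
    pending = {}                           # vvid -> voice struct being set up
    log = []
    i = 0
    while i < len(events):
        stamp = events[i][0]
        while i < len(events) and events[i][0] == stamp:
            _, kind, vvid, value = events[i]
            if kind == "voice_on":         # 1. allocate a voice struct
                pending[vvid] = {}
            elif kind == "velocity":       # 2. set velocity in the struct
                pending[vvid]["velocity"] = value
            i += 1
        # 3./4. no more events for this stamp: start/continue the voices
        log.append(render(stamp, dict(pending)))
    return log

out = run_block(
    [(0, "voice_on", 1, None), (0, "velocity", 1, 0.9)],
    lambda stamp, voices: (stamp, voices[1]["velocity"]),
)
```

By the time `render` runs, the voice struct is complete, which is the
property Tim is relying on; David's objection is that the voice *allocator*
in step 1 cannot see the velocity yet.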

> The alternative indeed means that you have to find somewhere to store 
> the parameters until the "VOICE_ON" arrives - but that doesn't screw 
> up the API, and in fact, it's not an issue at all until you get into 
> voice stealing. (If you just grab a voice when you get a "new" VVID, 
> parameters will work like any controls - no extra work or special 
> cases.)

All this is much easier if synths pre-allocate their internal voice structs.

Host: I know SYNTH has voices 1, 2, 3 active.  So I send params for voice 4.

How does VST handle this?

> > Send 0-VVID to the VOICE control with timestamp Y for note-off
> 
> I'm not quite following with the 0-VVID thing here... (The VVID *is* 

0-VVID is just so you can have one control for voice on and off.  Positive
means ON, negative means OFF.  abs(event->vvid) is the VVID.

> can tell the synth that you have nothing further to say to this Voice 
> by implying that NOTE_OFF means "I will no longer use this VVID to 
> address whatever voice it's connected to now." Is that the idea?

The synth doesn't care if you have nothing further to say.  Either it will
hold the note forever (synth, violin, woodwind, etc) or it will end
eventually on it's own (sample, drum, gong).  You do, however want to be
able to shut off continuous notes and to terminate self-ending voices
(hi-hat being hit again, or a crash cymbal being grabbed).

> If so, I would suggest that a special "DETACH_VVID" control/event is 
> used for this. There's no distinct relation between a note being "on" 
> and whether or not Voice Controls can be used. At least with MIDI 
> synths, it's really rather common that you want Channel controls to 
> affect *all* voices, even after NoteOff, and I see no reason why our 
> Channel Controls should be different.

I don't get what this has to do with things - of course channel controls
affect everything.  That is their nature.  You're right that a note can end,
and the host might still send events for it.  In that case, the plugin
should just drop them.  Imagine I piano-roll a 4 bar note with 50 different
velocity changes for a 1/2 second hihat sample.  The sample ends
spontaneously (the host did not call NOTE_OFF). The sequencer still has
events, and so keeps sending them.  No problem, plugin ignores them.

> That doesn't really change anything. Whether you're using 
> VOICE_ON/VOICE_OFF or VOICE(1)/VOICE(0), the resulting actions and 
> timing are identical.

Right - Aesthetics.  I prefer VOICE_ON/VOICE_OFF but a VOICE control fits
our model more generically.




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread David Olofson
On Monday 06 January 2003 23.04, Tim Hockin wrote:
> > The main difficulty I see with the VVID system is how to
> > initialize parameters on a new voice before the voice is
> > activated with a VoiceOn event.
>
> After, or rather - at the same time.  The host just has to make
> sure the VoiceOn arrives first.  This is what is currently in the
> header in my working dir.

Problem is that that requires special-case event processing in 
synths. You'll need an inner event decoding loop just to get the 
parameters, before you can go on with voice instantiation.

The alternative indeed means that you have to find somewhere to store 
the parameters until the "VOICE_ON" arrives - but that doesn't screw 
up the API, and in fact, it's not an issue at all until you get into 
voice stealing. (If you just grab a voice when you get a "new" VVID, 
parameters will work like any controls - no extra work or special 
cases.)


> Send a new VVID to the VOICE control with timestamp X for note-on
> Send any initial voice-params with timestamp X
> Send 0-VVID to the VOICE control with timestamp Y for note-off

I'm not quite following with the 0-VVID thing here... (The VVID *is* 
the common reference; that was the idea.) You can't know when a synth 
is *really* done with a Voice without a round-trip - but indeed, you 
can tell the synth that you have nothing further to say to this Voice 
by implying that NOTE_OFF means "I will no longer use this VVID to 
address whatever voice it's connected to now." Is that the idea?

If so, I would suggest that a special "DETACH_VVID" control/event is 
used for this. There's no distinct relation between a note being "on" 
and whether or not Voice Controls can be used. At least with MIDI 
synths, it's really rather common that you want Channel controls to 
affect *all* voices, even after NoteOff, and I see no reason why our 
Channel Controls should be different.


> Alternatively, rather than a VOICE control, just have special
> VOICE_ON/VOICE_OFF events.

That doesn't really change anything. Whether you're using 
VOICE_ON/VOICE_OFF or VOICE(1)/VOICE(0), the resulting actions and 
timing are identical.


> > -first- order on timestamps
> > -second- put voice-on ahead of all other event types.
> > A little ungainly, but effective as it frees the plugins to
> > assume that they'll get voice-on first but must consider all
> > other events on that timestamp as arguments to the initialization
> > of the voice. Of course this
>
> not all events with that timestamp - only per-voice controls with
> the VVID that was just turned on.  All per-voice events have a
> VVID.  So it is easy to know if that VVID is active - if not, the
> host screwed up.
>
> What I am still fighting with is the idea that the host
> pre-allocates VVID space.

It's not really "space", but rather a minimalistic approach to 
senders allocating "handles", and receivers marking them, to avoid 
searching.

When you get a new VVID, you just look the corresponding entry up in 
the global array and point it at the physical voice you allocated. 

When you get further references to that VVID, you can just look at 
the entry and there's your voice; no searching.

(This obviously assumes that senders set new VVIDs to NULL, so synths 
can tell them from VVIDs with assigned voices.)


> How much space for each VVID?

4 bytes. (On 32 bit archs, that is.)


> Any
> imposed structure?

Nope, just a void * that the synth may use in any way it likes. (Or 
ignore, if it's using some "tag + search" approach; you need only the 
VVID [index] for that.)
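A minimal sketch of how a synth might use that per-VVID void * (all names here are hypothetical, not from any real XAP header): grab a physical voice the first time a VVID is referenced, and from then on every event with that VVID is a single array lookup.

```c
#include <stddef.h>

#define NUM_VVIDS  32   /* assumed per-Channel VVID allocation */
#define NUM_VOICES 8

typedef struct { int active; float pitch; } Voice;  /* illustrative */

static Voice voices[NUM_VOICES];

/* One void * per VVID; the sender initialises new entries to NULL, so
 * the synth can tell a fresh VVID from one with an assigned voice. */
static void *vvid_table[NUM_VVIDS];

/* The first reference to a VVID grabs a free voice; after that, any
 * control event is a plain table lookup - no searching, and no
 * special-case "initialisation parameter" handling. */
static Voice *voice_for_vvid(int vvid)
{
    if (!vvid_table[vvid]) {
        for (int i = 0; i < NUM_VOICES; ++i)
            if (!voices[i].active) {
                voices[i].active = 1;
                vvid_table[vvid] = &voices[i];
                break;
            }
    }
    return (Voice *)vvid_table[vvid];
}
```

(The synth is equally free to ignore the table and use a "tag + search" scheme; this just shows the zero-search case.)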


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread David Olofson
On Monday 06 January 2003 22.03, Steve Harris wrote:
> On Mon, Jan 06, 2003 at 12:04:23 -0800, robbins jacob wrote:
> > Alternately, we could require that event ordering has 2 criteria:
> > -first- order on timestamps
> > -second- put voice-on ahead of all other event types.
>
> This is what I was assuming was meant originally.
>
> However you don't have to think of them as initialisation parameters;
> voices can have instrument-wide defaults (e.g. a pitch of 0.0 and an
> amplitude of 0.0), and the parameter changes that arrive at the
> same timestamp can be thought of as immediate parameter changes,
> which they are.

Exactly.

These "instantiation parameters" are in fact just control events, and 
they relate to "whatever voice is assigned to the provided VVID".

The issue here is this: Where does control data go when there is no 
voice assigned to the VVID?

It's tempting to just say that whenever you get events, check the 
VVID, and allocate a voice for it if it doesn't have one.

However, it's kind of hard to implement voice allocation in a useful 
way if you don't know what control will trigger the allocation. (You 
can't discard very silent notes and that sort of stuff.)


As to event ordering, the logical way is to send "parameters" 
*first*, and then send the event that triggers the note - be it a 
specific note_on event (bad idea IMHO) or just a change of a control, 
such as Velocity.

Synth implementations will want it the other way around, though. The 
problem with that is that it breaks a fundamental rule of the event 
system: "Events are processed in order."

What I'm saying is that if you send the "trigger" event first, 
followed by the "parameters", you require synths to process a number 
of control events *before* actually performing the trigger action. 
That simply does not mix with the way events are supposed to be 
handled.

What's even worse: the logical alternative (sending parameters first) 
results in allocation problems. Do we have to have actual Virtual 
Voice objects in synths? If so, how do we allocate them? (Or rather: 
how many?) If VVIDs are global and allocated by the host, that just 
won't work. Synths would at least need to be informed about the 
number of VVIDs allocated for a Channel when a connection is made.


Wait. There's another alternative.

If you really want your voice allocator to be able to turn down 
requests based on "parameters", how about using a single temporary 
"fake voice" whenever you get a "new" VVID? Grab it, fill in the 
defaults and have it receive whatever controls you're getting. When 
you get the "trigger" event, check the "fake voice", and if the voice 
allocator doesn't like it, just hook the VVID up to your "null" voice.

Obviously, this requires that "parameters" are sent with the same 
timestamp as the corresponding "trigger" event - but that makes sense 
if the "parameters" are really to be regarded as such.
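As a sketch of that "fake voice" idea (hypothetical names throughout; the silent-note threshold is just one example of a turn-down policy):

```c
#include <stddef.h>

#define NUM_REAL_VOICES 8

typedef struct { float velocity; float pitch; int playing; } Voice;

static Voice fake_voice;                 /* collects pre-trigger controls */
static Voice null_voice;                 /* sink for rejected notes */
static Voice real_voices[NUM_REAL_VOICES];

/* Controls arriving for a "new" VVID just write into the fake voice. */
static void pre_trigger_control(float velocity, float pitch)
{
    fake_voice.velocity = velocity;
    fake_voice.pitch = pitch;
}

/* On the trigger event, ask the allocator whether this note is worth a
 * physical voice; if not, hook the VVID up to the null voice instead. */
static Voice *trigger(void)
{
    if (fake_voice.velocity < 0.01f)     /* e.g. discard very silent notes */
        return &null_voice;
    for (int i = 0; i < NUM_REAL_VOICES; ++i)
        if (!real_voices[i].playing) {
            real_voices[i] = fake_voice; /* promote collected state */
            real_voices[i].playing = 1;
            return &real_voices[i];
        }
    return &null_voice; /* no free voice: a real synth would steal one */
}
```

This only works because the "parameters" share the trigger's timestamp; a single scratch voice can't track control history for many idle VVIDs at once.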

If they aren't, we're talking about a different problem: Keeping 
Voice Control history for Voices that aren't allocated or playing.


Anyway, this obviously suggests that it should be strictly specified 
when you may and may not talk to a Voice and expect it to respond. I 
mean, it obviously seems stupid to have a physical voice running just 
to track Voice Controls while nothing is being played - but OTOH, 
what do you do if you have 128 VVIDs in use on a 32 voice synth? What 
do you track, and what do you send to the NULL voice? When do you 
actually steal a physical voice?


Note that all this is very closely related to implementation 
specifics of synth voice allocators. I think the issue is more one of 
coming up with a sensible API for this, than to dictate a policy for 
voice management. I doubt that a single policy can work well for all 
kinds of synths.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread Tim Hockin
> The main difficulty I see with the VVID system is how to initialize 
> parameters on a new voice before the voice is activated with a VoiceOn 
> event.

After, or rather - at the same time.  The host just has to make sure the
VoiceOn arrives first.  This is what is currently in the header in my
working dir.

Send a new VVID to the VOICE control with timestamp X for note-on
Send any initial voice-params with timestamp X
Send 0-VVID to the VOICE control with timestamp Y for note-off

Alternatively, rather than a VOICE control, just have special
VOICE_ON/VOICE_OFF events.
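Either way, the ordering guarantee the host has to provide can be expressed as a sort comparator (illustrative event layout, not the actual XAP struct): order on timestamp first and, within a timestamp, put the VOICE event ahead of everything else.

```c
#include <stdlib.h>

enum { EV_VOICE, EV_CONTROL };

typedef struct {
    unsigned timestamp;
    int type;       /* EV_VOICE or EV_CONTROL */
    int vvid;
    float value;    /* for EV_VOICE: nonzero = on, 0 = off */
} Event;

static int cmp_events(const void *pa, const void *pb)
{
    const Event *a = pa, *b = pb;
    if (a->timestamp != b->timestamp)
        return (a->timestamp < b->timestamp) ? -1 : 1;
    /* Same timestamp: the voice-on/off event sorts first, so the
     * plugin sees the voice exist before its "initial" controls. */
    return (b->type == EV_VOICE) - (a->type == EV_VOICE);
}

static void sort_events(Event *ev, size_t n)
{
    qsort(ev, n, sizeof *ev, cmp_events);
}
```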

> -first- order on timestamps
> -second- put voice-on ahead of all other event types.
> A little ungainly, but effective as it frees the plugins to assume that 
> they'll get voice-on first but must consider all other events on that 
> timestamp as arguments to the initialization of the voice. Of course this 

not all events with that timestamp - only per-voice controls with the VVID
that was just turned on.  All per-voice events have a VVID.  So it is easy
to know if that VVID is active - if not, the host screwed up.

What I am still fighting with is the idea that the host pre-allocates VVID
space.  How much space for each VVID?  Any imposed structure?

Tim



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread robbins jacob
On Mon, Jan 06, 2003 at 12:04:23 -0800, robbins jacob wrote:

> > Alternately, we could require that event ordering has 2 criteria:
> > -first- order on timestamps
> > -second- put voice-on ahead of all other event types.
>
> This is what I was assuming was meant originally.
>
> However you don't have to think of them as initialisation parameters,
> voices can have instrument-wide defaults (e.g. a pitch of 0.0 and an
> amplitude of 0.0), and the parameter changes that arrive at the same
> timestamp can be thought of as immediate parameter changes, which they
> are.

True, my post was based on the assumption that there are some plugins where 
a certain parameter being initialized at the beginning of the voice would 
affect the voice over its entire duration. The only concrete example I can 
give is velocity maps determining use of different samples in a sampler, 
which doesn't apply here. Maybe a bell model where the voice-on event is 
considered an impulse describes what I'm talking about; ramping up the 
velocity after the voice has started has no effect. Regardless, if a 
plugin wants to use some parameters in voice initialization, it can do so 
with the events timestamped at the same point as the voice-on supplying the 
values for initialization. For the majority of plugins all parameters are 
equal and some events just happen to coincide with voice-on events, as you 
say.



--jacob robbins projects, soundtank...











Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-06 Thread Steve Harris
On Mon, Jan 06, 2003 at 12:04:23 -0800, robbins jacob wrote:
> Alternately, we could require that event ordering has 2 criteria:
> -first- order on timestamps
> -second- put voice-on ahead of all other event types.

This is what I was assuming was meant originally.

However you don't have to think of them as initialisation parameters; voices
can have instrument-wide defaults (e.g. a pitch of 0.0 and an amplitude
of 0.0), and the parameter changes that arrive at the same timestamp can
be thought of as immediate parameter changes, which they are.

- Steve
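
Steve's defaults-plus-immediate-changes model is simple to express in code (illustrative names, not from any real API): every voice starts from instrument-wide defaults, and a parameter event at the voice-on timestamp is applied like any other change.

```c
typedef struct { float pitch; float amplitude; } Voice;

/* Instrument-wide defaults: a freshly started voice is silent. */
static const Voice defaults = { 0.0f, 0.0f };

static void voice_init(Voice *v)
{
    *v = defaults; /* no separate "initialisation parameter" concept */
}

/* A parameter event at the voice-on timestamp is just an ordinary,
 * immediate change that happens to land at t = voice-on. */
static void apply_amplitude(Voice *v, float amp)
{
    v->amplitude = amp;
}
```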