Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> > Either you need to NEVER re-use a VVID, or you need to tell the host when an
> > ended VVID is actually re-usable.  Or you need to have voice-ids allocated
> > by the plugin, and NOT the host, which I like more.
> 
> Having the plugins allocate them is a pain; it's much easier if the host
> allocates them, and just does so from a sufficiently large pool. If you
> have 2^32 host VVIDs per instrument you can just round-robin them.

Why is it a pain?  I think it is clean.  I've never cared for the idea of
Virtual Voices.  Either a voice is on, or it is not.  The plugin and the
host need to agree on that.
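For reference, the round-robin pool Steve describes in the quote above can be sketched in a few lines of C. This is purely illustrative - `vvid_pool` and `vvid_alloc` are invented names, not part of any XAP draft:

```c
#include <stdint.h>

/* Host-side round-robin VVID pool: with 2^32 IDs per instrument the
 * counter can simply wrap, and colliding with a still-active voice is
 * practically impossible at musical event rates. */
typedef struct {
    uint32_t next_vvid;   /* wraps around the full 32-bit pool */
} vvid_pool;

static uint32_t vvid_alloc(vvid_pool *p)
{
    /* unsigned wrap-around is well defined in C */
    return p->next_vvid++;
}
```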

More later - lots of email to catch up on



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> > > So maybe VOICE creation needs to be a three-step process.
> > > * Allocate voice
> > > * Set initial voice-controls
> > > * Voice on
> 
> I think this is harder to handle.

All the further discussion leads me to understand we like this?  Eek.  I
proposed it as a straw man.

So we send 1 VOICE_ALLOC, n control SETs, and 1 VOICE_ON event?

is that what we're converging on?
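A minimal sketch of that sequence - one VOICE_ALLOC, n control SETs, one VOICE_ON, all on the same timestamp. Event and field names are invented for illustration, not from any actual XAP header:

```c
#include <stdint.h>

enum xap_event_type { EV_VOICE_ALLOC, EV_CONTROL_SET, EV_VOICE_ON };

typedef struct {
    uint32_t timestamp;
    uint32_t vvid;
    int      type;
    int      control;   /* only meaningful for EV_CONTROL_SET */
    float    value;
} xap_event;

/* Emit the three-step sequence for one note; returns n + 2 events. */
static int emit_note(xap_event *out, uint32_t t, uint32_t vvid,
                     const int *ctls, const float *vals, int n)
{
    int i, k = 0;
    out[k].timestamp = t; out[k].vvid = vvid; out[k].type = EV_VOICE_ALLOC; k++;
    for (i = 0; i < n; i++) {
        out[k].timestamp = t; out[k].vvid = vvid;
        out[k].type = EV_CONTROL_SET;
        out[k].control = ctls[i]; out[k].value = vals[i]; k++;
    }
    out[k].timestamp = t; out[k].vvid = vvid; out[k].type = EV_VOICE_ON; k++;
    return k;
}
```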



Re: [linux-audio-dev] SCHED_FIFO watchdog util (was: 2.4.20 + lowlat +preempt + alsa + jack = dead computer)

2003-01-07 Thread Roger Larsson

> P.S. Where can I find the mentioned utility?
> 

See my reply to the original message.
I forgot to change Subject... it is late...

/RogerL

-- 
Roger Larsson
SkellefteƄ
Sweden




Re: [Jackit-devel] Re: [linux-audio-dev] 2.4.20 + lowlat +preempt + alsa + jack = dead computer

2003-01-07 Thread Roger Larsson
On Wednesday 08 January 2003 03:34, Paul Davis wrote:
> >So I run that in a terminal and after playing around with a bunch of
> >jack apps got the machine to lockup... and then, after a little bit,
> >suddenly, it came back to life! (you could see that the monitor had
> >changed the priority of the hogs to SCHED_OTHER). 
> >
> >So I guess that somehow jack has a hard to trigger race condition that
> >locks up the machine when running SCHED_FIFO.
> >
> >Now I have to figure out how to trace the thing so as to determine where
> >the whole thing is locking. Help from the jack gurus appreciated. 
> 
> what do you want to know? can you get roger's tool to tell you which
> process (PID) is hogging the CPU? is it jackd or a client? 
> 

Thinking about it...

It could mark the processes that were in the run queue when the
monitor kicked in.

Maybe it should only reduce the priority of those/that as a first level.
And if that did not work - reduce all...

But I am a little afraid of adding features in this kind of application...

Tomorrow... for now I send the current version... (it is quite small)
Start rt_monitor as root from a text terminal (Alt-Ctrl-F1)
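For the curious: the core of what such a watchdog does to a persistent hog is a single standard Linux call; the detection policy around it is rt_monitor's own. A sketch (the helper name is mine):

```c
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

/* Demote a runaway SCHED_FIFO process to SCHED_OTHER so the rest of
 * the system can run again. SCHED_OTHER requires priority 0. */
static int demote_to_sched_other(pid_t pid)
{
    struct sched_param param = { .sched_priority = 0 };
    if (sched_setscheduler(pid, SCHED_OTHER, &param) == -1) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}
```

Note that demoting *someone else's* SCHED_FIFO process requires root, which is why rt_monitor has to be started as root.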

/RogerL
-- 
Roger Larsson
SkellefteƄ
Sweden



rt_monitor.tgz
Description: application/tgz


Re: [Jackit-devel] Re: [linux-audio-dev] 2.4.20 + lowlat +preempt + alsa + jack = dead computer

2003-01-07 Thread Paul Davis
>> what do you want to know? 
>
>Ahem, everything? :-) ;-) :-)

recompile with DEBUG_ENABLED defined at the top of engine.c and then
if necessary client.c as well.

this will produce reams of output, but will provide you with the next
hint. problem: the output will affect scheduling, which might make the
problem go away. i recommend outputting it to a file if possible, and
viewing after the fact.

--p



[linux-audio-dev] SCHED_FIFO watchdog util (was: 2.4.20 + lowlat +preempt + alsa +jack = dead computer)

2003-01-07 Thread Josh Green
On Tue, 2003-01-07 at 18:07, Fernando Pablo Lopez-Lezcano wrote:
> 
> One more (small) datapoint. Roger Larsson sent me off the list a couple
> of small utilities (VERY nice tools!) that monitor the cpu usage of
> SCHED_FIFO processes and after a timeout actually downgrade the
> persistent hogs to SCHED_OTHER. 
> 
> So I run that in a terminal and after playing around with a bunch of
> jack apps got the machine to lockup... and then, after a little bit,
> suddenly, it came back to life! (you could see that the monitor had
> changed the priority of the hogs to SCHED_OTHER). 
> 

Wow, I must have that as well. I have been wondering about the existence
of such utilities. It seems kind of sad to trade off low latency for
instability. A few times I was driving FluidSynth (was called iiwusynth)
running SCHED_FIFO with a sequencer and got the tempo going too fast.
Hack.. cough.. reboot..

Seems like something like that should actually be part of the kernel (a
configurable option of course, for those die hard SCHED_FIFO folks). It
would not be good for the community at large to think of Linux and audio
as being unstable just because some badly behaving processes won't give
the machine back to their mouse and keyboard :) Cheers.
Josh Green

P.S. Where can I find the mentioned utility?

> 
> -- Fernando
> 





Re: [Jackit-devel] Re: [linux-audio-dev] 2.4.20 + lowlat +preempt +alsa + jack = dead computer

2003-01-07 Thread Fernando Pablo Lopez-Lezcano
> >So I run that in a terminal and after playing around with a bunch of
> >jack apps got the machine to lockup... and then, after a little bit,
> >suddenly, it came back to life! (you could see that the monitor had
> >changed the priority of the hogs to SCHED_OTHER). 
> >
> >So I guess that somehow jack has a hard to trigger race condition that
> >locks up the machine when running SCHED_FIFO.
> >
> >Now I have to figure out how to trace the thing so as to determine where
> >the whole thing is locking. Help from the jack gurus appreciated. 
> 
> what do you want to know? 

Ahem, everything? :-) ;-) :-)

> can you get roger's tool to tell you which
> process (PID) is hogging the CPU? is it jackd or a client? 

I just tried it and it appears to be both (which is consistent with what
I got with the kernel debugger, I was breaking into it and the only
processes I ever saw were jackd or one of the clients). 

I was running jackd, ardour, freqtweak, qjackconnect, ams, and an
additional process doing disk i/o that pushed things over the edge.
After rt_monitor kicks in it does print pids, but I just discovered that
for some reason ps axuw is not printing all the processes (seems to miss
the SCHED_FIFO ones - never noticed this before) so it is hard to track
what is what. The SCHED_FIFO jackd process is downgraded to SCHED_OTHER,
plus a bunch of client processes. Only jackd and ardour survive after
the freeze, all other clients die or are killed (just made it happen
again, this time with only jack and ardour).

-- Fernando





Re: [Jackit-devel] Re: [linux-audio-dev] 2.4.20 + lowlat +preempt + alsa + jack = dead computer

2003-01-07 Thread Paul Davis
>So I run that in a terminal and after playing around with a bunch of
>jack apps got the machine to lockup... and then, after a little bit,
>suddenly, it came back to life! (you could see that the monitor had
>changed the priority of the hogs to SCHED_OTHER). 
>
>So I guess that somehow jack has a hard to trigger race condition that
>locks up the machine when running SCHED_FIFO.
>
>Now I have to figure out how to trace the thing so as to determine where
>the whole thing is locking. Help from the jack gurus appreciated. 

what do you want to know? can you get roger's tool to tell you which
process (PID) is hogging the CPU? is it jackd or a client? 

--p



Re: [Jackit-devel] Re: [linux-audio-dev] 2.4.20 + lowlat +preempt +alsa + jack = dead computer

2003-01-07 Thread Fernando Pablo Lopez-Lezcano
> >I browsed the Kernel Source and there is only one mark_inode_dirty in
> >pipe_write (in fs/pipe.c). So we know where it is hanging...
> >
> >And in __mark_inode_dirty (in fs/inode.c) there is one  
> >   spin_lock(&inode_lock)
> >call, and I guess that is where the whole thing is hanging. So something
> >is holding that lock... how do I find out who is doing that? Apparently
> >the handling of inode_lock is confined to inode.c. I'll keep reading. 

[Andrew Morton had suggested that the stack traces did not show problems
with stuck locks in the kernel...]

> >Maybe the pipe in question is one of the pipes that jack uses for ipc?
> 
> seems *damn* likely ... sorry to just be chiming in with a useless comment!

One more (small) datapoint. Roger Larsson sent me off the list a couple
of small utilities (VERY nice tools!) that monitor the cpu usage of
SCHED_FIFO processes and after a timeout actually downgrade the
persistent hogs to SCHED_OTHER. 

So I ran that in a terminal and after playing around with a bunch of
jack apps got the machine to lockup... and then, after a little bit,
suddenly, it came back to life! (you could see that the monitor had
changed the priority of the hogs to SCHED_OTHER). 

So I guess that somehow jack has a hard to trigger race condition that
locks up the machine when running SCHED_FIFO.

Now I have to figure out how to trace the thing so as to determine where
the whole thing is locking. Help from the jack gurus appreciated. 

-- Fernando





Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 20.42, robbins jacob wrote:
[...]
> >OK... I was thinking that the initial mention of the VVID would
> > cause its creation (be that implicit or explicit, though I prefer
> > explicit, I think); thereafter control changes would be applied to
> > the instantiated voice (or the NULL voice if you've run out /
> > declined it).
>
> The initial mention of the VVID is the issue here; certain types of
> voice events are assumed not to allocate a voice (parameter-set
> events). This is because there is no difference between a tweak on
> a VVID that has had its voice stolen and a tweak intended to
> initialize a voice that arrives before voice-on. We must conclude
> that the plugin will discard both of them. There must be a signal
> to the plugin that a VVID is targeted for activation. We have a few
> options:
>
> ---a voice-activation event is sent, then any initializing events,
> then a voice-on event

That should work, and allows normal Voice Controls to be used as 
initializers.

Also note that nothing says that the synth must allocate a *physical* 
voice for this; it could start with a non-playing virtual voice and 
allocate a physical voice only when it decides to actually make some 
noise. (Or decides to ditch the voice, based on the state of the 
initializers when "voice on" is triggered.)


> ---a voice-on event is sent, with any following events on the same
> timestamp assumed to be initializers

That breaks control/initializer compatibility totally. Initializers 
cannot be Voice Controls if they're special-cased like this.


> ---a voice-activation event is sent and there is no notion of
> voice-on, one or more of the parameters must be changed to produce
> sound but it is a mystery to the sequencer which those are. (I
> don't like this because it makes sequences not portable between
> different instruments)

Sequences are still "portable" if controls are hinted in a sensible 
way. Obviously, you can't make a "note on velocity" synth play 
continuous velocity sequences properly, but there isn't much that can 
be done about that, really. The note on velocity data just isn't 
there.


> ---events sent to voiceless VVID's are attached to a temporary
> voice by the plugin and which may later use that to initialize an
> actual voice. This negates the assumption that voiceless VVID
> events are discarded.

Sort of. It's hard to avoid this if initializers really are supposed 
to have any similarities with controls, though.

Also, I don't think it's much of a problem. When you run out of 
voices, you're basically abusing the synth, and problems are to be 
expected. (Though, we obviously want to minimize the side effects. 
Thus voice stealing - which still works with this scheme.)


>#2 is just an abbreviated form of #1, as I argue below. (unless
> you allow the activate-to-voice_on cycle to span multiple
> timestamps, which seems undesirable)

Well, you can't really prevent it, if initializers are supposed to be 
control values, can you?


> > When are you supposed to do that sort of stuff? VOICE_ON is
> > what triggers it in a normal synth, but with this scheme, you
> > have to wait for some vaguely defined "all parameters
> > available" point.
>
> We can precisely define initialization parameters to be all the
> events sharing the same VVID and timestamp as the VOICE_ON event.

Yes - but that's not compatible with standard Control semantics in any 
way...


> This means that the "all parameters available" point is at the same
> timestamp as the VOICE_ON event, but after the last event with that
> timestamp.

And that point is a bit harder for synths to keep track of than 
"whenever voice on occurs." This is more like script parsing than 
event decoding.

No major issue, but the special casing of initializers is, IMO.


> If we want to include a VOICE_ALLOCATE event then the sequence
> goes: timestamp-X:voice-allocate,
> timestamp-X:voice-parameter-set(considered an initializer if
> appropriate), timestamp-X:voice-on, timestamp-X+1:more
> voice-parameter-sets (same as any other parameter-set)
>
> But this sequence can be shortened by assuming that the voice-on
> event at the last position for timestamp-X is implicit:
> timestamp-X:voice-on(signifying the same thing as voice-allocate
> above) timestamp-X:voice-parameter-set(considered an initializer if
> appropriate), (synth actually activates voice here),
> timestamp-X+1:other-events

The latter isn't very different from assuming that the first control 
event that references a VVID results in a (virtual or physical) voice 
being allocated. In fact, the synth will have to do *something* to 
keep track of controls as soon as you start talking to a voice, and, 
when appropriate, play some sound. Whether or not the synth allocates 
a *physical* voice right away is really an implementation issue.

VOICE_ALLOCATE is really just a way of saying "I want this VVID 
hooked up to a new voice - forget about whate

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 20.40, robbins jacob wrote:
> >I made a post a while back defining all the XAP terminology to
> > date. Read it if you haven't - it is useful :)
>
> I was hoping something of this sort existed. It would be very
> helpful if you could put the list of XAP terminology on the
> webpage. It would help keep everybody on the same page when
> discussing.;)  And it would help people to join the discussion
> without spending the 10-15 hours it takes to read december's posts.

Yep. (I'm maintaining the web page.) I'll try to find the post(s) 
and put something together ASAP. (Oh, and I have another logo draft I 
made the other day.)


> >VVID allocation and voice allocation are still two different
> > issues. VVID is about allocating *references*, while voice
> > allocation is about actual voices and/or temporary voice control
> > storage.
>
> I agree entirely. If each VVID=a voice then we should just call
> them Voice ID's, and let the event-sender make decisions about
> voice reappropriation.

Actually, they're still virtual, unless we have zero latency feedback 
from the synths. (Which is not possible, unless everything is 
function call based, and processing is blockless.) The sender never 
knows when a VVID loses its voice, and can't even be sure a VVID 
*gets* a voice in the first place. Thus, it can't rely on anything 
that has a fixed relation to physical synth voices.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



[linux-audio-dev] unsuscribe

2003-01-07 Thread rick Burnett
unsuscribe



[linux-audio-dev] unsuscribe

2003-01-07 Thread Richard Burnett
unsuscribe




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread robbins jacob
> I made a post a while back defining all the XAP terminology to date. Read
> it if you haven't - it is useful :)

I was hoping something of this sort existed. It would be very helpful if you 
could put the list of XAP terminology on the webpage. It would help keep 
everybody on the same page when discussing. ;) And it would help people to 
join the discussion without spending the 10-15 hours it takes to read 
December's posts.


> VVID allocation and voice allocation are still two different issues. VVID
> is about allocating *references*, while voice allocation is about actual
> voices and/or temporary voice control storage.
I agree entirely. If each VVID=a voice then we should just call them Voice 
ID's, and let the event-sender make decisions about voice reappropriation.


---jacob robbins...








Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread robbins jacob
> It's obvious when you consider that "VVID has no voice" can happen
> *before* the synth decides to start the voice; not just after a voice has
> detached from the VVID as a result of voice stealing. At that point, only
> the value of the control that triggered "voice on" will be present; all
> other controls have been lost. Unless the host/sender is somehow forced to
> resend the values, the synth will have to use default values or something.


> OK... I was thinking that the initial mention of the VVID would cause its
> creation (be that implicit or explicit, though I prefer explicit, I think);
> thereafter control changes would be applied to the instantiated voice (or
> the NULL voice if you've run out / declined it).

The initial mention of the VVID is the issue here; certain types of voice 
events are assumed not to allocate a voice (parameter-set events). This is 
because there is no difference between a tweak on a VVID that has had its 
voice stolen and a tweak intended to initialize a voice that arrives before 
voice-on. We must conclude that the plugin will discard both of them. There 
must be a signal to the plugin that a VVID is targeted for activation. We 
have a few options:

---a voice-activation event is sent, then any initializing events, then a 
voice-on event

---a voice-on event is sent, with any following events on the same timestamp 
assumed to be initializers

---a voice-activation event is sent and there is no notion of voice-on, one 
or more of the parameters must be changed to produce sound but it is a 
mystery to the sequencer which those are. (I don't like this because it makes 
sequences not portable between different instruments)

---events sent to voiceless VVID's are attached to a temporary voice by the 
plugin and which may later use that to initialize an actual voice. This 
negates the assumption that voiceless VVID events are discarded.


  #2 is just an abbreviated form of #1, as I argue below. (unless you allow 
the activate-to-voice_on cycle to span multiple timestamps, which seems 
undesirable)

> When are you supposed to do that sort of stuff? VOICE_ON is
> what triggers it in a normal synth, but with this scheme, you
> have to wait for some vaguely defined "all parameters
> available" point.
We can precisely define initialization parameters to be all the events 
sharing the same VVID and timestamp as the VOICE_ON event. This means that 
the "all parameters available" point is at the same timestamp as the 
VOICE_ON event, but after the last event with that timestamp.

If we want to include a VOICE_ALLOCATE event then the sequence goes: 
timestamp-X:voice-allocate, timestamp-X:voice-parameter-set(considered an 
initializer if appropriate), timestamp-X:voice-on, timestamp-X+1:more 
voice-parameter-sets (same as any other parameter-set)

But this sequence can be shortened by assuming that the voice-on event at 
the last position for timestamp-X is implicit:  
timestamp-X:voice-on(signifying the same thing as voice-allocate above) 
timestamp-X:voice-parameter-set(considered an initializer if appropriate), 
(synth actually activates voice here), timestamp-X+1:other-events
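A sketch of how a synth might locate that activation point in a sorted event stream - the "all parameters available" point is after the last event sharing the voice-on timestamp. Types and names are invented for illustration:

```c
#include <stdint.h>

enum { EV_VOICE_ON = 1, EV_PARAM_SET = 2 };

typedef struct {
    uint32_t time;
    int      type;
} ev;

/* Given a timestamp-sorted event stream and the index of a voice-on
 * event, return the index of the first event past the activation
 * point: i.e. past the last event sharing the voice-on timestamp. */
static int activation_point(const ev *e, int n, int von)
{
    int i = von + 1;
    while (i < n && e[i].time == e[von].time)
        i++;               /* same-timestamp events are initializers */
    return i;              /* voice actually activates here */
}
```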


---Jacob Robbins.











Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> One great thing about this scheme is that it encourages people not to
> think of certain, arbitrary parameters as instantiation parameters, which
> are special in some way, 'cos they're not.

The way I've seen velocity-mapped samplers work is not to change the sample
later - you get the sample that maps to the initial velocity, and further
changes are just volume/filter manipulation.
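A sketch of that behaviour, with made-up zone boundaries and a simple square-law gain curve (neither is from any real sampler):

```c
/* The sample zone is latched from the initial velocity at voice on;
 * later velocity changes only rescale gain, never reselect the zone. */
static int pick_zone(float velocity)   /* called once, at voice on */
{
    if (velocity < 0.33f) return 0;    /* soft sample */
    if (velocity < 0.66f) return 1;    /* medium sample */
    return 2;                          /* hard sample */
}

static float velocity_to_gain(float velocity)  /* may be updated live */
{
    return velocity * velocity;        /* simple square-law mapping */
}
```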




Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 17.10, Steve Harris wrote:
[...]
> OK... I was thinking that the initial mention of the VVID would
> cause its creation (be that implicit or explicit, though I prefer
> explicit, I think); thereafter control changes would be applied to
> the instantiated voice (or the NULL voice if you've run out /
> declined it).

Well, it is sort of explicit in that the synth will have to do 
*something* about it - or Voice Controls just won't work as intended. 

Now, whether it's a physical voice or a virtual voice (basically a dummy 
control tracer) that's being instantiated is another matter. You can 
still keep track of more VVIDs than there are physical 
channels, as described earlier. (Two levels of "out of voices".)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:50:26 +0100, David Olofson wrote:
> > > Right - so you can't play a polyphonic synth with a continuous
> > > velocity controller, unless you track and re-send the controls
> > > that the synth happens to treat as note parameters.
> >
> > I don't understand why.
> 
> It's obvious when you consider that "VVID has no voice" can happen 
> *before* the synth decides to start the voice; not just after a voice 
> has detached from the VVID as a result of voice stealing. At that 
> point, only the value of the control that triggered "voice on" will 
> be present; all other controls have been lost. Unless the host/sender 
> is somehow forced to resend the values, the synth will have to use 
> default values or something.

OK... I was thinking that the initial mention of the VVID would cause its
creation (be that implicit or explicit, though I prefer explicit, I think);
thereafter control changes would be applied to the instantiated voice (or
the NULL voice if you've run out / declined it).
 
> > There is a difference between "turning down" (implies
> > communicating) and ignoring (silent).
> 
> "Turning down" was meant as seen from the synth implementation POV. 
> That is, if a synth "turns down" a voice allocation for a VVID, that 
> VVID just gets routed to "the NULL Voice". Future events with that 
> VVID are ignored.

Fine then.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:38:05 +0100, David Olofson wrote:
> It might seem handy to allow synths to explicitly say that some 
> controls *must* be rewritten at the instant of a VOICE_ON, but I 
> don't think it's useful (it's useless for continuous velocity 
> instruments, at least) enough to motivate the cost.

Right, we are in agreement then :)

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 15.08, Steve Harris wrote:
> On Tue, Jan 07, 2003 at 01:49:23 +0100, David Olofson wrote:
> > > > The issue here is this: Where does control data go when there
> > > > is no voice assigned to the VVID?
> > >
> > > They get thrown away.
> >
> > Right - so you can't play a polyphonic synth with a continuous
> > velocity controller, unless you track and re-send the controls
> > that the synth happens to treat as note parameters.
>
> I don't understand why.

It's obvious when you consider that "VVID has no voice" can happen 
*before* the synth decides to start the voice; not just after a voice 
has detached from the VVID as a result of voice stealing. At that 
point, only the value of the control that triggered "voice on" will 
be present; all other controls have been lost. Unless the host/sender 
is somehow forced to resend the values, the synth will have to use 
default values or something.


> > > Well, I think it's OK, because the note will not be used to
> > > render any samples until after all the events have been
> > > processed.
> >
> > But who guarantees that whatever the "trigger" event is, also
> > comes with events for the "parameter" controls? It's trivial (but
> > still has to be done!) with explicit NOTE_ON events when assuming
> > that NOTE_ON means "allocate and start voice NOW", but it's not
> > at all possible if the synth triggers on voice control changes.
>
> I don't think that matters, eg. my favourite example, the gong
> synth. If you issue a voice on it will initialise the gong graph in
> its stable state, and when you send a velocity signal (or whatever)
> it will simulate a beater striking it.

Yeah - that's a perfect example of "allocate voice on first use of 
VVID". This breaks down when you run out of physical voices, unless 
you do instant voice stealing upon "allocate new voice for VVID". (It 
might make sense to just do it that way, but it definitely is a waste 
of resources if allocating a voice is frequently done before the 
synth actually decides to start playing.)


> > > > If you really want your voice allocator to be able to turn
> > > > down requests based on "parameters"
> > >
> > > I think this would be complex.
> >
> > Not more complex than any "note-on latched" controls - and I
> > don't think it's realistic to eliminate those.
>
> There is a difference between "turning down" (implies
> communicating) and ignoring (silent).

"Turning down" was meant as seen from the synth implementation POV. 
That is, if a synth "turns down" a voice allocation for a VVID, that 
VVID just gets routed to "the NULL Voice". Future events with that 
VVID are ignored.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



RE: [linux-audio-dev] New release of ZynAddSubFX (1.0.4)

2003-01-07 Thread Mark Knecht
Nasca Paul,
   Any plans to add Jack support to your synth?

Thanks,
Mark

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Nasca Paul
> Sent: Tuesday, January 07, 2003 2:13 AM
> To: [EMAIL PROTECTED];
> [EMAIL PROTECTED]
> Subject: [linux-audio-dev] New release of ZynAddSubFX (1.0.4)
> 
> 
> ZynAddSubFX is an open-source software synthesizer for
> Linux.
> 
> It is available at :
> http://zynaddsubfx.sourceforge.net
> or 
> http://sourceforge.net/projects/zynaddsubfx
> 
> 
> news:
> 
> 1.0.4 - It is possible to load Scala (.scl and .kbm)
> files
>   - Added mapping from note number to scale degree;
> it is possible to load Scala kbm files
>   - Corrected small bugs related to Microtonal
>   - If you want to use ZynAddSubFX with OSS (or
> you don't have ALSA) you can modify the Makefile.inc
> file to compile with OSS only.
>   - The real detune (in cents) is shown
>   - Made a new widget that replaces the Dial
> widget
>   - Removed a bug that crashed ZynAddSubFX if you
> changed some effect parameters
> 
> 
> 
> 
> 



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 15.03, Steve Harris wrote:
[...]
> > It's just that there's a *big* difference between latching
> > control values when starting a note and being able to "morph"
> > while the note is played... I think it makes a lot of sense to
> > allow synths to do it either way.
>
> I'm not convinced there are many things that should be latched. I
> guess if you're trying to emulate MIDI hardware, but there you can
> just ignore velocity that arrives after the voice on.

I don't think velocity mapping qualifies as "emulating MIDI 
hardware", though. Likewise with "impact position" for drums. 
Selecting a waveform at voice on is very different from switching 
between waveforms during playback in a useful way. Changing the 
position-dependent parameters for playing sounds just because "the 
drummer moves his aim" is simply incorrect.

Either way, the problem with ignoring velocity after voice on is that 
you have to consider event *timestamps* rather than event ordering. 
This breaks the logic of timestamped event processing, IMHO.


> I guess I have no real problem with two stage voice initialisation.
> It certainly beats having two classes of event.

Yes, and that's the more important side of this. Treating 
"parameters" as different from controls has implications for both 
hosts/senders and synths, whereas defining "voice on latched 
controls" as a synth implementation thing has no implications for 
hosts/senders.

It might seem handy to allow synths to explicitly say that some 
controls *must* be rewritten at the instant of a VOICE_ON, but I 
don't think it's useful (it's useless for continuous velocity 
instruments, at least) enough to motivate the cost.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 01:49:23 +0100, David Olofson wrote:
> > > The issue here is this: Where does control data go when there is
> > > no voice assigned to the VVID?
> >
> > They get thrown away.
> 
> Right - so you can't play a polyphonic synth with a continuous 
> velocity controller, unless you track and re-send the controls that 
> the synth happens to treat as note parameters.

I don't understand why.
 
> > Well, I think it's OK, because the note will not be used to render
> > any samples until after all the events have been processed.
> 
> But who guarantees that whatever the "trigger" event is, also comes 
> with events for the "parameter" controls? It's trivial (but still has 
> to be done!) with explicit NOTE_ON events when assuming that NOTE_ON 
> means "allocate and start voice NOW", but it's not at all possible if 
> the synth triggers on voice control changes.

I don't think that matters, eg. my favourite example, the gong synth. If
you issue a voice on it will initialise the gong graph in its stable
state, and when you send a velocity signal (or whatever) it will simulate
a beater striking it.

> > > If you really want your voice allocator to be able to turn
> > > down requests based on "parameters"
> >
> > I think this would be complex.
> 
> Not more complex than any "note-on latched" controls - and I don't 
> think it's realistic to eliminate those.

There is a difference between "turning down" (implies communicating) and
ignoring (silent).

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 01:41:43 +0100, David Olofson wrote:
> Yeah, but you may not want control values to be latched except when a 
> note is actually triggered (be it explicitly, or as a result of a 
> control change). Also, this voice.set_voice_map() may have significant 
> cost, and it seems like a bad idea to have the API practically 
> enforce that such things are done twice for every note.

Right, but the cost is not doubled.
 
> > > > So maybe VOICE creation needs to be a three-step process.
> > > > * Allocate voice
> > > > * Set initial voice-controls
> > > > * Voice on
> >
> > I think this is harder to handle.
> 
> Why?

More events. I guess it's not important, now I think about it.

> It's just that there's a *big* difference between latching control 
> values when starting a note and being able to "morph" while the note 
> is played... I think it makes a lot of sense to allow synths to do it 
> either way.

I'm not convinced there are many things that should be latched. I guess if
you're trying to emulate MIDI hardware, but there you can just ignore
velocity that arrives after the voice on.

I guess I have no real problem with two-stage voice initialisation. It
certainly beats having two classes of event.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.28, Steve Harris wrote:
[...]
> > This is also for debate - David dislikes (and I agree) the notion
> > that you have to send a note-on but the plugin does not have
> > enough info to handle (for example) a velocity-mapped sampler
> > until later.  Process events in order.  So a few ideas are on the
> > table.
>
> You do have enough information, it's just that it may be superseded
> later. Velocity can have a default.
>
> One great thing about this scheme is that it encourages people not
> to think of certain, arbitrary parameters as instantiation
> parameters, which are special in some way, 'cos they're not.

Well, they *are* special in that they're latched only at certain 
points. The problem is that if synths cannot effectively *implement* 
it that way, it becomes the host's/sender's responsibility to know 
the difference, and make sure that these controls are handled the 
right way. And unless the host/sender can tell the synth exactly when 
to latch the values, there is no way to get this right.

What I'm saying is that synths should preferably behave as if *all* 
voice controls ever received are tracked on a per-VVID basis, so they 
can be latched as intended when the synth decides to start a physical 
voice. That way, you can play continuous control data on latched 
control synths and vice versa, without nasty side effects or "smart" 
event processing in the host/sender.
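As a sketch of what this could mean in practice (illustrative Python; the class names, control names, and defaults are invented, and the trigger rule is just one example of a synth-specific choice): the synth shadows every voice control per VVID, and latches the shadowed values only at the moment it decides to start a physical voice.

```python
class VVIDState:
    """Shadow copy of the voice controls last seen for one VVID."""
    def __init__(self):
        self.controls = {"velocity": 0.0, "pitch": 60.0}  # defaults
        self.voice = None  # physical voice, once one is allocated


class Synth:
    def __init__(self, max_vvids=32):
        self.vvids = [VVIDState() for _ in range(max_vvids)]

    def control_event(self, vvid, name, value):
        state = self.vvids[vvid]
        state.controls[name] = value  # always tracked, never thrown away
        # Synth-specific trigger rule: here, start a physical voice when
        # velocity rises above zero and no voice is assigned yet.
        if state.voice is None and state.controls["velocity"] > 0.0:
            state.voice = self.start_voice(dict(state.controls))  # latch

    def start_voice(self, latched):
        return latched  # stand-in for real voice allocation
```

With this, a sender can set pitch before velocity (or the reverse) in any order; nothing is lost because no voice exists yet.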

Obviously, this just isn't possible if the number of VVIDs used to 
control a synth is unknown or very large. However, I don't see a 
reason why you would use many more VVIDs than there are physical 
voices, so I don't see this as a real problem.

Synths that don't allocate physical voices as soon as a VVID gets 
its first control may have to allocate virtual voices upon 
connection, but I think that's acceptable, considering the 
alternatives.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.23, Steve Harris wrote:
> On Mon, Jan 06, 2003 at 11:17:07 +0100, David Olofson wrote:
> > These "instantiation parameters" are in fact just control events,
> > and they relate to "whatever voice is assigned to the provided
> > VVID".
> >
> > The issue here is this: Where does control data go when there is
> > no voice assigned to the VVID?
>
> They get thrown away.

Right - so you can't play a polyphonic synth with a continuous 
velocity controller, unless you track and re-send the controls that 
the synth happens to treat as note parameters.


> > What I'm saying is that if you send the "trigger" event first,
> > followed by the "parameters", you require synths to process a
> > number of control events *before* actually performing the trigger
> > action. That simply does not mix with the way events are supposed
> > to be handled.
>
> Well, I think it's OK, because the note will not be used to render
> any samples until after all the events have been processed.

But who guarantees that whatever the "trigger" event is, also comes 
with events for the "parameter" controls? It's trivial (but still has 
to be done!) with explicit NOTE_ON events when assuming that NOTE_ON 
means "allocate and start voice NOW", but it's not at all possible if 
the synth triggers on voice control changes.


> > If you really want to your voice allocator to be able to turn
> > down requests based on "parameters"
>
> I think this would be complex.

Not more complex than any "note-on latched" controls - and I don't 
think it's realistic to eliminate those.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 10.19, Steve Harris wrote:
> On Tue, Jan 07, 2003 at 03:21:22 +0100, David Olofson wrote:
> > > > Problem is step 1. If the voice allocator looks at velocity,
> > > > it won't work, since that information is not available when
> > > > you do the allocation. Likewise for setting up waveforms with
> > > > velocity maps and the like.
>
> But in the general case this is just mapping to parameters, you
> have to be able to handle parameter changes during the run of the
> instrument, so why not at creation time too?
[...]

Yeah, but you may not want control values to be latched except when a 
note is actually triggered (be it explicitly, or as a result of a 
control change). Also, this voice.set_voice_map() may have significant 
cost, and it seems like a bad idea to have the API practically 
enforce that such things are done twice for every note.


> > > > When are you supposed to do that sort of stuff? VOICE_ON is
> > > > what triggers it in a normal synth, but with this scheme, you
> > > > have to wait for some vaguely defined "all parameters
> > > > available" point.
> > >
> > > So maybe VOICE creation needs to be a three-step process.
> > > * Allocate voice
> > > * Set initial voice-controls
> > > * Voice on
>
> I think this is harder to handle.

Why?


> > >  This is essentially saying that initial parameters are
> > > 'special', and they are in many-ways (I'm sure velocity maps
> > > are just one case).
> >
> > Yes; there can be a whole lot of such parameters for percussion
> > instruments, for example. (Drums, cymbals, marimba etc...)
>
> I still don't think they're special. Velocity maps only behave this
> way in MIDI because you can't change velocity during the note in
> MIDI; you still need to be able to call up the map instantly, so it
> doesn't matter if you don't know the map at the point the note is
> 'created'.

It's just that there's a *big* difference between latching control 
values when starting a note and being able to "morph" while the note 
is played... I think it makes a lot of sense to allow synths to do it 
either way.


> > > Or we can make the rule that you do not choose an entry in a
> > > velocity map until you start PROCESSING a voice, not when you
> > > create it.  VOICE_ON is a placeholder.  The plugin should see
> > > that a voice is on that has no velocity-map entry and deal with
> > > it whn processing starts.  Maybe not.
> >
> > No, I think that's just moving the problem deeper into synth
> > implementations.
>
> Why? You can create it with the map for velocity=0.0 or whatever,
> and change it if needed. This seems like it will lead to cleaner
> instrument code.

Cleaner, maybe, but slower and more importantly, incorrect.

Bad Things(TM) will happen if you happen to play a "note-on latched 
velocity" synth with data that was recorded from a continuous velocity 
controller, for example. What are you supposed to do with the 
sequenced data (or real time input, for that matter!) to have correct 
playback? I don't like the idea of having two different *types* of 
controls (continuous and "event latched") for everything, just to deal 
with this.
 

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! .
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 09.15, Tim Hockin wrote:
> > continuous note on a particular VVID. The sequencer only reuses a
> > VVID once it has ended any previous notes on that VVID. The
> > sequencer can allocate a
>
> This is the current issue at hand - just because the sequencer has
> ended a VVID with a voice-off, doesn't mean the voice is off.  It
> just begins the release phase of the envelope, on some synths. 
> This could take a while. Either you need to NEVER re-use a VVID, or
> you need to tell the host when an ended VVID is actually re-usable.
>  Or you need to have voice-ids allocated by the plugin, and NOT the
> host, which I like more.

Or you can tell the plugin when you want a particular VVID to be 
assigned to a new voice.


[...]
> > -HOWEVER- there is no rule that a note has any pitch or velocity
> > or any other particular parameter, it is just that the Voice-On
> > tells the voice to start making sound and the Voice-Off tells the
> > voice to stop making sound.
>
> Correct.  Though David is half-heartedly arguing that maybe
> Velocity is the true note-on.  I disagree.

I'm actually arguing that *no* specific event or control is the "note 
on". It could be triggered by VELOCITY changes, or by any other Voice 
Control or combination of Voice Controls. It's entirely synth 
specific.


> > -ALSO HOWEVER- the entity which sends voice-on and off messages
> > may not directly refer to the object's voices. Instead, the event
> > sender puts
>
> ..in the currently understood VVID model.
>
> > voices to do. This differential is in place because sequencers
> > typically send more concurrent notes than the plugin has actual
> > voices for AND the
>
> On the contrary, hopefully you will rarely exceed the max polyphony
> for each instrument.
>
> > other words, it is the role of the plugin to decide whether or
> > not to steal
>
> I believe you should always steal notes, but I suppose there will
> be some instrument plugin some lunatic on here will devise that
> does not follow that. Newer notes are always more important than
> older notes,

Well, it's not quite that simple with continuous velocity 
instruments... You may want to steal a voice when the velocity is 
high enough, and possibly even have some other "context" steal it 
back later on. (It would be the right thing to do - but if someone 
actually cares to implement it is another matter. It is, after all, 
little more than an emergency solution.)


> but if you exceed max poly, a red light should go off!

Yes! Bad things *will* happen in 99% of cases. (Whether you can 
actually hear the difference in a full mix is another matter.)


> > (1)send voice-on event at timestamp X. This indicates a note is
> > to start.
> >
> > (2)send parameter-set events also at timestamp X, these are
> > guaranteed to
>
> This is also for debate - David dislikes (and I agree) the notion
> that you have to send a note-on but the plugin does not have enough
> info to handle (for example) a velocity-mapped sampler until later.
>  Process events in order.  So a few ideas are on the table.

Yeah, it's basically a matter of allocation - either of a physical 
voice, or a temporary "fake voice" or something else that can track 
the incoming events until a physical voice is allocated.

I think different synths will want to do this in different ways, but 
we need to come up with an API that's clean and works well for all 
sensible methods.


> > (4)send voice-off event at later time to end the note and free
> > the voice.
>
> And what of step-sequencing, where you send a note-on and never a
> note-off? Finite duration voices (samples, drum hits, etc) end
> spontaneously.  Does the plugin tell the host about it, or just let
> the VVID leak?

Either tell the host/sender when the VVID is free (which means you 
need a VVID reserve that has some sort of hard to define relation to 
latency), or let the host/sender tell the synth when it no longer 
cares about a specific "voice context". I strongly prefer the latter.


> > When the plugin reads the voice-on event at timestamp X it
> > decides whether to allocate a voice or not. If it has an
> > initialization routine for voice-on events, then the plugin must
> > read through the remaining events with timestamp X to get
> > initialization arguments. The plugin must delay actually
>
> I guess it may not be tooo bad.  Plugins which need some init info
> (such as velocity for the velo-map) know they need this, and can
> look for that info. Other plugins for whom an init event is no
> different than a continuous event just go about their merry way.
>
> But before we all decide this, I want to explore other notions. 
> I'm not saying my negative Voice-ID thing is great, but I rather
> like the idea that Voice-Ids mean something and are almost purely
> the domain of the plugin.

The problem is that voice IDs still cannot have a direct relation to 
voices. If they have, you can't steal voices, since that would result 
in the host/sender s

Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread David Olofson
On Tuesday 07 January 2003 04.24, Tim Hockin wrote:
> > Two problems solved! (Well, almost... Still need temporary space
> > for the "parameters", unless voice allocation is
> > non-conditional.)
>
> I think if you get an allocate voice event, you MUST get a voice on
> event.

Why? Isn't a "voice on event" actually just "something" that makes 
the synth activate a voice?


[...]
> > Why would the host (or rather, sender) care about the VVIDs that
> > are *not* active? (Remember; "Kill All Notes" is a Channel
> > Control, and if you want per-voice note killing, you simply keep
> > your VVIDs until you're done with them - as always.)
>
> It wouldn't - if the host has a limit of 128 voice polyphony, it
> keeps a hash or array of 128 VVIDs.  There is a per-instrument (or
> per-channel) next_vvid variable.  Whenever host wants a new voice
> on an instrument it finds an empty slot on the VVID table (or the
> oldest VVID if full) and sets it to next_vvid++.  That is then the
> VVID for the new voice.  If we had to steal one, it's because the
> user went too far.  In that case, VOICE(oldest_vvid, 0) is probably
> acceptable.
>
> The plugin sees a stream of new VVIDs (maybe wrapping every 2^32
> notes - probably OK).  It has its own internal rules about voice
> allocation, and probably has less polyphony than 128 (or whatever
> the host sets).  It can do smart voice stealing (though the LRU
> algorithm the host uses is probably good enough).  It hashes VVIDs
> in the 0-2^32 namespace onto its real voices internally.  You only
> re-use VVIDs every 2^32 notes.
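The host-side policy described above might look roughly like this (a minimal Python sketch; `VVIDTable` and its methods are invented names, not proposed API): a bounded set of live VVIDs drawn from a round-robin 32-bit counter, with oldest-first stealing when the table is full.

```python
class VVIDTable:
    """Host-side VVID pool: round-robin 32-bit IDs, LRU reuse when full."""

    def __init__(self, polyphony=128):
        self.slots = []            # live VVIDs, oldest first
        self.polyphony = polyphony
        self.next_vvid = 0

    def new_voice(self):
        if len(self.slots) >= self.polyphony:
            self.slots.pop(0)      # steal the oldest: user exceeded polyphony
        vvid = self.next_vvid
        self.next_vvid = (self.next_vvid + 1) & 0xFFFFFFFF  # wrap at 2^32
        self.slots.append(vvid)
        return vvid

    def voice_off(self, vvid):
        if vvid in self.slots:
            self.slots.remove(vvid)
```

Since IDs only repeat every 2^32 notes, the plugin never confuses a reused VVID with a still-sounding release phase in practice.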

Ok, but I don't see the advantage of this, vs explicitly assigning 
preallocated VVIDs to new voices. All I see is a rather significant 
performance hit when looking up voices.


> > > VELOCITY can be continuous - as you pointed out with strings
> > > and such.  The creation of a voice must be separate in the API,
> > > I think.
> >
> > Why? It's up to the *instrument* to decide when the string (or
> > whatever) actually starts to vibrate, isn't it? (Could be
> > VELOCITY >= 0.5, or whatever!) Then, what use is it for
> > hosts/senders to try to figure out where notes start and end?
>
> And for a continuous velocity instrument, how do you make a new
> voice?

Just grab a new VVID and start playing. The synth will decide when a 
physical voice should be used, just as it decides what exactly to do 
with that physical voice.


> And why is velocity becoming special again?

It isn't. VELOCITY was just an example; any control - or a 
combination of controls - would do.


> I think voice-on/off is well understood and applies pretty well to
> everything.  I am all for inventing new concepts, but this will be
> more confusing than useful, I think.  Open to convincing, but
> dubious.

I think the voice on/off concept is little more than a MIDIism. Since 
basic MIDI does not support continuous velocity at all, it makes sense 
to merge "note on" and "velocity" into one, and assume that a note 
starts when the resulting message is received.

With continuous velocity, it is no longer obvious when the synth 
should actually start playing. Consequently, it seems like wasted 
code to have the host/sender "guess" when the synth might want to 
allocate or free voices, since the synth may ignore that information 
anyway. This is why the explicit note on/off logic seems broken to me.

Note that using an event for VVID allocation has very little to do 
with this. VVID allocation is just a way for the host/sender to 
explicitly tell the synth when it's talking about a new "voice 
context" without somehow grabbing a new VVID. It doesn't imply 
anything about physical voice allocation.


[...]
> Block start:
>   time X: voice(-1, ALLOC)/* a new voice is coming */
>   time X: velocity(-1, 100)   /* set init controls */
>   time X: voice(-1, ON)   /* start the voice */
>   time X: (plugin sends host 'voice -1 = 16')
>   time Y: voice(-2, ALLOC)
>   time Y: velocity(-2, 66)
>   time Y: voice(-2, ON)
>   time Y: (plugin sends host 'voice -2 = 17')
>
> From then out the host uses the plugin-allocated voice-ids.  We get
> a large (all negative numbers) namespace for new notes per block.

Short term VVIDs, basically. (Which means there will be voice 
marking, LUTs or similar internally in synths.)


> We get plugin-specific voice-ids (no hashing/translating).

Actually, you *always* need to do some sort of translation if you 
have anything but actual voice indices. Also note that there must be 
a way to assign voice IDs to non-voices (ie NULL voices) or similar, 
when running out of physical voices.
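One way to read "translation" here (a hedged Python sketch; all names are invented): a table from the 32-bit VVID space to physical voices, with a shared null voice soaking up events once the pool is exhausted, and an explicit detach for when the sender no longer cares about a voice context.

```python
NULL_VOICE = object()  # sink for events when no physical voice is free


class VoiceMap:
    def __init__(self, voices=4):
        self.free = list(range(voices))  # indices of idle physical voices
        self.map = {}                    # vvid -> voice index, or NULL_VOICE

    def lookup(self, vvid):
        """Translate a VVID to a voice, allocating on first sight."""
        if vvid not in self.map:
            self.map[vvid] = self.free.pop() if self.free else NULL_VOICE
        return self.map[vvid]

    def detach(self, vvid):
        """Sender declares it no longer cares about this voice context."""
        voice = self.map.pop(vvid, NULL_VOICE)
        if voice is not NULL_VOICE:
            self.free.append(voice)
```

Events routed to NULL_VOICE are silently ignored, which is exactly the "assign voice IDs to non-voices" fallback mentioned above.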


>  Plugin
> handles voice stealing in a plugin specific way (ask for a voice,
> it's all full, it returns a voice-id you already have and
> internally sends voice_off).

You can never return an in-use voice ID, unless the sender is 
supposed to check every returned voice ID. Better return an invalid 
voice ID or something...


> I found it ugl

[linux-audio-dev] New relase of ZynAddSubFX (1.0.4)

2003-01-07 Thread Nasca Paul
ZynAddSubFX is an open-source software synthesizer for
Linux.

It is available at :
http://zynaddsubfx.sourceforge.net
or 
http://sourceforge.net/projects/zynaddsubfx


news:

1.0.4 - It is possible to load Scala (.scl and .kbm)
files
  - Added mapping from note number to scale degree
(possible when loading Scala kbm files)
  - Corrected small bugs related to Microtonal
  - If you want to use ZynAddSubFX with OSS (or
you don't have ALSA) you can modify the Makefile.inc
file to compile with OSS only.
  - The real detune (in cents) is now shown
  - Made a new widget that replaces the Dial
widget
  - Removed a bug that crashed ZynAddSubFX when
changing some effect parameters






Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 12:15:40 -0800, Tim Hockin wrote:
> > continuous note on a particular VVID. The sequencer only reuses a VVID once 
> > it has ended any previous notes on that VVID. The sequencer can allocate a 
> 
> This is the current issue at hand - just because the sequencer has ended a
> VVID with a voice-off, doesn't mean the voice is off.  It just begins the
> release phase of the envelope, on some synths.  This could take a while.
> Either you need to NEVER re-use a VVID, or you need to tell the host when an
> ended VVID is actually re-usable.  Or you need to have voice-ids allocated
> by the plugin, and NOT the host, which I like more.

Having the plugins allocate them is a pain; it's much easier if the host
allocates them, and just does so from a sufficiently large pool - if you
have 2^32 host VVIDs per instrument you can just round-robin them.
 
> This is also for debate - David dislikes (and I agree) the notion that you
> have to send a note-on but the plugin does not have enough info to handle
> (for example) a velocity-mapped sampler until later.  Process events in
> order.  So a few ideas are on the table.

You do have enough information, it's just that it may be superseded later.
Velocity can have a default.

One great thing about this scheme is that it encourages people not to
think of certain, arbitrary parameters as instantiation parameters, which
are special in some way, 'cos they're not.

- Steve 



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Mon, Jan 06, 2003 at 11:17:07 +0100, David Olofson wrote:
> These "instantiation parameters" are in fact just control events, and 
> they relate to "whatever voice is assigned to the provided VVID".
> 
> The issue here is this: Where does control data go when there is no 
> voice assigned to the VVID?

They get thrown away.
 
> What I'm saying is that if you send the "trigger" event first, 
> followed by the "parameters", you require synths to process a number 
> of control events *before* actually performing the trigger action. 
> That simply does not mix with the way events are supposed to be 
> handled.

Well, I think it's OK, because the note will not be used to render any
samples until after all the events have been processed.
 
> If you really want to your voice allocator to be able to turn down 
> requests based on "parameters"

I think this would be complex.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Steve Harris
On Tue, Jan 07, 2003 at 03:21:22 +0100, David Olofson wrote:
> > > Problem is step 1. If the voice allocator looks at velocity, it
> > > won't work, since that information is not available when you do
> > > the allocation. Likewise for setting up waveforms with velocity
> > > maps and the like.

But in the general case this is just mapping to parameters, you have to be
able to handle parameter changes during the run of the instrument, so why
not at creation time too?

for samples
    if (event)
        if (event == new_voice)
            voice = allocate_voice()
        if (event == velocity)
            voice.set_voice_map(event)
    out = voice.run()

> > > When are you supposed to do that sort of stuff? VOICE_ON is what
> > > triggers it in a normal synth, but with this scheme, you have to
> > > wait for some vaguely defined "all parameters available" point.
> >
> > So maybe VOICE creation needs to be a three-step process.
> > * Allocate voice
> > * Set initial voice-controls
> > * Voice on

I think this is harder to handle.

> >  This is essentially saying that initial parameters are
> > 'special', and they are in many-ways (I'm sure velocity maps are
> > just one case).
> 
> Yes; there can be a whole lot of such parameters for percussion 
> instruments, for example. (Drums, cymbals, marimba etc...)

I still don't think they're special. Velocity maps only behave this way in
MIDI because you can't change velocity during the note in MIDI; you still
need to be able to call up the map instantly, so it doesn't matter if you
don't know the map at the point the note is 'created'.
 
> > Or we can make the rule that you do not choose an entry in a
> > velocity map until you start PROCESSING a voice, not when you
> > create it.  VOICE_ON is a placeholder.  The plugin should see that
> > a voice is on that has no velocity-map entry and deal with it whn
> > processing starts.  Maybe not.
> 
> No, I think that's just moving the problem deeper into synth 
> implementations.

Why? You can create it with the map for velocity=0.0 or whatever, and
change it if needed. This seems like it will lead to cleaner instrument
code.

- Steve



Re: [linux-audio-dev] more on XAP Virtual Voice ID system

2003-01-07 Thread Tim Hockin
> continuous note on a particular VVID. The sequencer only reuses a VVID once 
> it has ended any previous notes on that VVID. The sequencer can allocate a 

This is the current issue at hand - just because the sequencer has ended a
VVID with a voice-off, doesn't mean the voice is off.  It just begins the
release phase of the envelope, on some synths.  This could take a while.
Either you need to NEVER re-use a VVID, or you need to tell the host when an
ended VVID is actually re-usable.  Or you need to have voice-ids allocated
by the plugin, and NOT the host, which I like more.

> My underlying assumptions are:

I made a post a while back defining all the XAP terminology to date.  Read
it if you haven't - it is useful :)

> -DEFINITION: the individual voices produce finite periods of sound which we 
> call notes. A note is the sound that a voice makes between a Voice-On event 
> and a Voice-Off event (provided that the voice is not reappropriated in the 
> middle to make a different note)

We've been avoiding the word 'note' because it is too specific.  The
lifetime of a voice is either finite or open-ended.  A synth would be
open-ended.  A gong hit would be finite.

> -HOWEVER- there is no rule that a note has any pitch or velocity or any 
> other particular parameter, it is just that the Voice-On tells the voice to 
> start making sound and the Voice-Off tells the voice to stop making sound.

Correct.  Though David is half-heartedly arguing that maybe Velocity is the
true note-on.  I disagree.

> -ALSO HOWEVER- the entity which sends voice-on and off messages may not 
> directly refer to the object's voices. Instead, the event sender puts 

..in the currently understood VVID model.

> voices to do. This differential is in place because sequencers typically 
> send more concurrent notes than the plugin has actual voices for AND the 

On the contrary, hopefully you will rarely exceed the max polyphony for each
instrument.

> other words, it is the role of the plugin to decide whether or not to steal 

I believe you should always steal notes, but I suppose there will be some
instrument plugin some lunatic on here will devise that does not follow
that.  Newer notes are always more important than older notes, but if you
exceed max poly, a red light should go off!

> (1)send voice-on event at timestamp X. This indicates a note is to start.
> 
> (2)send parameter-set events also at timestamp X, these are guaranteed to 

This is also for debate - David dislikes (and I agree) the notion that you
have to send a note-on but the plugin does not have enough info to handle
(for example) a velocity-mapped sampler until later.  Process events in
order.  So a few ideas are on the table.

> (4)send voice-off event at later time to end the note and free the voice.

And what of step-sequencing, where you send a note-on and never a note-off?
Finite duration voices (samples, drum hits, etc) end spontaneously.  Does
the plugin tell the host about it, or just let the VVID leak?

> When the plugin reads the voice-on event at timestamp X it decides whether 
> to allocate a voice or not. If it has an initialization routine for voice-on 
> events, then the plugin must read through the remaining events with 
> timestamp X to get initialization arguments. The plugin must delay actually 

I guess it may not be too bad.  Plugins which need some init info (such as
velocity for the velo-map) know they need this, and can look for that info.
Other plugins for whom an init event is no different than a continuous event
just go about their merry way.

But before we all decide this, I want to explore other notions.  I'm not
saying my negative Voice-ID thing is great, but I rather like the idea that
Voice-Ids mean something and are almost purely the domain of the plugin.

Tim