On Tue, Jan 07, 2003 at 01:41:43 +0100, David Olofson wrote:
> Yeah, but you may not want control values to be latched except when a
> note is actually triggered (be it explicitly, or as a result of a
> control change). Also, this voice.set_voice_map() may have significant
> cost, and it seems like a bad idea to have the API practically
> enforce that such things are done twice for every note.
Right, but the cost is not doubled.

> > > > So maybe VOICE creation needs to be a three-step process.
> > > >
> > > > 	* Allocate voice
> > > > 	* Set initial voice-controls
> > > > 	* Voice on
> > >
> > > I think this is harder to handle.
> >
> > Why? More events.

I guess it's not important now I think about it.

> It's just that there's a *big* difference between latching control
> values when starting a note and being able to "morph" while the note
> is played... I think it makes a lot of sense to allow synths to do it
> either way.

I'm not convinced there are many things that should be latched. I guess
if you're trying to emulate MIDI hardware, but there you can just ignore
velocity that arrives after the voice on.

I guess I have no real problem with two stage voice initialisation. It
certainly beats having two classes of event.

- Steve
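
PS: to make it concrete, here is roughly the kind of thing I mean, as a
sketch only -- the names below (event, event_type, handle_event, the
CTRL_VELOCITY index) are made up for illustration and are not anything
from the actual XAP headers. Voice setup is just ordinary events at one
timestamp (allocate, set controls, voice on), and whether a synth
latches a control at voice on or keeps tracking it afterwards is purely
its own business:

/* Hypothetical event types; not the real XAP API. */
#include <stdio.h>

typedef enum {
    EV_VOICE_ALLOC,     /* bind a virtual voice ID to a real voice */
    EV_VOICE_CONTROL,   /* set a per-voice control (velocity, ...)  */
    EV_VOICE_ON,        /* start the note                           */
    EV_VOICE_OFF
} event_type;

typedef struct {
    unsigned   timestamp;   /* frames, relative to block start */
    event_type type;
    int        vvid;        /* virtual voice ID chosen by the sender */
    int        control;     /* control index, for EV_VOICE_CONTROL */
    float      value;       /* control value, for EV_VOICE_CONTROL */
} event;

#define CTRL_VELOCITY 0

typedef struct {
    int   on;
    float velocity;
} voice;

static voice voices[16];

/* A latching synth and a "morphing" synth differ only in what they do
 * with EV_VOICE_CONTROL once the note has started. */
static void handle_event(const event *ev, int latch_at_on)
{
    voice *v = &voices[ev->vvid];

    switch (ev->type) {
    case EV_VOICE_ALLOC:
        v->on = 0;
        v->velocity = 1.0f;
        break;
    case EV_VOICE_CONTROL:
        if (ev->control == CTRL_VELOCITY)
            if (!latch_at_on || !v->on)     /* latched: ignore after on */
                v->velocity = ev->value;
        break;
    case EV_VOICE_ON:
        v->on = 1;
        printf("voice %d on, velocity %.2f\n", ev->vvid, v->velocity);
        break;
    case EV_VOICE_OFF:
        v->on = 0;
        break;
    }
}

int main(void)
{
    /* Three "steps", but just three events at the same timestamp, so
     * nothing is actually done twice per note. */
    const event note[] = {
        {  0, EV_VOICE_ALLOC,   3, 0,             0.0f },
        {  0, EV_VOICE_CONTROL, 3, CTRL_VELOCITY, 0.8f },
        {  0, EV_VOICE_ON,      3, 0,             0.0f },
        /* later change: a latched synth drops it, others morph */
        { 64, EV_VOICE_CONTROL, 3, CTRL_VELOCITY, 0.3f },
    };

    for (unsigned i = 0; i < sizeof note / sizeof note[0]; i++)
        handle_event(&note[i], 1 /* latch at voice on */);

    return 0;
}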