On Friday 17 January 2003 12.25, Fons Adriaensen wrote:
> Following the discussion on VVIDs, I've been thinking about how the
> MIDI protocol could be modified to encompass explicit contexts. To
> my surprise, this would quite simple. I'll call the new protocol
> ECMP (Explicit Context Midi Protocol -- just a working name). Key
> features are:
[...]
The major problem with anything like this is that it's not compatible with any standard protocol. Practically *everything* that deals with MIDI actively parses and processes messages one way or another, so changing the format in any way will cause applications, drivers, h/w synths and pretty much everything else, including some MIDI interfaces, to fail.

I think it will have to make use of standard messages, so it can work with existing implementations (sequencers, synths etc), or there won't be much use for it. If you're going to make low level changes, you may as well implement a completely new protocol, eliminating the restrictions that still remain in this modified MIDI protocol - or better, use an existing alternative protocol. (DMIDI?)

As to doing something over standard MIDI, how about this:

* Reserve a few ranges of CCs as Voice Controls. (Say, 4 ranges of 8 CCs each.)

* Use the first CC in each range for "Voice ID".

* Interpret the pitch fields of MIDI events as Voice IDs.

* Make pitch/Voice ID 0 illegal, so you can use it to detach Voice Controls completely.

* When the Voice ID of a CC range is set, that range is attached to the indicated Voice ID. Controls in the range now affect only notes played with this pitch/Voice ID.

* Use the second Voice Control of each range for pitch, replacing the pitch of the RT messages, so those can be used as pure Voice IDs.

* Use the rest of the Voice Controls for whatever you like. (Pitch bend, pan, modulation,...)

Of course, you could use NRPNs or SysEx for the Voice Controls instead, but that would result in higher bandwidth. Doesn't matter unless you're running on 31250 bps wires, of course.

BTW, this is probably not far from what the existing SysEx based standard is doing. It's just that very few sequencers and synths seem to support that - and something like the above has the exact same problem. It's only *slightly* more compatible with existing implementations, although probably not in a very useful way.
//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---