>I am not sure if this is the right way to go, but then I've never used the
>alsa sequencer API, and write no midi software anymore.
>
>What about the maximum jack buffer size issue. Is it reasonable to make
>all the apps do their own limiting?
thats where the work comes in. the "read" side of the proposed handling of MIDI is easy; the write side needs to collect data from the ALSA sequencer and store it in shm or something like that. its all a bit murky right now. there is no maximum buffer size, however, other than system limits.

>Also, I suspect even more MIDI apps will be read/write based, and its more
>reasonable for them to be IMHO, that is actually how MIDI works at the low
>level, unlike PCM soundcards.

actually, i think its the opposite. most MIDI apps need to emit MIDI data at specific times (hence the early development of the OSS sequencer, and now the ALSA one). write(2) of raw MIDI data doesn't work (by itself) for this - the applications either have to do their own timing, or they need to deliver event structures to some API that does the timing for them.

for apps that use audio+MIDI, they will generally want to use the same timebase for both. with ALSA right now, you can do this, but its not trivial to set up, and it has been subject to some weird bugs which are hopefully now fixed. if you do this at the JACK level (or if the ALSA seq was in user space, or something), i think this gets very easy to implement. a typical MIDI client that does MIDI output really wants to use a queued write, which is more or less exactly what JACK's callback-based handling of audio provides.

the major problem is that the granularity of the MIDI timestamps is finer than that of the audio, because of the block/chunked handling of audio.

--p