On 1/10/11 4:46 PM, Bjorn Roche wrote:
>
> On Jan 10, 2011, at 3:11 PM, Roger Dannenberg wrote:
>
>> On 1/10/11 1:44 PM, Bjorn Roche wrote:
>>> Okay, below is my first stab.
>> Hi Bjorn,
>>    That was fast! From your code, I think there is a misunderstanding 
>> of how things work in PortMidi. I think your model is that a sequence 
>> of events is loaded into PortMidi and then PortMidi writes this data 
>> out over time according to the time it gets from the timeProc. This 
>> is a reasonable assumption, but it's not how things work.
>
> Last time I did this was, I think, with OSS, and that's how it worked.
>
>>    Instead, PortMidi does not have any internal store for future 
>> events. Events pass through PortMidi to the Host API (e.g. coremidi) 
>> immediately. The synchronization is achieved by translating 
>> timestamps from timeProc coordinates to coremidi coordinates. This 
>> translation happens at the time the midi message is sent, and the 
>> assumption is that the clocks will not drift by any significant 
>> amount before the message is actually sent. More precisely, what 
>> happens is that PortMidi computes a difference between the system 
>> time and the timeProc time. This difference is added to each 
>> timestamp to translate to system time, and the message is sent on to 
>> CoreMidi. CoreMidi actually provides the buffering if needed. Windows 
>> and Linux/ALSA work roughly the same way.
>>    Therefore, MIDI messages should be sent incrementally and shortly 
>> before their desired delivery time and not sent all in advance. That 
>> way, any drift between timeProc and system time (reflected by changes 
>> in their difference) will be applied to timestamps to support 
>> long-term synchronization.
>
> Hmmm. I think I understand. Can you define "shortly before"?
Recommendation: Figure out the worst-case time to schedule your MIDI 
processing thread. (I assume there's only one, because PortMidi is not 
"multithreaded" and messages need to have monotonically increasing 
timestamps; thus even if there are multiple sources for a MIDI stream, 
one thread should merge them in time order.) Add the worst-case 
processing time (e.g. if you have to read data from a remote file 
system, you'll want to add more than a few milliseconds). Send MIDI in 
advance by at least this sum of scheduling latency and processing time. 
Any less risks sending data late. Any more means you will have pending 
messages sitting in the queue, so e.g. if you want to pause, the pause 
will not really take effect until the queue drains out. (You can in 
principle flush the messages in the queue, but you won't know which 
messages have been sent, meaning you pretty much have to send 
all-notes-off and reset all the controllers you care about to eliminate 
uncertainty. In Audacity, we send all-notes-off when the user hits 
"pause", but we let the queue drain out so that controller data is 
updated, and this means there is a bit of latency where MIDI keeps 
playing after "pause" is pressed.)

In reality, you can send MIDI as early as you want, provided you have 
allocated enough buffers (PortMidi asks for an output queue size that, 
for example, is used to allocate output stream buffers on Windows -- if 
you run out of buffers, the PortMidi write call will block). I think OS 
X and ALSA manage their own buffers, and these are probably not limited 
to anything small.

Again, you don't want to send a message so early that clock drift 
becomes significant while the message is in the queue. I think you 
should assume clocks drift about 1 ms per second (one clock is 0.05% 
fast, the other is 0.05% slow), but you can expect the typical case (the 
one you test on :-) to be much less.
>
>>    It should not matter to portmidi whether you send messages from 
>> the main thread or the portaudio callback (but not both -- there's no 
>> mutual exclusion). I would not be surprised, though, if ALSA midi 
>> sends called malloc, which might create a possible priority inversion 
>> when sending midi from the high-priority pa callback. (I don't know 
>> the internals of ALSA -- maybe someone else can confirm or deny this 
>> concern).
>
>
> Short of some crazy scheme with a "midi scheduling" thread and some 
> complex communication, or something like that, it sounds like events 
> need to be queued in the callback if they are to be synchronized, but 
> the danger of priority inversion is serious. While Linux is not a 
> concern of mine right now, it seems like there should be a "correct" 
> solution for this issue.
>
> I will try to tweak the code such that events are queued in the 
> callback at the same time as the comparable audio is rendered. That 
> will be within the computed latency, which should be within any 
> reasonable definition of "shortly before".
Yes. And just for completeness, and to remind anyone else reading this: 
to get timed MIDI output, you need to set the latency parameter of 
Pm_OpenOutput() to a positive, non-zero number of milliseconds, and this 
latency value is automatically added to each timestamp to get the actual 
output time. (The rationale here is that if you have audio latency of 
100 ms and you set MIDI latency to 100 ms, then you can just output audio 
and MIDI at the same time, using the current time as the timestamp, and 
things will come out at about the same time.)
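For anyone following along, that looks roughly like the fragment below. 
Error checking and device selection are omitted, and the 100 ms figure is 
only there to mirror the audio-latency example above:

```c
#include "portmidi.h"
#include "porttime.h"

/* Open an output with nonzero latency so timestamps are honored. */
Pm_Initialize();
Pt_Start(1, NULL, NULL);               /* start the 1 ms PortTime timer */
PortMidiStream *stream;
Pm_OpenOutput(&stream, Pm_GetDefaultOutputDeviceID(),
              NULL /* driver info */, 256 /* queue size */,
              NULL, NULL /* default time proc */, 100 /* latency, ms */);

/* With latency > 0, stamping a message with "now" means "deliver 100 ms
 * from now", which lines up with audio that also takes ~100 ms to reach
 * the speaker. */
Pm_WriteShort(stream, Pt_Time(), Pm_Message(0x90, 60, 100)); /* note on */
```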
>
>     bjorn
>
> -----------------------------
> Bjorn Roche
> http://www.xonami.com
> Audio Collaboration
>
>
_______________________________________________
media_api mailing list
[email protected]
http://lists.create.ucsb.edu/mailman/listinfo/media_api
