On Thursday 12 February 2004 20.30, [EMAIL PROTECTED] wrote:
> Anybody know what's the minimum latency that can be achieved
> passing MIDI notes from one computer to another?

Depends on the wire. For standard 31250 bps MIDI cables, the minimum 
latency for a NoteOn message (status, pitch, velocity - three bytes) 
is 0.96 ms. (It is one start bit and one stop bit, right...?)
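
For reference, that's just 3 bytes * 10 bits / 31250 bps. A trivial 
sanity check, in case anyone wants to try other message sizes:

/* Wire time for a MIDI message; 1 start + 8 data + 1 stop bit/byte. */
#include <stdio.h>

int main(void)
{
    const double bit_ms = 1000.0 / 31250.0;  /* 0.032 ms per bit */
    const int frame_bits = 10;                /* start + 8 data + stop */
    const int msg_bytes = 3;                  /* status, pitch, velocity */

    printf("NoteOn wire time: %.2f ms\n",
           bit_ms * frame_bits * msg_bytes);  /* -> 0.96 ms */
    return 0;
}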


> I'm wondering if it's possible in theory to set up an audio
> generation cluster to be used as a realtime instrument.

It's definitely possible in theory, though there's no avoiding that 
latencies add up. You just have to keep "internal" latencies low 
enough that the total system latency is acceptable. (No more than 10 
ms for a real time synth, I'd say. That is assuming the latency is 
*constant*!)


> Basically,
> have a network aware synthesis app running on all machines,
> administer the setup/modification of the signal flow architecture
> from one master machine in non-rt over sockets, and then pass notes
> to the appropriate machines using midi interfaces.

MIDI wire latencies are OK for normal use, but I'm afraid they can build 
up too much if you "chain" machines together this way. What's worse 
is that most of the latency will be caused by the application and/or 
OS, unless you use a hard RTOS, like RTL or RTAI.
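
Just to put rough numbers on it: every hop adds the 0.96 ms wire time 
plus whatever the node needs to receive, process and re-send the 
message. The per-node figure below is only a guess for illustration, 
but the trend is the point:

/* How latency accumulates when chaining machines over MIDI.
 * The 1 ms per-node figure is an assumption, not a measurement. */
#include <stdio.h>

int main(void)
{
    const double wire_ms = 0.96;  /* 3-byte message at 31250 bps */
    const double node_ms = 1.0;   /* assumed receive + process + resend */
    int hops;

    for (hops = 1; hops <= 5; ++hops)
        printf("%d hop(s): %.2f ms\n", hops, hops * (wire_ms + node_ms));
    return 0;
}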

Why not use Ethernet hardware for everything? Good NICs with proper 
drivers have worst-case latencies in the microsecond range on a 
dedicated network, and you'll need an RTOS anyway, so you kind of get 
a fast, low latency network for free... MIDI interfaces can't possibly 
do any better, unless the NICs are crippled by the drivers somehow. 
(A protocol stack that isn't designed for RT networking could cause 
trouble, obviously.)

One way to use NICs would be to drop the protocol stack and just use 
the NICs as high speed serial interfaces. You could use cross-over 
cables and point-to-point connections only, to completely avoid the 
risk of collisions and the "random" latencies those may cause. 
(Though that's really protocol dependent, and not a general 
requirement. RT network streaming protocols tend to use token passing 
or similar techniques to guarantee bandwidth and latencies. It's 
virtually impossible to do it reliably any other way, AFAIK.)
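
Just to illustrate the "NIC as a fast serial port" idea: on Linux you 
can talk to the NIC below IP through a packet socket. The interface 
name, EtherType and peer MAC below are made-up examples (and you need 
root to open packet sockets); a serious setup would do this from the 
RTOS side rather than from an ordinary user process, but the principle 
is the same:

/* Using a NIC as a raw point-to-point link on Linux, below IP.
 * Interface name, EtherType and peer MAC are made-up examples. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/if_ether.h>
#include <netpacket/packet.h>
#include <arpa/inet.h>

#define RT_PROTO 0x88B5   /* EtherType from the local experimental range */

int main(void)
{
    int s;
    struct ifreq ifr;
    struct sockaddr_ll peer;
    unsigned char noteon[3] = { 0x90, 60, 100 };  /* NoteOn, middle C */

    s = socket(AF_PACKET, SOCK_DGRAM, htons(RT_PROTO));
    if (s < 0) { perror("socket"); return 1; }

    /* Find the interface index of the dedicated NIC (example name). */
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth1", IFNAMSIZ - 1);
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    /* Address the peer by MAC only; the kernel builds the Ethernet
       header for SOCK_DGRAM packet sockets. Example MAC address. */
    memset(&peer, 0, sizeof peer);
    peer.sll_family = AF_PACKET;
    peer.sll_protocol = htons(RT_PROTO);
    peer.sll_ifindex = ifr.ifr_ifindex;
    peer.sll_halen = ETH_ALEN;
    memcpy(peer.sll_addr, "\x00\x11\x22\x33\x44\x55", ETH_ALEN);

    if (sendto(s, noteon, sizeof noteon, 0,
               (struct sockaddr *)&peer, sizeof peer) < 0)
        perror("sendto");

    close(s);
    return 0;
}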


> Naturally this
> depends on a signal path separable into big chunks.

Yes... Or rather, the number of node->node hops is restricted by the 
connection latency, the internal latency of the nodes and the required 
total system latency. Use RTL or RTAI, fast NICs, cross-over 
point-to-point connections, and process, say, 0.1 ms worth of 
audio/MIDI data at a time, and you can send data around quite a bit 
before you even reach the "magical" 3 ms. Unfortunately, processing 
such small chunks of data isn't terribly efficient, so at some point 
you'll undo the advantage of having more CPU power available.
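
For example (the per-hop NIC and DSP figures below are only 
assumptions to make the arithmetic concrete):

/* How many node->node hops fit in a 3 ms budget with 0.1 ms blocks. */
#include <stdio.h>

int main(void)
{
    const double block_ms = 0.1;    /* audio processed per block */
    const double nic_ms = 0.05;     /* assumed worst-case NIC + driver */
    const double dsp_ms = 0.1;      /* assumed per-node processing */
    const double budget_ms = 3.0;
    const double per_hop = block_ms + nic_ms + dsp_ms;

    printf("%.2f ms per hop -> %d hops within %.1f ms\n",
           per_hop, (int)(budget_ms / per_hop), budget_ms);
    return 0;
}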


> I have a hunch
> that midi over ip has latency too high and unpredictable for this.

Only if the OS and/or the protocol stack is not real time capable, or 
if some non-RT machines are on the same network.

(With standard hardware, there's no way you can force a babbling 
Windoze box or something to get off the network when you want to chat 
with your RT peers, so you just have to make sure everything on your 
RT network agrees with, and is physically capable of obeying, the 
rules of that network. RTOS and RT network protocol support is a 
minimum requirement for every node.)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---

