Re: [linux-audio-dev] RFC: API for audio across network -inter-host audio routing

2002-06-13 Thread Dominique Fober

Peter Hanappe wrote:
>
>I wondered if it would be possible to write a JACK driver (i.e.
>replacement for current ALSA driver) that would stream the audio over
>a network. The driver is a shared object, so it's technically possible.
>I was thinking of the timing issues.
>

Concerning the timing issues, one of the problems raised by audio transmission is the 
clock skew between the audio cards of the different stations involved in the transmission.
I've done some work on this topic. It's available as a technical report at 
ftp://ftp.grame.fr/pub/Documents/AudioClockSkew.pdf
Hoping that it may help,
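As a rough illustration of the problem (my own sketch, not the method from the report above): the relative skew between two audio clocks can be estimated by fitting a line to pairs of sender/receiver timestamps; a slope different from 1.0 reveals the rate difference, which a streaming driver would then have to compensate (e.g. by resampling).

```python
# Sketch (assumed approach, not from the GRAME report): estimate clock
# skew from (send_time, recv_time) pairs via a least-squares fit.
def estimate_skew(pairs):
    """pairs: list of (send_time, recv_time) in seconds.
    Returns the slope of the least-squares fit; (slope - 1.0) is the
    relative rate difference between the two clocks."""
    n = len(pairs)
    sx = sum(s for s, _ in pairs)
    sy = sum(r for _, r in pairs)
    sxx = sum(s * s for s, _ in pairs)
    sxy = sum(s * r for s, r in pairs)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic example: a receiver clock running 0.01% (100 ppm) fast,
# with a constant 20 ms transport latency.
samples = [(t, t * 1.0001 + 0.020) for t in range(100)]
print(f"relative skew: {estimate_skew(samples) - 1.0:.6f}")  # 0.000100
```

Note that a constant latency only shifts the intercept, not the slope, so the skew estimate is unaffected by it; latency *jitter* shows up as noise around the fitted line.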


------
Dominique Fober   <[EMAIL PROTECTED]>
--
GRAME - Centre National de Creation Musicale -
9 rue du Garet  69001 Lyon France
tel:+33 (0)4 720 737 06 fax:+33 (0)4 720 737 01
http://www.grame.fr





Re: [linux-audio-dev] Introducing DMIDI

2001-12-21 Thread Dominique Fober

>
>We've clocked sustained, two-handed, fast piano improvisation at about
>20 events per second -- significantly less than the 1000-event "maxing
>out the MIDI cable" number above. For the MWPP target application of
>network musical performance of musical groups, the "number of events
>produced by a human" metric seems more appropriate than the "max out
>the cable" metric. BTW, note that using controllers seems to decrease,
>not increase the number -- its hard to generate data faster than fast
>fingers sweeping across a piano keyboard.
>
>This fast, two-handed playing is the sort of input that yields the
>4700 bits per second figure for MWPP's payload (the RTP way of
>comparing bandwidth is to use packetization payloads, since there are
>good header-compression techniques for RTP, UDP, and IP headers that
>can be brought into play if saving every bit is important).
>

The two-hand piano performance is a simple case. We may consider a more general case 
where the performance amounts to remote control of an audio synthesizer. The controller may be a 
MIDI instrument, but also an acoustic instrument whose signal is converted to MIDI 
using a pitch tracker, for example. In this case a lot of control 
information (pitch bend, controllers) may be generated, at a rate that depends on the tracker's 
audio input buffer size. Using an FFT on a 512-frame buffer, for example, can produce a 
data flow of about 200 events per second.
More generally, the limitations of MIDI with regard to control were pointed out very 
early, and proposals have been made for improvements or replacements [see Computer 
Music Journal Vol. 18 No. 4, Winter 1994, for example]. This is just to highlight the 
fact that in common musical situations, and depending on what kind of control you want, 
you may rapidly reach a substantial data flow.
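The ~200 events/s figure follows from simple arithmetic (my own reconstruction; the 44.1 kHz sample rate and the per-window event count are assumptions, not stated in the thread):

```python
# Back-of-the-envelope check of the ~200 events/s figure.
sample_rate = 44100        # Hz (assumed)
buffer_size = 512          # frames per FFT analysis window
windows_per_sec = sample_rate / buffer_size   # ~86 analyses per second

# If each analysis emits, say, a pitch-bend plus a controller or two,
# the MIDI event rate multiplies accordingly (2.5 is hypothetical):
events_per_window = 2.5
print(windows_per_sec * events_per_window)    # ~215 events/s
```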


>If you wanted to send these "max out the MIDI cable" flows through
>MWPP, you'd have two alternatives using the existing scheme:
>
>
> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
>+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+  
>|R|R|LEN|  MIDI Command Payload ... |
>+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>|   Recovery Journal ...|
>+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>
>-- Place one MIDI command per RTP packet, to minimize time skew
>at the expense of bandwidth.
>
>-- Pack multiple MIDI commands into each MIDI Command Header, and
>pay the price in systematic jitter.
>

Not necessarily systematic jitter: you can choose to group the events at a regular 
interval. Of course, the interval length adds to the end-to-end delay, but since 
it is constant, it should have no effect on the jitter. The problem then is to 
evaluate whether such an interval exists that meets the real-time performance 
requirements while providing a real bandwidth economy.

>The alternative is a more sophisticated coding of the MIDI Command
>Payload, that codes time deltas between commands; this information
>wouldn't be of use for simple "play when received" implementations,
>but would be of use for implementations which did latency variation
>removal. This would be an example of the sort of thing that could be
>added to a future MWPP I-D revision ...

You can consider delta times between commands, or time offsets relative to the packet timestamp. 
In both cases the delta values are bounded by the grouping interval, which should 
always be kept low. Therefore I don't think any specific strategy for coding delta 
times (as in Standard MIDI Files, for example) needs to be provided.
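To make the point concrete (a minimal sketch of my own, not the MWPP format): with a grouping interval of a few milliseconds, each event's offset from the packet timestamp fits in a single byte at 1 ms resolution, so no variable-length delta coding is required.

```python
# Hypothetical payload layout: one unsigned offset byte per event,
# relative to the packet timestamp, followed by the raw MIDI command.
import struct

def pack_events(packet_ts_ms, events):
    """events: list of (abs_time_ms, midi_bytes). Offsets are bounded
    by the grouping interval, assumed here to be under 256 ms."""
    out = b""
    for t, cmd in events:
        offset = t - packet_ts_ms
        assert 0 <= offset < 256, "offset must fit the grouping interval"
        out += struct.pack("B", offset) + cmd
    return out

# Two events, 2 ms and 7 ms after the packet timestamp:
payload = pack_events(1000, [(1002, b"\x90\x3c\x40"), (1007, b"\x80\x3c\x00")])
print(payload.hex())  # 02903c4007803c00
```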

--df







Re: [linux-audio-dev] Introducing DMIDI

2001-12-20 Thread Dominique Fober

>>
>> Therefore, a mecanism to compensate for the latency variation seems to
>> me to be necessary.
>>
>
>I don't think its always necessary -- MWPP should provide the freedom
>to implement latency variation compensation, but I don't think it
>should mandate it. In our experience playing over the CalREN 2
>Berkeley-Stanford and Berkeley-Caltech links, jitter compensation
>wasn't necessary; "play when received" worked well, as long as the
>"outliers" of very late packets were handled separately, using
>semantic rules (which is what the "ontime" and "late" flags codify).
>

I agree: MWPP may leave the compensation task to the client, provided 
that the client gets the information necessary to do so. Are RTP-timestamped packets 
enough? I'm not sure: dealing with time across different stations rapidly runs into the 
problem of clock skew. 
Let's take an example from http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf 
appendix B, which is referred to in draft-lazzaro-avt-mwpp-midi-nmp-00 section 3:
the model of time is based on what I call the apparent clock offset (ACO), i.e. the clock 
offset between the sender and the receiver plus the transport latency at the time 
of the measurement (tf minus t0). Deciding whether a packet is "on time" or "late" is 
based on the latency variation, which is evaluated as the current ACO minus the initial 
ACO. But if the receiver clock runs faster than the sender clock, the latency 
will appear to increase steadily and will sooner or later reach 'maxlate', 
triggering a run of 'late' packets up to the 'late window' exhaustion (3.5 s).
Of course, the clock skew depends on the clocks used to timestamp the packets and to 
read the current time on the receiver side. My experience with software clocks (based for 
example on timer tasks) is that you can get significant drift: for example 1 ms per 10 
seconds, which means that maxlate is then reached in about 6 minutes.
I don't want to claim that it's a blocking problem: using more accurate clocks may postpone 
it for a while. However, fixing the limits (the skew tolerance and the 
corresponding time limit) would be useful for protocol implementations.
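The arithmetic behind the drift example above, spelled out (my own check; the maxlate value is hypothetical, chosen so the numbers match the 6-minute figure in the text):

```python
# Apparent latency growth caused by clock skew alone, with no change
# in the actual transport latency.
drift = 0.001 / 10     # 1 ms of apparent latency growth per 10 s
maxlate = 0.036        # hypothetical 'maxlate' threshold of 36 ms

seconds_to_maxlate = maxlate / drift
print(seconds_to_maxlate / 60)   # 6.0 minutes until every packet looks late
```

In other words, any fixed lateness threshold is eventually defeated by uncorrected skew; only the time scale changes with the clock quality.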

--df






Re: [linux-audio-dev] Introducing DMIDI

2001-12-20 Thread Dominique Fober

>
>One last comment about the RTESP-Wedel.pdf. Millisecond timestamps
>are used in it. I think this is too coarse, especially when trying to have
>jitter
>smaller than 1 millisecond, which should be possible on a LAN (with a
>latency <<5 ms).
>
>--martijn

You're right; however, the millisecond time resolution was chosen for the example 
implementation only. Nothing in the protocol prevents the use of a more accurate time 
resolution.

--df







Re: [linux-audio-dev] Introducing DMIDI

2001-12-19 Thread Dominique Fober

>> from my point of view, preserving the transmitted events scheduling with a
>maximum accuracy is important as this scheduling is part of the musical
>information itself. However, except considering that the protocol is
>intented to run on a fast local dedicated network, the transport latency
>will introduce an important time distortion. Therefore, a mecanism to
>compensate for the latency variation seems to me to be necessary.
>
>RTP timestamps are used for this.

Sure, they may be used for this, but they aren't: MIDI events are played at reception time. 
In MWPP, for example, the timestamp is only used to determine whether a packet is too 
late or not. 

>
>> Another point is the efficient use of the transmitted packets: sending one
>packet for each event is probably not the best solution. In this case and
>due to the underlying protocols overhead, the useful information part of a
>packet may become less than 10% of the packet size. Moreover, hardware
>layers such as Ethernet for example, often require a minimum packet size to
>operate correctly.
>
>I think a protocol for realtime MIDI over UDP will always have significant
>protocol overhead, I
>don't see this as a problem however.

Considering a 44-byte overhead (IP + UDP) plus the 4 DMIDI header bytes used to 
address a specific node and device, sending a full MIDI data flow (about 1000 3-byte 
events per second) requires nearly 400 kbit/s, while the MIDI wire rate is 31.25 kbit/s. That's not a 
problem as long as the corresponding bandwidth is available to you. But if you plan to 
address several devices on the same node (for example using a multiport interface), 
you should be able to provide each device with an equivalent full MIDI data flow, and 
then the problem grows seriously with the number of devices.
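The ~400 kbit/s figure spelled out, using the thread's own numbers:

```python
# One packet per 3-byte MIDI event, with the overhead cited above.
overhead = 44 + 4            # IP + UDP + DMIDI header, bytes
event_size = 3               # bytes per MIDI event
events_per_sec = 1000        # a "full" MIDI data flow

bits_per_sec = (overhead + event_size) * events_per_sec * 8
print(bits_per_sec / 1000)   # 408.0 kbit/s, vs. 31.25 kbit/s on the MIDI wire

# Useful payload is 3 of 51 bytes per packet: under 6% efficiency.
print(event_size / (overhead + event_size))   # ~0.059
```

For N devices behind a multiport interface, the requirement scales to N times this figure, which is the point made above.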

--df






Re: [linux-audio-dev] Introducing DMIDI

2001-12-19 Thread Dominique Fober

There are some interesting ideas in these works, and I would like to add a 
contribution to the discussion: 
from my point of view, preserving the scheduling of the transmitted events with maximum 
accuracy is important, as this scheduling is part of the musical information itself. 
However, unless the protocol is intended to run on a fast, local, 
dedicated network, the transport latency will introduce an important time distortion. 
Therefore, a mechanism to compensate for the latency variation seems to me to be 
necessary.
Another point is the efficient use of the transmitted packets: sending one packet per 
event is probably not the best solution. In that case, due to the underlying 
protocols' overhead, the useful information may amount to less than 10% 
of the packet size. Moreover, hardware layers such as Ethernet often 
require a minimum packet size to operate correctly.
There are solutions to these problems: I recently presented a paper at WedelMusic 
2001 which takes efficiency, scheduling and clock skew into account. Combining the 
different approaches may result in an improved solution.
You can temporarily get the paper at http://www.grame.fr/~fober/RTESP-Wedel.pdf
-df




>> But from what I understand of RTP the same
>> thing would/could happen if the protocols are switched. 
>
>Yes, using RTP isn't about getting QoS for free -- 
>
>BTW, some LAD-folk may not be aware that sfront networking:
>
>http://www.cs.berkeley.edu/~lazzaro/nmp/index.html
>
>uses RTP for MIDI, we presented our Internet-Draft at IETF
>52 in Salt Lake a few weeks ago:
>
>http://www.ietf.org/internet-drafts/draft-lazzaro-avt-mwpp-midi-nmp-00.txt
>
>and it received a good reception -- the odds are good that
>it will become a working-group item for AVT. The bulk of
>this I-D describes how to use the information in RTP to 
>handle lost and late packets gracefully in the context of
>network musical performance using MIDI ... 
>
>   --jl
>
>-
>John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
>lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
>-






Re: [linux-audio-dev] Real-Time IPC benchs

2001-09-05 Thread Dominique Fober

Hi,

>
>in the multiclient page in the "next latency" box the Linux latency 
>value is 6 in the Win98 comparison case, and 12 in the Win2000 
>comparison. Which is right?
>
>--DOH! my fault, it's tested on different machines, perhaps it should be 
>marked better?

Yes: comparisons between pairs of systems are generally made on different machines. See 
http://www.grame.fr/Research/IPCBenchs/implementation.html#stations for the machine 
descriptions.

>
>Also the text for the Win98 test has a cut/paste error from the win2000 
>test.

Right! It's now corrected.

>As for Linux marketing material, was the Linux kernel low-latency 
>patched (doesn't look that way)? (not that I know if it makes any 
>difference?).
>

No low-latency patch was applied. As mentioned in the abstract, I wanted to measure 
real-world performance, i.e. the performance of stock systems with no particular 
improvement or setup. However, I suspect that the low-latency patch would only improve the 
"busy" benchmark results, as the other measurements depend only on the scheduling policy 
and context switches. But certainly, it would be very interesting to compare results 
between kernels with and without the low-latency patch. The source code is available, and 
if someone runs such a benchmark, please send me the results.

Dominique

>
>Regards
>/Robert
>
>
>Dominique Fober wrote:
>
>>In order to evaluate a possible important architecture change for the MidiShare
>>kernel developped at Grame (http://www.grame.fr/MidiShare/), we have measured 
>>inter processus communication (IPC) real-time performances on different operating 
>>systems, including GNU/Linux, Windows 98, 2000, NT 4.0 and MacOS X. 
>>The adopted point of view is based on a client/server model.
>>Results can be viewed and downloaded at http://www.grame.fr/Research/IPCBenchs/
>>
>>--
>>Dominique Fober   <[EMAIL PROTECTED]>
>>--
>>GRAME - Centre National de Creation Musicale -
>>9 rue du Garet  69001 Lyon France
>>tel:+33 (0)4 720 737 06fax:+33 (0)4 720 737 01
>>
>>






[linux-audio-dev] Real-Time IPC benchs

2001-09-04 Thread Dominique Fober

In order to evaluate a possible major architecture change for the MidiShare
kernel developed at Grame (http://www.grame.fr/MidiShare/), we have measured 
inter-process communication (IPC) real-time performance on different operating 
systems, including GNU/Linux, Windows 98, 2000, NT 4.0 and MacOS X. 
The adopted point of view is based on a client/server model.
Results can be viewed and downloaded at http://www.grame.fr/Research/IPCBenchs/
