Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-15 Thread Nelson Posse Lago

On Thu, Jun 13 2002 at 11:55:11am +0200, Men Muheim wrote:
  http://magic.gibson.com/
 Looks interesting. Thanks for the info. 
 [...]
 The discussion here should not be about the network but about an audio
 API which is independent of the underlying network stack.
 [...]
 I think the first thing to do is to define an audio API that reflects
 network capabilities and design the corresponding audio library / server.
 Maybe the server could be based on some of the JACK technology. Only after
 that does the network driver come into play.

Hi,

I think what I intend to do is somewhat related to this. I'm not sure if
what you are talking about are systems for synchronous or asynchronous
data transfer. As for asynchronous, I suspect that (as pointed out by
Steve Harris) MAS and RTP are good starting points.

I intend to write (RSN - I turned it into a school assignment) a remote
effects processor: suppose I'm running a JACK graph where I have a lot of
LADSPA plugins running, and I want to add another one but my hardware
can't keep up; I'd like to be able to offload some of the work to another
machine. So, I'll create a host that simply grabs the data fed by jack,
sends it to be processed on a different machine, gets it back and feeds it
back to jack. To achieve this, we simply send the data during one jack
cycle and receive the processed data on the next cycle (which adds one
jack period to the total latency). I want this to work over
ethernet, probably with UDP.
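
In rough C, the process callback would look something like this (the UDP
socket setup, port registration, the fixed 512-frame period and the lack
of packet sequencing are assumptions, just to illustrate the idea):

/* Sketch only: send this cycle's input to a remote host over UDP and
 * play back whatever that host returned during the previous cycle,
 * at the cost of one extra period of latency.  Socket/port setup,
 * error handling, sequencing and MTU issues are ignored. */
#include <jack/jack.h>
#include <sys/socket.h>
#include <string.h>

#define NFRAMES 512                       /* assumed JACK period size */

static jack_port_t *in_port, *out_port;   /* registered elsewhere */
static int sock;                          /* UDP socket, already connect()ed */
static jack_default_audio_sample_t prev[NFRAMES]; /* block returned last cycle */

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

    /* 1. ship this cycle's input to the remote effects host */
    send(sock, in, nframes * sizeof(*in), 0);

    /* 2. output the block that came back during the previous cycle */
    memcpy(out, prev, nframes * sizeof(*out));

    /* 3. fetch the remotely processed block for the next cycle;
     *    MSG_DONTWAIT keeps the realtime callback from blocking */
    if (recv(sock, prev, sizeof(prev), MSG_DONTWAIT) != sizeof(prev))
        memset(prev, 0, sizeof(prev));    /* packet late or lost: play silence */

    return 0;
}

The remote end would just run the LADSPA chain on each received block and
send the result straight back.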

This would be a proof-of-concept implementation. If it works, I'd like to
implement a more general idea to do this on top of JACK, maybe even a
patch to JACK; this would allow us, for instance, to have all the UI work
done on one machine, all the disk I/O work done on 2-3 other machines, all
the DSP work spread over a bunch of machines and all the audio I/O done on
a single machine. Given that older hardware is extremely cheap and might
be able to do some parts of this work very well, I believe this would
offer an interesting setup to work with. In fact, the system may be made
smart so that different paths on the jack graph are treated in
parallel: this way, we might even be able to send and receive the data
through the network within a single jack cycle.

As Steve Harris also pointed out, a system like this can only output data
from a single machine, since everything needs to run in sync with the
soundcard; compensating for the inaccuracies doesn't seem adequate
in this scenario, which is geared IMHO to a recording studio, where plugin
processing power AND absolute audio quality are mandatory. However, maybe
adding the possibility of multiple soundcards on different machines with
drift compensation is an interesting idea and useful in other setups.

If the proof-of-concept implementation goes reasonably well, I intend to
pursue this further and make this the subject of my master's thesis.

As for the patent of Magic, I didn't read the specs, but from the
description of what Magic is, I guess this goes like:

Tech: Hey, what if we use a network like ethernet to send data around
between digital equipment?
Boss: Great idea! Let's patent it.

Needless to say, that's ridiculous. The only original idea here is that
the data is audio, which is as original an idea as transposing the Art of
Fugue to a brass quartet.

See ya,
Nelson



RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Men Muheim

Maybe the word "latency" confused some people here. Latency is
sometimes used for the OS scheduling time, or for a network packet
delay, or for the audio buffer size.

In my case I meant the delay from the analogue input signal to
the analogue output signal.

   Nonsense! What about tubas?
   I admit a guitar would feel pretty awful w/ 10 ms latency,
 
  And yet electric guitarists do it all the time. Ever stood 10 feet away
  from your amp? It's no big deal, you get used to it.
 
 That's true, but doesn't match my experience of trying to play with 512
 sample buffers... although that probably doesn't equate to 512 samples of
 latency, it's probably more.

It is more indeed! 

An ALSA audio interface normally uses two double-buffers. One for the
input and one for the output. 512 samples is the size of one half of the
double buffer. Therefore the minimum delay in your test was 2*512/48kHz
= 21.3ms. Usually the ADC/DAC adds another 1-2ms.

Besides that, you probably used loudspeakers for your test, and therefore
the acoustic delay needs to be added.
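
In numbers (the 1.5ms converter figure and a 2m listening distance below
are assumed values, just to show the order of magnitude):

#include <stdio.h>

int main(void)
{
    const double rate = 48000.0;
    double buffer_ms    = 2 * 512 / rate * 1000.0;  /* two 512-sample halves: 21.3 ms */
    double converter_ms = 1.5;                      /* ADC/DAC, assumed               */
    double acoustic_ms  = 2.0 / 343.0 * 1000.0;     /* ~5.8 ms for 2 m to the speakers */

    printf("total: %.1f ms\n", buffer_ms + converter_ms + acoustic_ms); /* ~28.7 ms */
    return 0;
}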

 I'll have to try it with a hardware delay line.

Yes! Great idea!





RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Men Muheim

 BTW, as I keep moaning, I think network audio is an important next step in
 LAD development and ideally could be combined with the kind of step required
 to get JACK firmly off the ground. I'm still plugging LADMEA ideas
 (www.ladspa.org/ladmea/).

Thank you for getting back to the original topic!

Regards, Men





Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Joern Nettingsmeier

John Lazzaro wrote:
 
  You certainly can't play an instrument with 10ms latency.
 
 See:
 
 http://ccrma-www.stanford.edu/groups/soundwire/delay_p.html
 http://www-ccrma.stanford.edu/groups/soundwire/delay.html
 http://www-ccrma.stanford.edu/groups/soundwire/WAN_aug17.html

if i read this correctly, it's about latency wrt _another_player_. all
trained ensemble musicians are easily able to compensate for the rather
long delays that occur on normal stages.  not *hearing_oneself_in_time*
is a completely different thing. if i try to groove on a softsynth, 10
ms response time feels ugly on the verge of unusable (provided my
calculations and measurements on latency are correct), and i'm not even
very good.

let a drummer play an electronic trigger that does not make any sound by
itself, and feed him a triggered synth drum over his headphones with 5
ms latency, and he will kill you. his/her body control will be off to
hell in a handbasket if the actual motion and the sound are not totally
in sync, which means the drumtrack will be garbage and the drummer will
suffer from increased muscular strain.

as some people mentioned, some instruments have a long natural
latency, so the players have learnt to compensate, and the latency is
part of the feel. but then, this is not true for most percussive or
plucked instruments.

i need to read up on the haas effect, but i doubt it is a constant. i
bet you that a bebop drummer swinging at 380 bpm will develop a much
higher aural time resolution than joe average bathroom singer. 
if we are talking pro audio, we must deliver a good feel for the
really badass musicians, not just for the guys who step-sequence their
stuff with one finger on the keyboard.


jörn



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Joern Nettingsmeier

Men Muheim wrote:
 
  That's true, but doesn't match my experience of trying to play with 512
  sample buffers... although that probably doesn't equate to 512 samples of
  latency, it's probably more.
 
 It is more indeed!
 
 An ALSA audio interface normally uses two double-buffers. One for the
 input and one for the output. 512 samples is the size of one half of the
 double buffer. Therefore the minimum delay in your test was 2*512/48kHz
 = 21.3ms. Usually the ADC/DAC adds another 1-2ms.

now that i read this, i probably need to correct my comfortable
latency estimate by a few msec, but the basic point still stands.

btw, when i do P.A. jobs, i have taken to setting up delay lines by ear,
and only measure the distance afterwards to check. guess what? i'm
always correct within 3 msec, a little worse if the room has a lot of
reverb.
it's not hard to hear. at more than 5-8 msec, you feel that beats become
fuzzy, and below that, you can judge by sound coloration (comb filter).
granted, it's a slightly different situation, since you have both a
delayed and an undelayed signal combined, but it goes to show you can
hear quite a bit. i know a drummer who claims to listen for the sound
color of snare and ride together to check if he's in sync...



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Martijn Sipkema

 if i read this correctly, it's about latency wrt _another_player_. all
 trained ensemble musicians are easily able to compensate for the rather
 long delays that occur on normal stages.  not *hearing_oneself_in_time*
 is a completely different thing. if i try to groove on a softsynth, 10
 ms response time feels ugly on the verge of unusable (provided my
 calculations and measurements on latency are correct), and i'm not even
 very good.

Well, I just did some guitar playing through my boss gx700 with various
delay settings. Indeed 10ms delay is noticeable, but I wouldn't go as
far as ugly/unusable. You could just feel there was a slight delay. But
really anything below 10ms is realtime. I don't think a 5ms delay is
a problem for playing any instrument. I don't know if the gx700 (or
effect units in general) has an additional delay from adc/dac or the
way it processes.

 let a drummer play an electronic trigger that does not make any sound by
 itself, and feed him a triggered synth drum over his headphones with 5
 ms latency, and he will kill you. his/her body control will be off to
 hell in a handbasket if the actual motion and the sound are not totally
 in sync, which means the drumtrack will be garbage and the drummer will
 suffer from increased muscular strain.

This already happens with 5ms??

 as some people mentioned, some instruments have a long natural
 latency, so the players have learnt to compensate, and the latency is
 part of the feel. but then, this is not true for most percussive or
 plucked instruments.

Indeed when playing pads on a synth it would be hard to even notice
a 40ms delay.

I once read somewhere that some digital mixers have a 6ms delay.

--martijn







Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread John Lazzaro

 Joern Nettingsmeier writes

 not *hearing_oneself_in_time* is a completely different thing.

Yes, I agree it's a completely different thing, but ...

 if i try to groove on a softsynth, 10
 ms response time feels ugly on the verge of unusable (provided my
 calculations and measurements on latency are correct), and i'm not even
 very good.

This doesn't match my own personal experience -- I can play with 10ms
of constant latency in the loop, and not be terribly bothered by it.
I notice it, but I can play through it, and I didn't grow up playing
instruments with big delays ... just the normal collection of 
Wurlitzers, acoustic and electro-acoustic ...

But that's just me ... maybe I'm an outlier ...

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Martijn Sipkema

 Going back to the issue of latency, it should be pointed out that while 
 it might not be a big deal if your softsynth takes 25 ms to trigger, 

It is unless you only use it with a sequencer.

 latency on the PCI bus is a big problem.  If you can't get data from 
 your HD (or RAM)

From memory, I think. (Is it possible for an audio card to read directly
from the hard disk??)

 to your soundcard fast enough, you _will_ hear dropouts 
 in the audio.  The more tracks, the higher the sampling rate, and the 
 longer the wordlength, the greater the problem becomes.  I think this is 
 what people mostly are worried about when they talk about latency 
 problems.

Yes, the PCI bus adds some extra latency. But that's not a problem.
The bandwidth, I think, will become a problem, but only with very large
numbers of channels at a high sample rate.

A 25 ms dropout in the audiostream is quite noticeable and 
 annoying.  The discussions (which I did contribute to) on latency in 
 acoustic instruments touched on the subject that trained performers of 
 those instruments have learned to compensate for the inevitable delay 
 between articulation and sound.  Bus latency, however, is a completely 
 different story

If the bus has sufficient bandwidth there should be no dropouts. 25ms is
an extremely long period for the PCI bus. I'm not sure what the normal
buffer size for PCI transfers is, but it is on the order of a few hundred
usec worth of samples; in the case of the audiowerk8 (Philips SAA7146) it is
a 24 Dword buffer per DMA channel (8 bytes per audio frame).
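
For reference, those figures work out as follows (48kHz assumed):

#include <stdio.h>

int main(void)
{
    const int dwords = 24, bytes_per_frame = 8;  /* audiowerk8 DMA buffer, as quoted */
    const double rate = 48000.0;
    int frames = dwords * 4 / bytes_per_frame;   /* 96 bytes -> 12 frames */

    printf("%d frames = %.0f usec\n", frames, frames / rate * 1e6); /* 250 usec */
    return 0;
}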

--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Lamar Owen

On Friday 14 June 2002 01:58 pm, Richard C. Burnett wrote:
  Which is enough to reconstruct the sine wave if the output is through a
  proper post-DAC filter.

 amplitude modulation from the different sample points.  Have you ever
 plotted a sine wave where you don't pick enough points to show the
 real shape?  It's the same principle.

The idea is that you get the integrated value of the amplitude of the sine 
wave, since a sine wave always has the same shape.  But the amplitude, at the 
Nyquist frequency, cannot change.  Yes, I really said that. If the amplitude 
of the sine wave changes, you get an upper sideband above the Nyquist rate 
that you cannot sample -- of course you also get a lower sideband that you 
can sample, but then the recovered envelope is distorted.  Amplitude changes 
at the Nyquist frequency violate Nyquist's theorem.  Thus the Nyquist 
frequency itself is an asymptote and cannot be accurately reproduced except 
in the steady state.

It follows that as the sampled frequency approaches the Nyquist frequency, its 
rate of amplitude change must decrease due to that stubborn upper 
sideband.  You end up with aliasing distortion from the upper sideband that 
passes the Nyquist filtering at the A/D, producing aliasing components 
reflected down in frequency.  And if you can hear 20kHz (I barely can), you 
can detect the difference in attack times for sharply attacking instruments, 
which translates as noise, given a 44.1ksps rate.

 But the higher your sampling rate, the more you are guaranteed not to miss
 a max point or a min point, because when the signal is in those ranges,
 it's able to grab many points.

That's what the post-filtering is for.  The post-filter acts as an electronic 
flywheel that fills in the points between samples.  The difficulty is 
building accurate output filters -- and that also translates to the cost of 
the equipment.

 Actually I read equal to in one of my text books, but that is beside the
 point.  It means that you can 'detect' frequencies at that frequency, but
 it doesn't mean you can accurately reproduce them.  Even at say 18kHz
 sampling at 40kHz you would see essentially a beat frequency (I think that
 is the term) as the sampling point is different in each cycle, and not
 enough to capture the max/min every time.

The flywheel effect of the post-filter eliminates that.  What you do hear is 
the aliasing distortion from the loss of the upper sideband of the complex 
component of the sampled frequency.  A change in amplitude of any given 
frequency translates to a static 'carrier' at the frequency of interest and 
two sidebands containing the change information.  This is AM broadcast theory 
here.  (I am a broadcast engineer by profession).
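
A small numerical illustration of those sidebands (the 20kHz carrier and
3kHz envelope below are made-up values, chosen only to show the effect):

#include <stdio.h>

int main(void)
{
    const double fs = 44100.0, nyquist = fs / 2.0;
    const double fc = 20000.0;   /* tone near the Nyquist frequency     */
    const double fm = 3000.0;    /* rate at which its amplitude changes */

    /* cos(2*pi*fm*t) * sin(2*pi*fc*t)
     *   = 0.5*sin(2*pi*(fc - fm)*t) + 0.5*sin(2*pi*(fc + fm)*t) */
    printf("lower sideband %.0f Hz: %s Nyquist (%.0f Hz)\n",
           fc - fm, fc - fm < nyquist ? "below" : "above", nyquist);
    printf("upper sideband %.0f Hz: %s Nyquist (%.0f Hz)\n",
           fc + fm, fc + fm < nyquist ? "below" : "above", nyquist);
    return 0;
}

The upper sideband lands above the Nyquist frequency and cannot be
represented, which is exactly the envelope distortion described above.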

 I've done the test.  I listened to a CD recorded SACD and a standard CD,
 same CD player.  The difference was very audible, and a lot of it to me
 was crispness in the upper frequency range.

Loss of the upper sideband, as the frequency-domain signal acquires components 
approaching the Nyquist frequency, might possibly explain this.

 My research indicated that
 clock jitter is the number one cause of problems.  This is why there are
 $1000 CD players that do sound different, and their specs usually talk
 about the jitter.

I wholeheartedly agree with that.  Jitter causes unmusical (additive instead 
of multiplicative) frequency transposition, which in turn creates FM 
sidebands in the decoded audio, perceived as 'grunge'.  Which is why I spec 
broadcast-grade CD players here -- the difference is audible.  Plus I get 
balanced outputs for RF resistance... :-)
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread STEFFL, ERIK *Internet* (SBCSI)

 -Original Message-
 From: Lamar Owen [mailto:[EMAIL PROTECTED]]
...
 The idea is that you get the integrated value of the amplitude of the
 sine wave, since a sine wave always has the same shape.  But the
 amplitude, at the Nyquist frequency, cannot change.  Yes, I really said
 that.  If the amplitude of the sine wave changes, you get an upper
 sideband above the Nyquist rate that you cannot sample -- of course you
 also get a lower sideband that you can sample, but then the recovered
 envelope is distorted.  Amplitude changes at the Nyquist frequency
 violate Nyquist's theorem.  Thus the Nyquist frequency itself is an
 asymptote and cannot be accurately reproduced except in the steady
 state.

  nyquist theorem doesn't say that you can get the EXACT same signal if your
sampling frequency is twice the highest frequency of the signal. It says
(roughly) that you wouldn't miss any bumps (of frequency half of your
sampling frequency), but you can't be sure about the shape of those bumps
(amplitude).

  which is basically what you say, it's just that there is no violation of
the nyquist theorem there...

erik



RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread Bob Colwell

Nyquist says that if you sample a repeating waveform, ANY repeating
waveform, at a sampling rate greater than twice the highest
frequency partial present in the waveform, you can exactly duplicate
that waveform. Yes, exactly duplicate the original.

Lamar's comments are correct, but then, what difference is there 
between amplitude modulation at the sampling frequency and a partial
at that frequency? Both contain information that cannot be captured
by the stated sampling frequency. Why would you amplitude modulate
anything that fast?

-BobC




RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-14 Thread STEFFL, ERIK *Internet* (SBCSI)

 -Original Message-
 From: Bob Colwell [mailto:[EMAIL PROTECTED]]
 Sent: Friday, June 14, 2002 4:39 PM
 To: [EMAIL PROTECTED]
 Subject: RE: [linux-audio-dev] RFC: API for audio across network -
 inter-host audio routing
 
 
 Nyquist says that if you sample a repeating waveform, ANY repeating
 waveform, at a sampling rate greater than twice the highest

  not sure what you mean by 'repeating', it does not have to be periodic, it
just has to be continuous.

 frequency partial present in the waveform, you can exactly duplicate
 that waveform. Yes, exactly duplicate the original.

  in theory (i.e. not exactly; it says that you can reconstruct a continuous
function)

  reality brings two important factors:

  1) the sample we have is not of infinite length - error is introduced
(see http://www.digital-recordings.com/publ/pubneq.html) - nyquist theorem
is for functions (with no beginning and no end).

  2) real signals are not really limited to exactly half of the sample
frequency; the higher frequencies (above half the sampling frequency) are
'transformed' into lower frequencies - so frequencies that you would not
hear in the original sound would be in the hearing range in the reconstructed
signal (aliasing).

  (there might be other reasons why the exact signal cannot be reconstructed)

  I don't have time to search for it, but I recall something about how
'exact' is not exactly what one would expect, something along the lines that
you don't miss any frequency (when you do a fourier transform) but that the
signal you can reconstruct doesn't have exactly the same shape, or something
like that...

erik




Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Dominique Fober

Peter Hanappe wrote:

I wondered if it would be possible to write a JACK driver (i.e.
replacement for current ALSA driver) that would stream the audio over
a network. The driver is a shared object, so it's technically possible.
I was thinking of the timing issues.


Concerning the timing issues, one of the problems raised by audio transmission is the
clock skew between the audio cards of the different stations involved in the transmission.
I've done some work on this topic. It's available as a technical report at
ftp://ftp.grame.fr/pub/Documents/AudioClockSkew.pdf
Hoping that it may help,


--
Dominique Fober   [EMAIL PROTECTED]
--
GRAME - Centre National de Creation Musicale -
9 rue du Garet  69001 Lyon France
tel:+33 (0)4 720 737 06 fax:+33 (0)4 720 737 01
http://www.grame.fr





RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Men Muheim

Thanx for the info. Unfortunately it is not really clear to me
what the project does. I think the difference from my approach is that I
am talking about a LAN and low latency (10ms).

Men

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:linux-audio-dev-[EMAIL PROTECTED]] On Behalf Of Steve Harris
 Sent: Wednesday, 12 June 2002 20:03
 To: [EMAIL PROTECTED]
 Subject: Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing
 
 On Wed, Jun 12, 2002 at 06:25:14 +0200, Men Muheim wrote:
  Has anyone ever thought of implementing a library for transfer of audio
  across networks? The API could be similar to JACK but would allow
 
 Have a look at MAS, they had an impressive demo at LinuxTag:
 http://mediaapplicationserver.net/
 
 It works with X but doesn't require it IIRC.
 
 That API is not that similar to jack, but hey.
 
 - Steve





RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Men Muheim

 Concerning the timing issues, one of the problems raised by audio
 transmission is the clock skew between the audio cards of the different
 stations involved in the transmission.
 I've done some work on this topic. It's available as a technical
 report at
 ftp://ftp.grame.fr/pub/Documents/AudioClockSkew.pdf
 Hoping that it may help,

I am aware of clock skew. Therefore we either need synchronization as
available with IEEE1394, or clock skew compensation as you describe
in your paper.
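
Roughly, a software compensation would look something like this (only a
sketch of the general idea, not the method from your paper; all names
below are made up):

#include <stdint.h>

static double skew = 1.0;  /* estimated ratio: remote sample clock / local sample clock */

/* Call once per period with running frame counters taken from each clock. */
void update_skew(uint64_t remote_frames, uint64_t local_frames)
{
    if (local_frames == 0)
        return;
    double instant = (double)remote_frames / (double)local_frames;
    skew += 0.01 * (instant - skew);   /* slow first-order smoothing */
}

/* The smoothed ratio would then drive a variable-ratio resampler so the
 * receiver consumes audio exactly as fast as the sender produces it. */
double resample_ratio(void)
{
    return skew;
}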

- Men


PS: Where and when was your paper published? I might like to cite
it.





RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Men Muheim

 Indeed I have and it is, in fact, what I plan to be spending most of
 this summer working on.  I don't know if you've seen gibson's magic, but
 it sounds very similar to what you're doing:
 
 http://magic.gibson.com/

Looks interesting. Thanks for the info. 

Magic seems to be a network-like replacement for ADAT/MADI, if I
understand the specification correctly. In my opinion this is
still too audio-sample oriented. I would like to have one network which
allows IP communication as well as audio routing. IEEE1394b seems to be
a good choice for that. It allows sample synchronization, provides QoS
and supports UTP cabling up to 100m.

The discussion here should not be about the network but about an audio
API which is independent of the underlying network stack.

  During the last year I have implemented a library that does such a thing
  for a custom, synchronous network.
 
 Perhaps we can help each other?  My intention was to write either an
 alsa driver, or a separate magic driver, that deals with all the magic
 network stuff, while exporting interfaces to alsa or jack, or whatever
 api fancies using the magic network.

I think the first thing to do is to define an audio API that reflects
network capabilities and design the corresponding audio library / server.
Maybe the server could be based on some of the JACK technology. Only after
that does the network driver come into play.

What do you think?

 I'm eager to see your code :)

I guess just publishing my code would not help too much without any
documentation or explanation. A minor problem is that the code is not my
property but belongs to the Swiss Federal Institute of Technology as I
am employed by them. But I am sure I could convince them that making the
code open source would help more than just burying it in an archive :-)
Give me some time to check that...

Regards,

Men





Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Steve Harris

On Thu, Jun 13, 2002 at 11:45:13 +0200, Men Muheim wrote:
 Thanx for the info. Unfortunately it is not really clear to me
 what the project does. I think the difference from my approach is that I
 am talking about a LAN and low latency (10ms).

MAS works over LANs, and should be capable of 10ms latency, which isn't
very low, BTW. You certainly can't play an instrument with 10ms
latency.

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Lamar Owen

On Thursday 13 June 2002 09:29 am, Charlieb wrote:
 On Thu, 13 Jun 2002, Steve Harris wrote:
  MAS works over LANs, and should be capable of 10ms latency, which isn't
  very low, BTW. You certainly can't play an instrument with 10ms
  latency.

 Nonsense! What about tubas?
 I admit a guitar would feel pretty awful w/ 10 ms latency, an oboe or
 trumpet only feels a little slow to speak, and a tuba, well, that
 would be an exceptionally responsive low brass...

French Horn.  Its bore is as long as a tuba's.  I have played horn before, and 
there is a definite, barely detectable delay between initiation of the note 
and the perception of the note (which is both by ear and by hand in the case 
of the horn).  The long bore is one reason the horn's attack is quite mellow.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Martijn Sipkema

 You certainly can't play an instrument with 10ms
 latency.

in 10ms sound travels somewhat more than 3 meters.
that's why i use nearfield monitors :)

--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Steve Harris

On Thu, Jun 13, 2002 at 01:29:11 +, Charlieb wrote:
  MAS works over LANs, and should be capable of 10ms latency, which isn't
  very low, BTW. You certainly can't play an instrument with 10ms
  latency.
 
 Nonsense! What about tubas?
 I admit a guitar would feel pretty awful w/ 10 ms latency, an oboe or
 trumpet only feels a little slow to speak, and a tuba, well, that
 would be an exceptionally responsive low brass...

OK, I admit I have never tried playing a tuba with 10ms latency (at all,
in fact), but I'm not sure how you would arrange it anyway! Though... an
electric tuba is an intriguing thought ;)

I would guess that a network audio system for low brass instruments is a
little specialised.

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Steve Harris

On Thu, Jun 13, 2002 at 09:43:44 -0400, Lamar Owen wrote:
[latency and instruments]
 there is a definite, barely detectable delay between initiation of the note 
 and the perception of the note (which is both by ear and by hand in the case 
 of the horn).  The long bore is one reason the horn's attack is quite mellow.

I would describe 10ms as more than barely detectable; it's very noticeable.

5ms (with my inexpert guitar noodling) feels uncomfortable, but is
bearable. YMMV.

5ms is equivalent to a ~1.7m horn.

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Barry Short

  You certainly can't play an instrument with 10ms latency.
  


Really? You might want to check my math on this, but if the speed of sound 
in air at 75 degrees F is about 1135 feet/second, then it takes about 
0.00088 seconds, or 0.88 ms, for the sound to travel 1 foot. So in 10 ms 
the sound would only travel about 11.35 feet. If you were playing an 
electric guitar and the amp was 11 feet away, there would be a 10 ms delay 
and I don't think you would have a problem. I've done it many, many times 
without a problem anyway :) 


Joe




There's a psychoacoustic phenomenon known as the Haas effect, which states 
that a direct sound and its reflections (echoes) are perceived as a single 
sound by the brain, where the time difference between the two is less than 
about 30ms. So if the brain can't distinguish between sounds at this level, I 
think you'd get away with a 10ms delay in your instrument without the rest of 
the band calling you names :-)
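
Both figures are easy to re-check, using the 1135 ft/s value quoted above:

#include <stdio.h>

int main(void)
{
    const double c = 1135.0;                        /* ft/s, speed of sound as quoted */
    printf("11 ft amp distance: %.1f ms\n", 11.0 / c * 1000.0);  /* ~9.7 ms  */
    printf("30 ms Haas window:  %.1f ft\n", 0.030 * c);          /* ~34.1 ft */
    return 0;
}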


Barry
 
-- 
[EMAIL PROTECTED]



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread John Lazzaro

 You certainly can't play an instrument with 10ms latency.

See:

http://ccrma-www.stanford.edu/groups/soundwire/delay_p.html
http://www-ccrma.stanford.edu/groups/soundwire/delay.html
http://www-ccrma.stanford.edu/groups/soundwire/WAN_aug17.html

These experiments show the limits of musical latency compensation --
the full story is complicated but in general compensating for 10ms
is quite doable ...

Also, to address another part of this thread, I'd really recommend
using RTP as an underlying technology -- don't underestimate the
amount of thought and work that has been put into these standards
over the past decade, start by reading:

http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-new-11.ps

and then browse the I-D's and RFC's at:

http://www.ietf.org/html.charters/avt-charter.html 

--jl

-
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Lamar Owen

On Thursday 13 June 2002 11:59 am, dgm4 wrote:
 Lamar Owen wrote:
 French Horn.  Its bore is as long as a tuba's.  I have played horn before,
  and there is a definite, barely detectable delay between initiation of
  the note and the perception of the note (which is both by ear and by hand
  in the case of the horn).  The long bore is one reason the horn's attack
  is quite mellow.

 To say nothing of pipe organs, where the latency can be nearly half a
 second (!) in a large church.

Yeah, but organists are trained to compensate for that.  That's one reason 
there are so many manuals.  But it does make the music interesting to play.  
I know a concert organist who is rather accomplished; I didn't even think 
about his techniques when I mentioned horn.  When one stop is half a second 
removed from another, odd and interesting effects can be produced by one who 
knows the instrument and the acoustics of the venue well enough.

But I think most musicians don't want to have to deal with the pipe organ 
techniques.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Fred Gleason

On Thursday 13 June 2002 13:15, xk wrote:

 I'm not a professional musician, but a 25 ms latency makes me more than
 happy.

It really depends upon the specific application. For some problem-spaces 
(radio automation, for example), latency is just not a very important issue.  
For others (e.g. live performance sound reinforcement), it can be critical.

Cheers!


|-|
|Frederick F. Gleason, Jr.|WAVA Radio - 105 FM |Voice: 1-(703)-807-2266   |
| Director of Engineering |1901 N. Moore Street|  FAX: 1-(703)-807-2245   |
| |Arlington, VA 22209 |  Web: HTTP://www.wava.com|
|-|
|   It is one thing to praise discipline, and another to submit to it.|
|  -- Cervantes   |
|-|



RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Richard W.E. Furse

In my experience, audible separation of acoustic events normally happens
around 20ms (ignoring phase effects). Most instruments (including guitar)
are entirely playable with this sort of delay.

The pipe organ example is a good one - there is a huge variety of delay on
pipe organs, probably beyond the half second (I don't have the figures, but
there's often a significant delay between keypress and note as well as the
acoustic delay). I'm fine with small delays, but fast passages becomes
continuously less comfortable as the delay increases. The same is true for
other instruments such as guitar.

BTW, as I keep moaning, I think network audio is an important next step in
LAD development and ideally could be combined with the kind of step required
to get JACK firmly off the ground. I'm still plugging LADMEA ideas
(www.ladspa.org/ladmea/).

--Richard




Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Martijn Sipkema

 How about the 1.0-1.5 ms latencies that everybody tries to obtain (or already
 has) in both the Linux and Win worlds? That always made me wonder if this
 isn't just hype like the 192 kHz issue.

 I'm not a professional musician, but a 25 ms latency makes me more than
 happy.

I would say that for playing a software synthesizer in real time, a latency of
10ms or less is ok. I'm not sure what latency most hardware synthesizers
have, but it will probably not be much better, sometimes even more than
10ms. The transmission of a note on/off MIDI command alone will use about
1ms (3 * 320usec, no running status).
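
That figure follows directly from the MIDI wire format (10 bits per byte
at 31250 baud):

#include <stdio.h>

int main(void)
{
    const double usec_per_byte = 10.0 / 31250.0 * 1e6;  /* start + 8 data + stop = 320 usec */
    printf("note on/off, 3 bytes: %.0f usec\n", 3 * usec_per_byte); /* ~960 usec, i.e. ~1 ms */
    return 0;
}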


--martijn






Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Bob Ham

On Thu, 2002-06-13 at 01:39, Dan Hollis wrote:

 We hold a patent on MaGIC

Curious.

-- 
Bob Ham: [EMAIL PROTECTED]  http://pkl.net/~node/

My music: http://mp3.com/obelisk_uk
GNU Hurd: http://hurd.gnu.org/



[OT] RE: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Tobias Ulbricht


 The pipe organ example is a good one - there is a huge variety of delay on
 pipe organs, probably beyond the half second (I don't have the figures, but
 there's often a significant delay between keypress and note as well as the
 acoustic delay). I'm fine with small delays, but fast passages become
 progressively less comfortable as the delay increases. The same is true for

getting off-topic now. But have you ever measured the latency between
the organ and the audience singing in the church? It makes me sick of
church songs, but nevertheless *some* people like it that way...
Well, what latency is OK obviously depends on the instrument.

tobias.




Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Paul Winkler

On Thu, Jun 13, 2002 at 01:29:11PM +, Charlieb wrote:
 
 
 
 
 On Thu, 13 Jun 2002, Steve Harris wrote:
 
  On Thu, Jun 13, 2002 at 11:45:13 +0200, Men Muheim wrote:
   Thanx for the info. Unfortunately it is not really clear to me
   what the project does. I think the difference from my approach is that I
   am talking about a LAN and low latency (10ms).
 
  MAS works over LANs, and should be capable of 10ms latency, which isn't
  very low, BTW. You certainly can't play an instrument with 10ms
  latency.
 
 Nonsense! What about tubas?
 I admit a guitar would feel pretty awful w/ 10 ms latency,

And yet electric guitarists do it all the time. Ever stood 10 feet away
from your amp? It's no big deal, you get used to it.

--

Paul Winkler
home:  http://www.slinkp.com
Muppet Labs, where the future is made - today!



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Steve Harris

On Thu, Jun 13, 2002 at 12:41:52 -0400, Paul Winkler wrote:
  Nonsense! What about tubas ?
  I admit a guitar would feel pretty awful w/ 10 ms latency,
 
 And yet electric guitarists do it all the time. Ever stood 10 feet away
 from your amp? It's no big deal, you get used to it.

That's true, but doesn't match my experience of trying to play with 512
sample buffers... although that probably doesn't equate to 512 samples of
latency, it's probably more.

I'll have to try it with a hardware delay line.

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-13 Thread Bob Ham

This prompted me to look at my course notes, and here's a quote:

A patent may be granted for an invention only if the
following conditions are satisfied:

 The invention is new;
 It involves an inventive step;
 It is capable of industrial application;
 The grant for a patent for it is not excluded by
subsections (2) and (3) below...

The exclusions referred to include:

 A scheme, rule or method for performing any mental
act, playing a game or doing business, or a program
for a computer;
 However, the Act only applies to a patent or
application for a patent relating to a thing _as such_.

I don't know what else they can patent; I certainly wouldn't imagine a
protocol would be patentable.  I hope there's not something else.

Any other people in the UK know anything about this?

Bob

-- 
Bob Ham: [EMAIL PROTECTED]  http://pkl.net/~node/

My music: http://mp3.com/obelisk_uk
GNU Hurd: http://hurd.gnu.org/



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-12 Thread Steve Harris

On Wed, Jun 12, 2002 at 06:25:14 +0200, Men Muheim wrote:
 Has anyone ever thought of implementing a library for transfer of audio
 across networks? The API could be similar to JACK but would allow

Have a look at MAS, they had an impressive demo at LinuxTag:
http://mediaapplicationserver.net/

It works with X but doesn't require it IIRC.

That API is not that similar to jack, but hey.

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-12 Thread Steve Harris

On Wed, Jun 12, 2002 at 07:45:51 +0200, Peter Hanappe wrote:
 I wondered if it would be possible to write a JACK driver (i.e.
 replacement for current ALSA driver) that would stream the audio over
 a network. The driver is a shared object, so it's technically possible.
 I was thinking of the timing issues.

Hi Peter,

I planned one to work with IEEE1394 (or some other isochronous system): a
jack client that runs in one jack system, connected to a jack driver in
the other, that keeps the system running in sample sync.

It does mean that you can't output audio from the slave machine. :(

- Steve



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-12 Thread Bob Ham

On Wed, 2002-06-12 at 17:25, Men Muheim wrote:
 Has anyone ever thought of implementing a library for transfer of audio
 across networks?

Indeed I have and it is, in fact, what I plan to be spending most of
this summer working on.  I don't know if you've seen gibson's magic, but
it sounds very similar to what you're doing:

http://magic.gibson.com/

 During the last year I have implemented a library that does such a thing
 for a custom, synchronous network.

Perhaps we can help each other?  My intention was to write either an
alsa driver, or a separate magic driver, that deals with all the magic
network stuff, while exporting interfaces to alsa or jack, or whatever
api fancies using the magic network.

I'm eager to see your code :)

Bob

-- 
Bob Ham: [EMAIL PROTECTED]  http://pkl.net/~node/

My music: http://mp3.com/obelisk_uk
GNU Hurd: http://hurd.gnu.org/



Re: [linux-audio-dev] RFC: API for audio across network - inter-host audio routing

2002-06-12 Thread Bob Ham

On Wed, 2002-06-12 at 21:18, Dan Hollis wrote:

 I talked to gibson directly about magic. They stated it's patented and 
 they won't give permission for open source implementation.

I wonder what they have patents on, and whether they're applicable to
the UK.

-- 
Bob Ham: [EMAIL PROTECTED]  http://pkl.net/~node/

My music: http://mp3.com/obelisk_uk
GNU Hurd: http://hurd.gnu.org/