[linux-audio-dev] low latency audio on crusoe laptop

2003-02-26 Thread jfm3
Using the attached scripted setpci commands I have gotten the audio
latency of my crusoe laptop down to 3 ms. Throughput to anything but the
audio device is worse, but who cares when you're just running one audio
synthesis/analysis process? I might crank up the pci latency of the ide
controller if I ever need to stream audio to or from disk. Results of
latencytest are posted at http://www.ouroboros-complex.org/latency.

My present kernel and alsa packages can be found at
http://www.ouroboros-complex.org/moon. These are compatible with the
Planet CCRMA project. See http://www.ouroboros-complex.org/moon/README.

I have effectively reduced the weight of my rig from 120 to 2 pounds.
Jack still doesn't work, but I can do pretty much everything in one Pd
process anyway. It's no huge MOTM system, but then again, it's not a
huge MOTM either.

Thank you everyone.

-- 
(jfm3  2838 BCBA 93BA 3058 ED95  A42C 37DB 66D1 B43C 9FD0)
#!/bin/bash

# "open up" the PCI bus by allowing fairly long bursts for all devices, increasing performance
#setpci -v -d *:* latency_timer=b0

# maximize the latency timer for the audio device, letting it transfer more data per burst and preventing buffer over/underruns
setpci -v -s 00:06.0 latency_timer=ff

# north bridge
setpci -v -s 00:00.0 latency_timer=40

# firewire
setpci -v -s 00:09.0 latency_timer=10

# ethernet
setpci -v -s 00:0b.0 latency_timer=10

# usb
setpci -v -s 00:0f.0 latency_timer=10

# ide
setpci -v -s 00:10.0 latency_timer=10

# cardbus bridge
setpci -v -s 00:12.0 latency_timer=40

# usb
setpci -v -s 00:14.0 latency_timer=10
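
For what it's worth, the latency_timer register counts PCI clocks, so a given hex value can be converted to time. A quick sanity check (the 33 MHz bus clock is an assumption; most laptop PCI buses of this era run at 33 MHz, i.e. roughly 30 ns per clock):

```shell
#!/bin/bash
# Convert a latency_timer value (hex, in PCI clocks) to time,
# assuming a 33 MHz PCI bus (~30 ns per clock).
hex=ff                     # the value given to the audio device above
clocks=$(( 0x$hex ))       # 0xff = 255 PCI clocks
ns=$(( clocks * 30 ))      # ~30 ns per clock at 33 MHz
echo "latency_timer=0x$hex -> $clocks clocks (~$ns ns)"
# Read a device's current timer back with e.g.:
#   setpci -s 00:06.0 latency_timer
```

So ff gives the audio device bursts of up to roughly 8 microseconds before it must yield the bus, while the 10 given to the other devices cuts them off after about half a microsecond.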


Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Lukas Degener


Guitar effects on a microphone are very hard to manage without uncontrollable
feedback.  But I saw it done - on a saxophone yet! - by a trio called 
Spongehead (guitar/bass, sax, and drums). The sax player used wah, echo,
and an octave divider (which on a tenor or bari gave him some very 
nice deep notes, allowing the sax to function as the bass player on 
some songs!). I don't remember if he ever used distortion.
He played through a big guitar amp, i think.

Well, and of course, let's not forget the Maestro himself: Miles Davis 
blows his horn, distorted and through a wah, i think e.g. on "Live 
Evil". It's sometimes hard to tell which lines are played by him and 
which are played by the guitar.

Lukas



Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > Because I think ALSA does too much in the kernel (and it is not
> > well documented either).
> 
> Wait a minute, why do you say that?

Because:
- I think ALSA is not that well documented.
- I'd rather see a combination of a device specific kernel driver and
a user-space driver than a common kernel interface.

> ALSA seems to do a lot less in kernel 
> space than OSS (a lot has been moved to alsa-lib), and also much code is 
> commonly shared between drivers, which is very nice.

I'm not comparing ALSA to OSS. And having user-space drivers
doesn't prevent code sharing. I just don't like the common device file
interface.

--ms






Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Kai Vehmanen
On Wed, 26 Feb 2003, Martijn Sipkema wrote:

> Because I think ALSA does too much in the kernel (and it is not
> well documented either).

Wait a minute, why do you say that? ALSA seems to do a lot less in kernel 
space than OSS (a lot has been moved to alsa-lib), and also much code is 
commonly shared between drivers, which is very nice.

--
 http://www.eca.cx
 Audio software for Linux!



Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-26 Thread Kai Vehmanen
On Thu, 27 Feb 2003, Patrick Shirkey wrote:

> But it would be very nice if I could use my usb quattro to manipulate 
> the sounds of my bandmates in realtime at low latency. I tried with ssm 
> at 64 bytes and there was noticeable lag so we couldn't do anything live.
[...]
> The best I can get out of jack is 1024 but 2048 is more reliable.

Hmm, that sounds strange. How long is a noticeable lag? :) Have you tried
with ecasound? I've personally tried a few USB audio devices and didn't
experience any unusual latency problems. Something like "ecasound -b:256
-z:nointbuf -i alsa -o alsa"  should be enough for effect-box type
real-time processing. Maybe I should test that again...

PS Although I must say, due to old USB-related work projects, I have a 
   real love-hate relationship with USB. Nowadays whenever I see someone 
   (un)plugging a USB-device, I automatically expect the machine to halt 
   completely or just plain not work. :) Fortunately there are exceptions, 
   my Rio500 and Midisport2x2 have almost changed my mind.. they actually 
   do work. :) But, but... don't take these words too seriously.

--
 http://www.eca.cx
 Audio software for Linux!



[linux-audio-dev] Re: CSL Motivation (fwd)

2003-02-26 Thread Kai Vehmanen
Let's continue the cross-post circus. :)

Does anyone here have good connections to the GNOME audio folks?  Is
gstreamer leading the whole thing, or are there others? I think it would
be great if we could at least manage to start living on the same planet
(... and maybe even someday, gasp, cooperate! >;)).

-- Forwarded message --
Date: Thu, 27 Feb 2003 00:05:04 +0200 (EET)
From: Kai Vehmanen <[EMAIL PROTECTED]>
To: KDE Multimedia <[EMAIL PROTECTED]>
Cc: Paul Davis <[EMAIL PROTECTED]>
Subject: Re: CSL Motivation

On Tue, 25 Feb 2003, Tim Janik wrote:

> and, more importantly, stefan and i wrote a paper going into the details of
> why we think a project like CSL is necessary and what we intend to achieve
> with it:

Ok, I already forwarded a few mails concerning this from lad. I'll add a
few comments of my own:

I think I understand your reasons behind CSL, and I think it (CSL) might
just be the necessary glue to unite KDE and GNOME on the multimedia front.

But what I see as a risk is that you forget the efforts and existing APIs
outside these two desktop projects. In the end, it's the applications that
count. It's certainly possible that you can port all multimedia apps that
come with GNOME and KDE to CSL, but this will never happen for the huge
set of audio apps that are listed at http://www.linuxsound.at. And these 
are apps that people (not all, but many) want to use.

A second point is that for many apps, the functionality of CSL is just not
enough. ALSA PCM API is a very large one, but for a reason. Implementing a
flexible capabilities query API is very difficult (example:  changing the
active srate affects the range of valid values for other parameters). The
selection of commonly used audio parameters has really grown (>2 channels,
different interleaving settings for channels, 20bit, 24bit, 24-in-4bytes,
24-in-3bytes, 24-in-lower3of4bytes, 32bit, 32bit-float, etc, etc); these
are becoming more and more common. Then you have functionality for
selecting and querying available audio devices and setting up virtual
soundcards composed of multiple individual cards. These are all supported
by ALSA and just not available on other unices. Adding support for all
this into CSL would be a _huge_ task.

Perhaps the most important area of the ALSA PCM API is the set of functions
for handling buffer size, interrupt frequency and wake-up parameters. In
other words, being able to set a buffer size value is not enough when writing
high-performance (low-latency, high-bandwidth) audio applications. You
need more control and this is what ALSA brings you. And it's good to note
that these are not only needed by music creation (or sw for musicians for
lack of a better term) apps, but also for desktop apps.  I have myself
written a few desktop'ish audio apps that have needed the added
flexibility of ALSA.

Now JACK, on the other hand, offers completely new types of functionality
for audio apps: audio routing between audio applications, connection
management and transport control. These are all essential for music apps,
but don't make sense in an audio i/o abstraction like CSL.

So to summarize, I really hope that you leave a possibility for these APIs
(especially ALSA and JACK) in the KDE multimedia architecture, so that it
would be possible to run different apps without the need to completely
bypass other application groups (as is the situation today with
aRts/esd/ALSA/OSS/JACK apps).

As a more practical suggestion, I see the options as:

1) A front-end API that is part of the KDE devel API
a) aRts
b) gstreamer
c) CSL
d) Portaudio
e) ... others?
2) Backend server that is user-selectable (you have a nice GUI 
   widget for selecting which to use)
a) aRts (current design, again uses OSS/ALSA)
b) JACK (gstreamer already has support for it)
c) ALSA (dmix or aserver)
d) MAS
e) ... others?

All official (part of the base packages) KDE+GNOME apps would use (1), but
3rd party apps could directly interact with (2) if they so wished. If the
required (2) is not running, user can go to the configuration page and
change the audio backend.

Comments? :)

--
 http://www.eca.cx
 Audio software for Linux!



Re: [linux-audio-dev] BruteFIR + jack = crackling noise?

2003-02-26 Thread Patrick Shirkey
>yes, you can move audio over USB. the question is not whether you can,
>but whether you should, and my feeling is that professional or
>semi-professional users should avoid it completely, regardless of what
>Yamaha, Tascam, Edirol and others who want to provide *cheap*
>connectivity to home studio users say in the advertisements.
Actually they're not cheap at all. The main benefit of usb audio devices 
is the portability. However, now that firewire is becoming a much 
cheaper alternative, usb devices are probably going to become obsolete, 
like the Laserdisc has.

But it would be very nice if I could use my usb quattro to manipulate 
the sounds of my bandmates in realtime at low latency. I tried with ssm 
at 64 bytes and there was noticeable lag so we couldn't do anything live.

The best I can get out of jack is 1024 but 2048 is more reliable.

Having to use PCI devices is a PITA when you are trying to gig at 
different venues as they require a lot more space. There is also 
something elegant about being able to instantly connect your setup to a 
different computer by simply moving the USB cable.

However, it could be said that any sound device running on a PC is a 
waste of time for serious musos, as you cannot beat the sound quality 
of a top-of-the-line recording studio.

Each to their own but I would just like to be able to show people the 
true potential of Linux Audio and currently I cannot unless I get a PCI 
device. That, IMO, is what really sucks.

--
Patrick Shirkey - Boost Hardware Ltd.
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide

Being on stage with the band in front of crowds shouting, "Get off! No! 
We want normal music!", I think that was more like acting than anything 
I've ever done.

Goldie, 8 Nov, 2002
The Scotsman


Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Paul Winkler
On Wed, Feb 26, 2003 at 08:35:58PM +0100, Frank Barknecht wrote:
> So one day, I went to rehearse with them. It was very informal, like "let's
> jam a bit, have some fun." So I put on my saxophone, plugged the mike into
> the distortion, the distortion into the wahwah or I plugged them the other
> way around, don't remember.
> 
> Then I blew my heart out.
> 
> But there was no distortion, and there was no wahwah. In fact, there
> wasn't even a saxophone, there only was feedback, uncontrollable feedback.

Guitar effects on a microphone are very hard to manage without uncontrollable
feedback.  But I saw it done - on a saxophone yet! - by a trio called 
Spongehead (guitar/bass, sax, and drums). The sax player used wah, echo,
and an octave divider (which on a tenor or bari gave him some very 
nice deep notes, allowing the sax to function as the bass player on 
some songs!). I don't remember if he ever used distortion.
He played through a big guitar amp, i think.

Basically you want to:
1) use a very directional mic that mounts on the sax itself -
there are some made for this purpose.  I don't know what kind of
pickup the Spongehead guy used, but it seemed to have a cable
coming from near the mouthpiece???

2) use as little distortion as possible to get the effect you want.
considering how "fuzzy" a saxophone can already sound, i'm not
sure there's much point in using a fuzz on it!
But it would be fun to try other effects like chorus, phase, 
flange, tremolo ...

3) don't stand right in front of your amp :)

-- 

Paul Winkler
http://www.slinkp.com



Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Frank Barknecht
Hi,

Frank Neumann schrieb:
> _IF_ :-) we do some kind of live music, I hope we'll also manage to
> record it somehow, and encode/provide that later. I'm _very_ curious
> right now, though, what kind of music will evolve out of this :-).

I think it would be so much fun to record something. It has been years
since I played with other 'musicians' (yes, that's true). I stopped
playing in bands when I was a big follower of 'Grunge' music. The older
ones here will remember that term. I saw Nirvana before you saw Nirvana ;) 

The band that is.

I'll tell you why I stopped then.

As a saxophone player it was hard to get into a hardcore band. Then I
decided: "Frank, what you're missing is distortion and wahwah." I went into
a guitar shop where I bought distortion and wahwah, you know, those little
boxes you can step hard on. I had met two cool guys, a drummer and a
bassist, who were doing an instrumental Space-Rock-Dub-Speed-Metal kind of
music, like I was quite into in those days. A bit like Blind Idiot God, if
someone still knows this strange American trio.

Those friends of mine even had a room to rehearse. It had high humidity,
but otherwise was quite ok. But they didn't have a guitarist, which is why I
saw my big chance coming. Yes, I was gonna be famous. I had distortion and
wahwah. Okay, I had short hair, they had hair to the *ss, but we didn't
care. 

So one day, I went to rehearse with them. It was very informal, like "let's
jam a bit, have some fun." So I put on my saxophone, plugged the mike into
the distortion, the distortion into the wahwah or I plugged them the other
way around, don't remember.

Then I blew my heart out.

But there was no distortion, and there was no wahwah. In fact, there
wasn't even a saxophone, there only was feedback, uncontrollable feedback.
I couldn't really play anything: either I couldn't be heard at all or I
could only produce scccrrrttchhscreetch.

A year later, those guys made a record, but I don't remember its name. And
I started to get into computers. 

Regards,
-- 
Frank Barknecht 


RE: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Mark Knecht
;-) I wish you guys had carried this conversation on with a new title as I
think this has nothing at all to do with 1394. ;-)

Just so everyone else not conversing here is clear, the size of the packet
transmitted across the 1394 bus and the size of a Jack data block need have
nothing to do with each other. Using IEC 61883, or even a home-grown protocol
like the one I am/was working on, 1394 packets will be given timestamps for
presentation at the receiving end, and 1394 can transmit these packets in
full Jack data block sizes, or smaller ones, to make more efficient use of
1394 bus bandwidth.

Just being clear that 1394 and this conversation _can_ _be_ completely
orthogonal. (And should be IMO)

Thanks,
Mark

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Martijn
> Sipkema
> Sent: Wednesday, February 26, 2003 8:57 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis
>
>
> [...]
> > >I don't think an application should ask for a certain number of frames
> > >to wakeup. The driver should dictate when to wake up. This is the way
> > >my Audiowerk8 JACK driver works and it would get a lot more
> > >complicated if I had to add support for user-space wake up at
> > >arbitrary intervals.
> >
> > that's because you decided to write a simple driver that didn't fit
> > into the ALSA scheme.
>
> Because I think ALSA does too much in the kernel (and it is not
> well documented either).
>
> > the ALSA scheme deliberately offers such
> > capabilities, partly to handle devices like USB audio interfaces. if
> > you had written your driver as part of ALSA, it would not have to have
> > support for this - the "midlevel" ALSA code takes care of that stuff.
>
> One of the reasons I did not write an ALSA driver is because it supports
> all this.
>
> > something has to specify when to wake a task blocked on I/O to the
> > device. you can wake it up every interrupt, every N seconds, or every
> > N frames (with frames-per-interrupt level resolution). ALSA allows you
> > to do any of these. which one is most appropriate, and whether it
>
>
> This makes ALSA unnecessarily complicated and puts too much in the
> kernel IMHO.
>
> > should be set by a "control" application (common on windows/macos) or
> > by the application using the device driver is subject to reasonable
> > disagreement by sensible people.
>
> I think buffer size should be set by a "control" application, or just
> read from a file by the user-space driver, or possibly even set at
> module loading.
>
> > >> and the interrupts occur at 420,
> > >> 840 and 1260 frames, then we should be woken up on the third
> > >> interrupt, process 1024 frames of data, and go back to sleep.
> > >
> > >This will not perform well since the available processing time per
> > >sample will fluctuate.
> >
> > agreed. but by the same argument, if the variability in the block size
> > was too great, we would also see too much variation in
> > cycles-per-frame due to processing overhead per interrupt, which will
> > also kill us.
> >
> > so the question seems to be: how much variation is acceptable, and
> > what should be responsible for handling it? a device which interrupted
> > at random intervals would simply not work; one that interrupts at 420
> > frames +/- 5 frames might be OK. should the h/w driver hide the
> > variation, or should each application be willing to deal with it?
>
> A decent device will not have more than a few % variance, maybe up
> to 10-15% when using varispeed, but that's unavoidable (constant size
> callbacks will differ in available processing time then).
>
> > most applications would have no problem, but there is an interesting
> > class of applications for whom it poses a real problem that i think
> > requires a common solution. i'm not sure what that solution should be.
>
> Using asynchronous processing is a solution. An EDF scheduler would
> be nice for this.
>
> --ms




Re: [linux-audio-dev] question re: hammerfall cards

2003-02-26 Thread j.c.w.
Paul,

since you're on the line here (heh heh heh) i have a quick one to run 
past you:  my asound.state file looks just like what you described, but 
i can only seem to get s/pdif out.  it doesn't seem to like my dat at all.

now before i killed my win2k partition, i tested all of the hardware and 
it was all happy and worked just fine.  now it's not.  i am using an 
aeb8-i and an aeb8-o.  does that change anything?

thanks for any assistance you can provide!

j.c.w.

Paul Davis wrote:
a) It looks like the Hammerfall driver doesn't have a mixer interface, is 
this correct ?


the hardware has no mixer.


b) It looks like the onboard audio chip is controlled by an OSS driver, it 
doesn't show up in the alsa drivers either, which is fine by me, since I'm 
not going to use it.  Is there any problem with OSS modules being loaded 
at the same time as ALSA modules ?


shouldn't be a problem.


c) I bought the card so I could record optical S/PDIF.  The manual says I 
need to tell the card that I want the ADAT1 Input source to be Optical 
S/PDIF.
   [ ... ]


Which I read as the Input source being ADAT optical instead of S/PDIF.
How do I set it to S/PDIF ?


there are a couple of ways. alsactl is probably the most obvious:

  % alsactl -f foo store
  ... edit "foo" ...
  % alsactl -f foo restore
in the file "foo" generated by the first step, you will find lots of
stuff, including this:
control.5 {
comment.access 'read write'
comment.type ENUMERATED
comment.item.0 ADAT1
comment.item.1 Coaxial
comment.item.2 Internal
iface PCM
name 'IEC958 Input Connector'
value Coaxial
}
control.6 {
comment.access 'read write'
comment.type BOOLEAN
iface PCM
name 'IEC958 Output also on ADAT1'
value false
}


You will want yours to look like this:

control.5 {
comment.access 'read write'
comment.type ENUMERATED
comment.item.0 ADAT1
comment.item.1 Coaxial
comment.item.2 Internal
iface PCM
name 'IEC958 Input Connector'
value ADAT1
}
control.6 {
comment.access 'read write'
comment.type BOOLEAN
iface PCM
name 'IEC958 Output also on ADAT1'
value true
}
which will then do S/PDIF I/O over the ADAT1 connector. 

using amixer is quicker but terser and you have to know/understand a
bit more to use it confidently.

d) the card came with a sub-D-connector that connects to the card's 15 pin 
port, which branches off two RCA jacks.  I don't suppose these RCA jacks 
provide an analogue output by any chance on which I can monitor for sound 
?


no, they are for co-axial S/PDIF output. there is no analog I/O of any
kind on this card.
--p (hammerfall driver author and happy owner of 4 of them :)





Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Frank Neumann

Hi list,
[EMAIL PROTECTED] wrote:

[..]

> > > Maybe we should add a section in the wiki for these issues.
> >
> > Done.
> >
> > See http://footils.org/cgi-bin/cms/pydiddy/LinuxSoundNight
> 
> I can't come :(
> Who's going to record it and put up .oggs?

_IF_ :-) we do some kind of live music, I hope we'll also manage to
record it somehow, and encode/provide that later. I'm _very_ curious
right now, though, what kind of music will evolve out of this :-).

Frank


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
[...]
> >I don't think an application should ask for a certain number of frames
> >to wakeup. The driver should dictate when to wake up. This is the way
> >my Audiowerk8 JACK driver works and it would get a lot more
> >complicated if I had to add support for user-space wake up at
> >arbitrary intervals.
>
> that's because you decided to write a simple driver that didn't fit
> into the ALSA scheme.

Because I think ALSA does too much in the kernel (and it is not
well documented either).

> the ALSA scheme deliberately offers such
> capabilities, partly to handle devices like USB audio interfaces. if
> you had written your driver as part of ALSA, it would not have to have
> support for this - the "midlevel" ALSA code takes care of that stuff.

One of the reasons I did not write an ALSA driver is because it supports
all this.

> something has to specify when to wake a task blocked on I/O to the
> device. you can wake it up every interrupt, every N seconds, or every
> N frames (with frames-per-interrupt level resolution). ALSA allows you
> to do any of these. which one is most appropriate, and whether it


This makes ALSA unnecessarily complicated and puts too much in the
kernel IMHO.

> should be set by a "control" application (common on windows/macos) or
> by the application using the device driver is subject to reasonable
> disagreement by sensible people.

I think buffer size should be set by a "control" application, or just read
from a file by the user-space driver, or possibly even set at module
loading.

> >> and the interrupts occur at 420,
> >> 840 and 1260 frames, then we should be woken up on the third
> >> interrupt, process 1024 frames of data, and go back to sleep.
> >
> >This will not perform well since the available processing time per
> >sample will fluctuate.
>
> agreed. but by the same argument, if the variability in the block size
> was too great, we would also see too much variation in
> cycles-per-frame due to processing overhead per interrupt, which will
> also kill us.
>
> so the question seems to be: how much variation is acceptable, and
> what should be responsible for handling it? a device which interrupted
> at random intervals would simply not work; one that interrupts at 420
> frames +/- 5 frames might be OK. should the h/w driver hide the
> variation, or should each application be willing to deal with it?

A decent device will not have more than a few % variance, maybe up
to 10-15% when using varispeed, but that's unavoidable (constant size
callbacks will differ in available processing time then).

> most applications would have no problem, but there is an interesting
> class of applications for whom it poses a real problem that i think
> requires a common solution. i'm not sure what that solution should be.

Using asynchronous processing is a solution. An EDF scheduler would
be nice for this.

--ms







Re: [linux-audio-dev] question re: hammerfall cards

2003-02-26 Thread Paul Davis
>a) It looks like the Hammerfall driver doesn't have a mixer interface, is 
>this correct ?

the hardware has no mixer.

>b) It looks like the onboard audio chip is controlled by an OSS driver, it 
>doesn't show up in the alsa drivers either, which is fine by me, since I'm 
>not going to use it.  Is there any problem with OSS modules being loaded 
>at the same time as ALSA modules ?

shouldn't be a problem.

>c) I bought the card so I could record optical S/PDIF.  The manual says I 
>need to tell the card that I want the ADAT1 Input source to be Optical 
>S/PDIF.
   [ ... ]

>Which I read as the Input source being ADAT optical instead of S/PDIF.
>How do I set it to S/PDIF ?

there are a couple of ways. alsactl is probably the most obvious:

  % alsactl -f foo store
  ... edit "foo" ...
  % alsactl -f foo restore

in the file "foo" generated by the first step, you will find lots of
stuff, including this:

control.5 {
comment.access 'read write'
comment.type ENUMERATED
comment.item.0 ADAT1
comment.item.1 Coaxial
comment.item.2 Internal
iface PCM
name 'IEC958 Input Connector'
value Coaxial
}
control.6 {
comment.access 'read write'
comment.type BOOLEAN
iface PCM
name 'IEC958 Output also on ADAT1'
value false
}



You will want yours to look like this:

control.5 {
comment.access 'read write'
comment.type ENUMERATED
comment.item.0 ADAT1
comment.item.1 Coaxial
comment.item.2 Internal
iface PCM
name 'IEC958 Input Connector'
value ADAT1
}
control.6 {
comment.access 'read write'
comment.type BOOLEAN
iface PCM
name 'IEC958 Output also on ADAT1'
value true
}

which will then do S/PDIF I/O over the ADAT1 connector. 

using amixer is quicker but terser and you have to know/understand a
bit more to use it confidently.
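
The amixer equivalents of that state-file edit would look something like the following (the card index 0 is an assumption, and the control names are taken from the alsactl dump above; verify both with "cat /proc/asound/cards" and "amixer -c 0 controls"). Printed here as a dry run so nothing is changed by accident:

```shell
#!/bin/bash
# Dry run: print the amixer commands matching the alsactl edit above.
# Card index 0 is an assumption; check /proc/asound/cards first.
cmd1="amixer -c 0 cset iface=PCM,name='IEC958 Input Connector' ADAT1"
cmd2="amixer -c 0 cset iface=PCM,name='IEC958 Output also on ADAT1' on"
echo "$cmd1"   # run these by hand to actually change the controls
echo "$cmd2"
```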

>d) the card came with a sub-D-connector that connects to the card's 15 pin 
>port, which branches off two RCA jacks.  I don't suppose these RCA jacks 
>provide an analogue output by any chance on which I can monitor for sound 
>?

no, they are for co-axial S/PDIF output. there is no analog I/O of any
kind on this card.

--p (hammerfall driver author and happy owner of 4 of them :)


[linux-audio-dev] question re: hammerfall cards

2003-02-26 Thread Thomas Vander Stichele
Hey,

I just got a new box and two hammerfall 9636 cards for a project at work.

The motherboard is a KT4 Ultra that has some onboard audio chip I don't 
care about but can't turn off either in the bios.

I've put in the first hammerfall card, got alsa 0.9.0 rc7 from the 
freshrpms site (it's a redhat box), and started configuring things.

I've got the feeling I'm missing some things, so here are some simple 
questions.

a) It looks like the Hammerfall driver doesn't have a mixer interface, is 
this correct ?
Here's an ls in the relevant dir:

[EMAIL PROTECTED] asound]# ls card0/
id  pcm0c  pcm0p  rme9652

b) It looks like the onboard audio chip is controlled by an OSS driver, it 
doesn't show up in the alsa drivers either, which is fine by me, since I'm 
not going to use it.  Is there any problem with OSS modules being loaded 
at the same time as ALSA modules ?

c) I bought the card so I could record optical S/PDIF.  The manual says I 
need to tell the card that I want the ADAT1 Input source to be Optical 
S/PDIF.

Right now I get, in /proc/asound/card0/rme9652:
[EMAIL PROTECTED] card0]# cat rme9652
RME Digi9636 (Rev 1.5) (Card #1)
Buffers: capture cdc0 playback cda0
IRQ: 5 Registers bus: 0xde00 VM: 0xd088a000
Control register: 44008
 
Latency: 1024 samples (2 periods of 4096 bytes)
Hardware pointer (frames): 0
Passthru: no
Clock mode: autosync
Pref. sync source: ADAT1
 
ADAT1 Input source: ADAT1 optical
 
IEC958 input: Coaxial
IEC958 output: Coaxial only
IEC958 quality: Consumer
IEC958 emphasis: off
IEC958 Dolby: off
IEC958 sample rate: error flag set
 
ADAT Sample rate: 44100Hz
ADAT1: No Lock
ADAT2: No Lock
ADAT3: No Lock
 
Timecode signal: no
Punch Status:
 
 1: off  2: off  3: off  4: off  5: off  6: off  7: off  8: off
 9: off 10: off 11: off 12: off 13: off 14: off 15: off 16: off
17: off 18: off

Which I read as the Input source being ADAT optical instead of S/PDIF.
How do I set it to S/PDIF ?

d) the card came with a sub-D-connector that connects to the card's 15 pin 
port, which branches off two RCA jacks.  I don't suppose these RCA jacks 
provide an analogue output by any chance on which I can monitor for sound 
?

I'm sure I'll bug you with more questions later on, but these are the most 
pressing ones at this point.

Any help is greatly appreciated.

Thomas

-- 

The Dave/Dina Project : future TV today ! - http://davedina.apestaart.org/
<-*- thomas (dot) apestaart (dot) org -*->
Oh, baby, give me one more chance
<-*- thomas  (at) apestaart (dot) org -*->
URGent, the best radio on the Internet - 24/7 ! - http://urgent.rug.ac.be/




Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > Well, I'll shut up about it. I still think it is a mistake. I haven't
> > heard any convincing (to me) arguments why an application should not
> > handle variable-sized callbacks.
>
> Because it makes certain types of processing viable, which they are not
> really in variable block systems (e.g. LADSPA, VST). Have a look at a
> phase vocoder implementation in LADSPA (e.g.
> http://plugin.org.uk/ladspa-swh/pitch_scale_1193.xml) or VST and see how
> nasty and inefficient they are.

If I understand that code correctly, then you wait for 'FFT frame size'
samples to be available and then process that entire FFT frame. This will
introduce a variable amount of processing time per sample and will not
work for large FFT frames. Adding an extra FFT frame of delay and
processing asynchronously would solve this. I'm not saying this is easy,
but I don't think an algorithm like this should rely on a callback being
one (or more) FFT frames long.
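
A minimal sketch of that accumulate-then-process pattern, in shell arithmetic just for illustration (the 420-frame wakeups and 1024-frame FFT size are the assumed numbers from the earlier example, not measurements): samples are buffered until a full FFT block is available, so the expensive processing fires only on some wakeups, which is exactly the bursty per-sample load under discussion.

```shell
#!/bin/bash
# Simulate buffering fixed 420-frame device callbacks into 1024-frame
# FFT blocks. Both sizes are illustrative assumptions from the thread.
fft=1024
buffered=0
processed=0
for cb in 420 420 420 420 420; do          # five device wakeups
    buffered=$(( buffered + cb ))
    while [ "$buffered" -ge "$fft" ]; do   # enough for a full FFT block?
        buffered=$(( buffered - fft ))
        processed=$(( processed + 1 ))
        echo "wakeup of $cb frames: processed FFT block #$processed"
    done
done
echo "total: $processed blocks from $(( 5 * 420 )) input frames"
```

Only the third and fifth wakeups do any heavy work here; the others just copy samples, which is the uneven load both sides agree is the problem.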

> Conversely we haven't heard any convincing arguments about why we should
> have variable block sizes ;) I don't think that allowing (some?) USB
> devices to run with less latency counteracts the cost to block-processing
> algorithms.

I think that is at least as valid an argument as a possible increase in
performance for some algorithms on some hardware.

> I dont know what EASI xfer is.

EASI is a hardware abstraction framework from Emagic. It was meant to
be an open alternative to ASIO. It didn't make it, and now that Emagic
has been acquired by Apple I guess it is no longer supported, as I
cannot find anything about it on their site anymore.

http://www.sipkema-digital.com/~msipkema/EASI_99may25.pdf

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Paul Davis
>I agree. Events should be timestamped. But maybe that's not the only reason.
>Certainly EASI has variable size callbacks because this is what some
>hardware delivers.

sure, but then ALSA does the same. the question is whether to export
this detail up to applications or not.

>> i feel that it should be the job of ALSA to handle period sizes. if it
>> doesn't do a good job, it should be fixed. if we ask for a wakeup
>> every time 1024 frames are available,
>
>I don't think an application should ask to wake up after a certain
>number of frames. The driver should dictate when to wake up. This is the way
>my Audiowerk8 JACK driver works and it would get a lot more
>complicated if I had to add support for user-space wake up at
>arbitrary intervals.

that's because you decided to write a simple driver that didn't fit
into the ALSA scheme. the ALSA scheme deliberately offers such
capabilities, partly to handle devices like USB audio interfaces. if
you had written your driver as part of ALSA, it would not have to have
support for this - the "midlevel" ALSA code takes care of that stuff.

something has to specify when to wake a task blocked on I/O to the
device. you can wake it up every interrupt, every N seconds, or every
N frames (with frames-per-interrupt level resolution). ALSA allows you
to do any of these. which one is most appropriate, and whether it
should be set by a "control" application (common on windows/macos) or
by the application using the device driver is subject to reasonable
disagreement by sensible people.

>> and the interrupts occur at 420,
>> 840 and 1260 frames, then we should be woken up on the third
>> interrupt, process 1024 frames of data, and go back to sleep.
>
>This will not perform well since the available processing time per
>sample will fluctuate.

agreed. but by the same argument, if the variability in the block size
was too great, we would also see too much variation in
cycles-per-frame due to processing overhead per interrupt, which will
also kill us.

so the question seems to be: how much variation is acceptable, and
what should be responsible for handling it? a device which interrupted
at random intervals would simply not work; one that interrupts at 420
frames +/- 5 frames might be OK. should the h/w driver hide the
variation, or should each application be willing to deal with it?

most applications would have no problem, but there is an interesting
class of applications for whom it poses a real problem that i think
requires a common solution. i'm not sure what that solution should be.
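Paul's 420/840/1260 bookkeeping can be sketched in a few lines of Python (my illustration, not ALSA code; the 1024-frame period is the figure from the message): count frames as interrupts arrive, wake when a full period is available, and carry the remainder forward.

```python
def wakeups(interrupt_frames, period=1024):
    """Simulate period-based wakeups over interrupts that each deliver
    some number of new frames. Returns, per interrupt, how many whole
    periods the blocked task gets to process on that wakeup."""
    available, out = 0, []
    for n in interrupt_frames:
        available += n
        periods, available = divmod(available, period)
        out.append(periods)
    return out

# Interrupts delivering 420 new frames each (running totals 420, 840, 1260):
# no wakeup, no wakeup, then one 1024-frame period on the third interrupt,
# with 236 frames carried over.
print(wakeups([420, 420, 420]))
```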

--p




Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> >any convincing (to me) arguments why an application should not handle
> >variable sized callbacks. VST process() is variable size I think as are
> >EASI xfer callbacks, but clearly JACK needs constant callbacks and there
> >is nothing I can do about that...
>
> as i understand it, VST is only variable to allow for automation. And
> if you follow the discussion here about XAP and elsewhere about PTAF,
> you will see that many people consider this a mistake that comes from
> not using "events" in the correct way.

I agree. Events should be timestamped. But maybe that's not the only reason.
Certainly EASI has variable size callbacks because this is what some
hardware delivers.

> i feel that it should be the job of ALSA to handle period sizes. if it
> doesn't do a good job, it should be fixed. if we ask for a wakeup
> every time 1024 frames are available,

I don't think an application should ask to wake up after a certain
number of frames. The driver should dictate when to wake up. This is the way
my Audiowerk8 JACK driver works and it would get a lot more
complicated if I had to add support for user-space wake up at
arbitrary intervals.

> and the interrupts occur at 420,
> 840 and 1260 frames, then we should be woken up on the third
> interrupt, process 1024 frames of data, and go back to sleep.

This will not perform well since the available processing time per
sample will fluctuate.

> the h/w
> driver should handle this, not JACK. the latency behaviour will be
> just as requested by the user.

IMHO JACK should be able to handle drivers that generate interrupts
with a variable number of available frames by allowing non-const callbacks.
There is no way to allow only const callbacks without adding either large
latency or hurting performance for drivers that don't generate interrupts
on available frames. It seems some soundcards, USB and possibly
FireWire audio are all better served with non-const callbacks. And
I still have not seen any convincing arguments that non-const callbacks
are a problem for JACK client applications.

--ms







Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Paul Winkler
On Wed, Feb 26, 2003 at 04:09:24PM +0100, Frank Barknecht wrote:
> Hi,
> 
> Lukas Degener schrieb:
> > Maybe we should add a section in the wiki for these issues.
> 
> Done. 
> 
> See http://footils.org/cgi-bin/cms/pydiddy/LinuxSoundNight

I can't come :(
Who's going to record it and put up .oggs?

-- 

Paul Winkler
http://www.slinkp.com



RE: [linux-audio-dev] [ANNOUNCE] polarbear-0.5.1

2003-02-26 Thread Mark Knecht
Like they say in those TV commercials...

"Sweet!"

Thanks Maarten!

Cheers,
Mark

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Maarten de
> Boer
> Sent: Wednesday, February 26, 2003 5:46 AM
> To: [EMAIL PROTECTED];
> [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: [linux-audio-dev] [ANNOUNCE] polarbear-0.5.1
> 
> 
> Small bugfix: compile with both fltk 1.0 and fltk 1.1
> 
> 



Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Steve Harris
On Wed, Feb 26, 2003 at 09:48:35 -0500, Dave Phillips wrote:
> Paul Davis wrote:
>  
> > if anyone has a Hang drum available, i can tap out some pretty patterns :)
> 
> I've been listening to some water drumming by African rainforest
> dwellers. All we need is a sufficiently large tub, enough liquid, and
> we're all set... ;)

Now that sounds like fun :) If I have enough space I will pack some
acoustic noisemaking toys.

- Steve


Re: [linux-audio-dev] LAD meeting - LinuxSoundNight

2003-02-26 Thread Frank Barknecht
Hi,

Lukas Degener schrieb:
> Maybe we should add a section in the wiki for these issues.

Done. 

See http://footils.org/cgi-bin/cms/pydiddy/LinuxSoundNight

ciao
-- 
Frank Barknecht


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Steve Harris
On Wed, Feb 26, 2003 at 09:35:46 -0500, Paul Davis wrote:
> if anyone has a Hang drum available, i can tap out some pretty
> patterns :)
> 
> oh wait, i'm the engineer, right?

I play a mean powerdrill ;) but have no useful musical skills, maybe I
should do the engineering.

- Steve 


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Steve Harris
On Wed, Feb 26, 2003 at 01:51:41 +0100, Martijn Sipkema wrote:
> Well, I'll shut up about it. I still think it is a mistake. I haven't heard
> any
> convincing (to me) arguments why an application should not handle variable
> sized callbacks.

Because it makes certain types of processing viable which are not
really viable in variable block systems (e.g. LADSPA, VST). Have a look at
a phase vocoder implementation in LADSPA (e.g.
http://plugin.org.uk/ladspa-swh/pitch_scale_1193.xml) or VST and see how
nasty and inefficient they are.

Conversely we haven't heard any convincing arguments about why we should
have variable block sizes ;) I don't think that allowing (some?) USB
devices to run with less latency counteracts the cost to block processing
algorithms.

>  VST process() is variable size I think as are EASI xfer
> callbacks, but clearly JACK needs constant callbacks and there is nothing
> I can do about that...

I wouldn't hold up VST as a good example, it has many design flaws IMHO.
As Paul pointed out, VST and LADSPA require variable sized blocks because
they have no event system. I don't know what EASI xfer is. It's not JACK
that needs the fixed sizes, it's the applications; JACK itself couldn't
care less.

- Steve


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread delire
wow linux is really coming along.. ;p

julian oliver

On Wed, 26 Feb 2003 09:48:35 -0500
Dave Phillips <[EMAIL PROTECTED]> wrote:

//Paul Davis wrote:
// 
//> if anyone has a Hang drum available, i can tap out some pretty patterns :)
//
//I've been listening to some water drumming by African rainforest
//dwellers. All we need is a sufficiently large tub, enough liquid, and
//we're all set... ;)
//
//== dp
//
//


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Dave Phillips
Paul Davis wrote:
 
> if anyone has a Hang drum available, i can tap out some pretty patterns :)

I've been listening to some water drumming by African rainforest
dwellers. All we need is a sufficiently large tub, enough liquid, and
we're all set... ;)

== dp


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Paul Davis
if anyone has a Hang drum available, i can tap out some pretty
patterns :)

oh wait, i'm the engineer, right?

--p


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Dave Phillips
Greetings:

  If a guitar is available I might be able to play it... ;)

Best regards,

== Dave Phillips

The Book Of Linux Music & Sound at http://www.nostarch.com/lms.htm
The Linux Soundapps Site at http://linux-sound.org


Frank Barknecht wrote:
> 
> Hi,
> 
> Lukas Degener schrieb:
> > Tobias Ulbricht wrote:
> > >I'd love to have a jazzy jam-session or, er, well listen to you guys.
> > >I would bring in my piano.
> > >
> > Sounds like a nice idea. Maybe i can bring a bass guitar although it is
> > not my "weapon of choice".
> > This depends on space left in the transport vehicle of whoever gives me
> > a lift. Unfortunately i don't have a car myself.
> >
> > maybe anyone could contribute some kind of drums/percussion? do we have
> > PA? If not, and if someone is willing to pick me up with a van or other
> > big vehicle, i could as well bring mine.
> 
> I can play the saxophone, but I won't take it with me.  Maybe the ZKM has a
> midi wind controller? Alternatively I can play the midi faderbox ;) with Pd
> on a laptop. I'm a decent drummer with Pd, but only 4/4 beats please.
> 
> But I tend to do minimal technoid house normally (see the oggs on
> footils.org)
> 
> > Maybe we should add a section in the wiki for these issues.
> 
> Feel free to do so ;)
> 
> ciao
> --
> Frank Barknecht


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Frank Barknecht
Hi,

Lukas Degener schrieb:
> Tobias Ulbricht wrote:
> >I'd love to have a jazzy jam-session or, er, well listen to you guys.
> >I would bring in my piano.
> >
> Sounds like a nice idea. Maybe i can bring a bass guitar although it is 
> not my "weapon of choice".
> This depends on space left in the transport vehicle of whoever gives me
> a lift. Unfortunately i don't have a car myself.
> 
> maybe anyone could contribute some kind of drums/percussion? do we have 
> PA? If not, and if someone is willing to pick me up with a van or other 
> big vehicle, i could as well bring mine.

I can play the saxophone, but I won't take it with me.  Maybe the ZKM has a
midi wind controller? Alternatively I can play the midi faderbox ;) with Pd
on a laptop. I'm a decent drummer with Pd, but only 4/4 beats please. 

But I tend to do minimal technoid house normally (see the oggs on
footils.org)

> Maybe we should add a section in the wiki for these issues.

Feel free to do so ;)

ciao
-- 
Frank Barknecht


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Paul Davis
>Well, I'll shut up about it. I still think it is a mistake. I haven't heard

i don't want you to shut up about it. it's a very important design
decision.

>any
>convincing (to me) arguments why an application should not handle variable
>sized callbacks. VST process() is variable size I think as are EASI xfer
>callbacks, but clearly JACK needs constant callbacks and there is nothing
>I can do about that...

as i understand it, VST is only variable to allow for automation. And
if you follow the discussion here about XAP and elsewhere about PTAF,
you will see that many people consider this a mistake that comes from
not using "events" in the correct way.

i feel that it should be the job of ALSA to handle period sizes. if it
doesn't do a good job, it should be fixed. if we ask for a wakeup
every time 1024 frames are available, and the interrupts occur at 420,
840 and 1260 frames, then we should be woken up on the third
interrupt, process 1024 frames of data, and go back to sleep. the h/w
driver should handle this, not JACK. the latency behaviour will be
just as requested by the user. 

--p


[linux-audio-dev] [ANNOUNCE] polarbear-0.5.1

2003-02-26 Thread Maarten de Boer
Small bugfix: compile with both fltk 1.0 and fltk 1.1



Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Lukas Degener
Tobias Ulbricht wrote:

> Oh.
>
> I'd love to have a jazzy jam-session or, er, well listen to you guys.
> I would bring in my piano.

Sounds like a nice idea. Maybe i can bring a bass guitar although it is 
not my "weapon of choice".
This depends on space left in the transport vehicle of whoever gives me
a lift. Unfortunately i don't have a car myself.

maybe anyone could contribute some kind of drums/percussion? do we have 
PA? If not, and if someone is willing to pick me up with a van or other 
big vehicle, i could as well bring mine.

Maybe we should add a section in the wiki for these issues.

Lukas



Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
>  > > According to the mLAN spec you need a buffer of around ~250us
>  > > (depending on format) to collate the packets.
>  >
>  > Still there is no guarantee that 10 packets always have exactly the
>  > same number of samples. You say the mLAN spec says you need a buffer of
>  > around ~250us. Note that it doesn't say a buffer of a number of frames.
>  > The bottom line is these packets are sent at regular time intervals,
>  > not at a fixed number of frames and thus JACK should support this by
>  > allowing non-const (frames) callbacks IMHO.
>
> As was previously pointed out several times, this is not JACK's
> job.

Well, I think it is. And I've mentioned it a couple of times also.

> The driver should assemble the data into fixed size blocks.

Why?

> This will not introduce any significant latency, unless the periods
> are nearly the same, in which case the latency could double.

This will always introduce a fairly large latency unless you are
willing to accept processing time per sample that varies, and thus be
able to do significantly less processing.

> The model you propose may be fine when you have *one* HW interface and

Which is the common case. When using more than one interface
then there needs to be buffering. When syncing audio to video there
needs to be buffering also. This should be done in the application
such as in OpenML ( http://www.khronos.org ).

> *one* application, but it does not scale without introducing  a lot
> of complexity.

It has nothing to do with one or more applications. Non-const size
(frames) callbacks work just as well with more applications (using JACK).

I've made my point, several times. Nobody thinks I'm right, so I'll
shut up about it. I still think it is a mistake...

--ms





Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> On Wed, Feb 26, 2003 at 12:38:38 +0100, Martijn Sipkema wrote:
> > Still there is no guarantee that 10 packets always have exactly the same
> > number of samples. You say the mLAN spec says you need a buffer of
> > around ~250us. Note that it doesn't say a buffer of a number of frames.
> > The bottom line is these packets are sent at regular time intervals, not
> > at a fixed number of frames and thus JACK should support this by
> > allowing non-const (frames) callbacks IMHO.
>
> Why? Surely it's much easier to wait until you have n samples and then send
> them round. The extra 250us of latency is hardly punishing.
>
> You must do that where you have a soundcard<->mLAN bridge in any case, in
> order to sync the graphs.
>
> IMHO if jack makes things hard for app developers by forcing them to deal
> with odd sized data blocks then it's not doing its job. As we have
> discussed on the jack list there are a number of situations where you can't
> reliably or efficiently handle variable block sizes.

Well, I'll shut up about it. I still think it is a mistake. I haven't heard
any
convincing (to me) arguments why an application should not handle variable
sized callbacks. VST process() is variable size I think as are EASI xfer
callbacks, but clearly JACK needs constant callbacks and there is nothing
I can do about that...

--ms






Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Fons Adriaensen
Martijn Sipkema writes:
 > > 
 > > According to the mLAN spec you need a buffer of around ~250us (depending
 > > on format) to collate the packets.
 > 
 > Still there is no guarantee that 10 packets always have exactly the same
 > number of samples. You say the mLAN spec says you need a buffer of
 > around ~250us. Note that it doesn't say a buffer of a number of frames.
 > The bottom line is these packets are sent at regular time intervals, not
 > at a fixed number of frames and thus JACK should support this by
 > allowing non-const (frames) callbacks IMHO.

As was previously pointed out several times, this is not JACK's
job. The driver should assemble the data into fixed size blocks.
This will not introduce any significant latency, unless the periods
are nearly the same, in which case the latency could double.

The model you propose may be fine when you have *one* HW interface and
*one* application, but it does not scale without introducing  a lot
of complexity.

-- 
FA



Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Steve Harris
On Wed, Feb 26, 2003 at 12:38:38 +0100, Martijn Sipkema wrote:
> Still there is no guarantee that 10 packets always have exactly the same
> number of samples. You say the mLAN spec says you need a buffer of
> around ~250us. Note that it doesn't say a buffer of a number of frames.
> The bottom line is these packets are sent at regular time intervals, not
> at a fixed number of frames and thus JACK should support this by
> allowing non-const (frames) callbacks IMHO.

Why? Surely it's much easier to wait until you have n samples and then send
them round. The extra 250us of latency is hardly punishing.

You must do that where you have a soundcard<->mLAN bridge in any case, in
order to sync the graphs.

IMHO if jack makes things hard for app developers by forcing them to deal
with odd sized data blocks then it's not doing its job. As we have
discussed on the jack list there are a number of situations where you can't
reliably or efficiently handle variable block sizes.
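For scale (my arithmetic, not a figure from the thread): a 250 us collation buffer is a small fraction of a typical JACK period.

```python
def frames(us, rate=48000):
    """Audio frames covered by `us` microseconds at `rate` frames/s."""
    return us * rate / 1_000_000

print(frames(250))         # 250 us at 48 kHz is 12 frames
print(frames(250, 44100))  # about 11 frames at 44.1 kHz
# For comparison, a modest 256-frame period at 48 kHz lasts ~5333 us.
```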

- Steve


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
[...]
> The bottom level packets are sent at fixed time intervals (obviously,
> corresponding to the frame clock of the bus), but these packets are tiny
> and you get millions of them per second. A useful packet of audio data
> will be made up of a bunch of these.
> 
> According to the mLAN spec you need a buffer of around ~250us (depending
> on format) to collate the packets.

Still there is no guarantee that 10 packets always have exactly the same
number of samples. You say the mLAN spec says you need a buffer of
around ~250us. Note that it doesn't say a buffer of a number of frames.
The bottom line is these packets are sent at regular time intervals, not
at a fixed number of frames and thus JACK should support this by
allowing non-const (frames) callbacks IMHO.

--ms








[linux-audio-dev] [ANNOUNCE] polarbear-0.5.0

2003-02-26 Thread Maarten de Boer
Hello,

I just released polarbear. I had the code lying around, and just merged
it with the jack/alsa i/o code of tapiir. Note that this is the first
public release. I did not test it thoroughly, and I am not sure if the
GUI is obvious enough (it should be if you are familiar with complex
filters), so any input is welcome.

polarbear is a tool for designing filters in the complex domain. Filters
can be designed by placing any number of poles and zeros on the z plane.
From this the filter coefficients are calculated, and the filter can be
applied in real time on an audio stream.
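The pole/zero-to-coefficients step can be sketched as follows; this is my illustration of the standard technique (expand the root products into direct-form coefficients, then run the difference equation), not polarbear's actual code:

```python
import numpy as np

def pz_to_coeffs(zeros, poles):
    """Expand z-plane zeros/poles into direct-form b (feedforward) and
    a (feedback) coefficients. For a real filter, complex roots must
    come in conjugate pairs."""
    b = np.atleast_1d(np.real(np.poly(zeros)))
    a = np.atleast_1d(np.real(np.poly(poles)))
    return b, a

def filt(b, a, x):
    """Direct-form difference equation:
    a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# A conjugate pole pair just inside the unit circle gives a resonator:
poles = [0.95 * np.exp(1j * 0.2 * np.pi), 0.95 * np.exp(-1j * 0.2 * np.pi)]
b, a = pz_to_coeffs([], poles)
y = filt(b, a, np.r_[1.0, np.zeros(63)])  # ringing impulse response
```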

polarbear can be found at
http://www.iua.upf.es/~mdeboer/projects/polarbear/

For the (far) future, the idea is that polarbear and tapiir can work
together, in the sense that the filter coefs calculated by polarbear can
be used to control the gains of tapiir. maybe polarbear and tapiir might
even merge. that would be some animal :-)

Maarten


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Steve Harris
On Wed, Feb 26, 2003 at 11:17:31 +0100, Martijn Sipkema wrote:
> > > I'm not sure, but it seems the audio transport over FireWire does not
> > > deliver a constant number of frames per packet. Does this mean that
> > > JACK cannot support FireWire audio without extra buffering?
> > 
> > ISO packets are a fixed size, so there will be a constant number of
> > frames per packet.
> 
> No, I don't think so. The packets are a fixed size and they are sent at
> a fixed interval which means the number of samples per packet will
> differ by one. That's what it says in the paper. And that is what JACK
> won't support properly because it is considered a 'broken' design.

The bottom level packets are sent at fixed time intervals (obviously,
corresponding to the frame clock of the bus), but these packets are tiny
and you get millions of them per second. A useful packet of audio data
will be made up of a bunch of these.

According to the mLAN spec you need a buffer of around ~250us (depending
on format) to collate the packets.

- Steve


Re: [linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Tobias Ulbricht

Oh. 

I'd love to have a jazzy jam-session or, er, well listen to you guys.
I would bring in my piano.

frank, Matthias, joern: I'll mail you later about how we *might* (slight
chance) get a streaming server. And - sorry - I'm so late.

greetings, tobias.

On Wed, Feb 26, 2003 at 10:44:37AM +0100, Frank Barknecht wrote:
> Hallo,
> 
> with the LAD meeting getting closer, I'm getting a bit curious about,
> what the plans are for the open "Linux Sound Night" on 15.3.? Will we
> hear some of you guys perform and Paul records it?
> 
> ciao
> -- 
>  Frank Barknecht   _ __footils.org__


[linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Fons Adriaensen
Frank Barknecht writes:
 > Hallo,
 > 
 > with the LAD meeting getting closer, I'm getting a bit curious about,
 > what the plans are for the open "Linux Sound Night" on 15.3.? Will we
 > hear some of you guys perform and Paul records it?

And will there be a Ladies' Programme, as at the AES conventions ? ;-)

-- 
FA


Re: [linux-audio-dev] Linux Alsa Audio over 1394 - a Thesis

2003-02-26 Thread Martijn Sipkema
> > I'm not sure, but it seems the audio transport over FireWire does not
> > deliver a constant number of frames per packet. Does this mean that
> > JACK cannot support FireWire audio without extra buffering?
> 
> ISO packets are a fixed size, so there will be a constant number of
> frames per packet.

No, I don't think so. The packets are a fixed size and they are sent at
a fixed interval, which means the number of samples per packet will
differ by one. That's what it says in the paper. And that is what JACK
won't support properly because it is considered a 'broken' design.
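A small worked example of the effect (my numbers, assuming the nominal 8000 isochronous cycles per second of IEEE 1394): 44.1 kHz does not divide evenly over the cycle rate, so packet sizes must alternate.

```python
def samples_per_packet(rate, cycles=8000, n=8):
    """Samples carried by each of the first n isochronous packets when
    spreading `rate` samples/s evenly over `cycles` packets/s."""
    out, sent = [], 0
    for i in range(1, n + 1):
        due = i * rate // cycles  # samples owed by the end of cycle i
        out.append(due - sent)
        sent = due
    return out

print(samples_per_packet(48000))  # 48000/8000 = 6 exactly: constant packets
print(samples_per_packet(44100))  # 44100/8000 = 5.5125: sizes differ by one
```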

--ms





[linux-audio-dev] LAD meeting - Linux Sound Night

2003-02-26 Thread Frank Barknecht
Hallo,

with the LAD meeting getting closer, I'm getting a bit curious about,
what the plans are for the open "Linux Sound Night" on 15.3.? Will we
hear some of you guys perform and Paul records it?

ciao
-- 
 Frank Barknecht   _ __footils.org__