Re: [LAD] Yoshimi Midi Learn: 1st testing release

2011-06-26 Thread Renato
On Sun, 26 Jun 2011 02:17:12 +0200
louis cherel <cherel.lo...@gmail.com> wrote:

 Hello everyone,
 
 Everyone knows Yoshimi, the fork of ZynAddSubFx.
 One thing was missing for Yoshimi to be perfect: being almost fully
 controllable by MIDI (no OSC, sorry).
 ZynAddSubFx offered control of a few parameters through complicated
 NRPN messages, and Yoshimi recently gained some such features too (in
 the test versions).
 But now I'm proud to announce the work of licnep (not me, I'm just a
 bug reporter), who wrote the MIDI-learn function for Yoshimi.
 It's not yet stable or complete, since it's recent, but here are the
 present features:
 
 * Control system effects and part insert effects
 * Master/part volume, pan, and system effect sends
 * Most ADsynth parameters
 * Add/remove controllers
 * Detect the MIDI channel and controller number
 * Reset a knob (its position)
 I think it's a very useful feature that could help many
 yoshimi/zyn users.
 

Hello, this sounds good :) I tried compiling from git some weeks ago but
got errors and didn't have time to report them; I'll try again later today.
May I ask:
1) Do the controls affect the sound in real time, or only on the next
note? I noticed while playing with Yoshimi that if, for example, you
tweak the filter cutoff frequency while a note is playing, the current
sound is unaffected; only the next note picks up the change. This
would prevent making cool effects with automation...
2) Will this be merged into the main Yoshimi project?

cheers
renato
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Determining Phase

2011-06-26 Thread pshirkey
 On Sun, Jun 26, 2011 at 12:22:58AM +0200, Jörn Nettingsmeier wrote:

 it seems you have just proven that the maximum duration of any pure tone
 is 1/f. that is quite extraordinary.

 0.5 / f actually, which is extraextraordinary.

 Both Joern and I have invited the original poster to
 explain more clearly what he wants to achieve. Because
 as it stands his question doesn't make sense and can't
 be answered.


I think you understood what I am looking for below.

 If the purpose of this list is to let people help each
 other with audio software related problems, then the
 other replies so far are, to put it gently, 'unfortunate'.

 To put some things right:

 - Absolute phase response *does* matter. It's quite
 easy to create e.g. a filter that has a perfectly flat
 amplitude response, modifies only the phase, and sounds
 like e.g. a resonance or even a reverb. You won't hear
 the relatively harmless phase response of a typical amp,
 but that doesn't mean you can't hear phase errors in
 general.


Does anyone have a code example for this type of filter?



 - Phase is related to delay but it is not the same thing.
 Group delay is again something different. Mixing up all
 these is not going to help anyone understand things any
 better.


 Ciao,

 --
 FA




Re: [LAD] Determining Phase

2011-06-26 Thread pshirkey
  From the long list of answers, I see lots of speculation about Mr.
 Shirkey's question. Some time back he approached me on the work that had
 been done a very long time ago on phase-modulation to achieve panning.
 He never replied to my subsequent information or queries.

Sorry about that. Been a little hectic the past few months.

  I suspect that
 this question is really all about the phase of the signal being
 transported through any given audio driver, ALSA, JACK or whatever, so
 the analysis is somewhat germane.

 I think that a phase demod analyzer might be his attempt to solve his
 real problem. I speculate that Patrick didn't actually ask his real
 question.


I'm not sure that I can provide all the information at this stage
anyway... :-)

 So, perhaps any Haas-effect plugin would satisfy Patrick's needs.

 Other than that, I'd make a really cool spectrum analyzer that ran the
 Fourier analysis on two channels, correlated their phases then made a
 +/- line vs. frequency for all to see so that the phase of the
 components of the spectrum could be watched for phase relationships.

 Suggestions?


That would be a useful tool.


--
Patrick Shirkey
Boost Hardware Ltd


 On 06/25/2011 09:23 AM, pshir...@boosthardware.com wrote:
 Hi,

 Can anyone point me to a simple code example for how to determine the
 phase at a specific time in a waveform?

 ex. if I have a sample that is 5 seconds long and want to know the phase
 at 2.5 seconds

 I'm open to code in any language or a scripted example if such a tool
 exists. If there is a UI which has that feature I am also interested.


 Cheers

 --
 Patrick Shirkey
 Boost Hardware Ltd


Re: [LAD] Determining Phase

2011-06-26 Thread pshirkey
 On Sat, 25 Jun 2011 16:23:29 +0200 (CEST), pshir...@boosthardware.com
 wrote:
 Hi,

 Can anyone point me to a simple code example for how to determine the
 phase at a specific time in a waveform?

 ex. if I have a sample that is 5 seconds long and want to know the
 phase
 at 2.5 seconds

 I'm open to code in any language or a scripted example if such a tool
 exists. If there is a UI which has that feature I am also
 interested.


 in general (theory, i mean), phase equals delay; it can be computed
 deterministically iff the signal is a pure tone (sinusoidal). otherwise
 you need a reference signal and refer to its fundamental
 frequency (1st-order harmonic), or, in the field, you can pick one
 conventional signal tone (i've been told that 1 khz is a usual choice :)
 and compare to that.

 if you are comparing two known signals (often input vs. output of some
 lti transformation, e.g. a filter), then the computation to do is
 correlation. well, you should use fft for that, but i'm sure you know
 that :)


You wouldn't happen to have an example of this code?

Perhaps there is something in qtractor along these lines already?

I have looked at integrating something with librubberband, but first things
first, right?



--
Patrick Shirkey
Boost Hardware Ltd


Re: [LAD] Determining Phase

2011-06-26 Thread Jörn Nettingsmeier

On 06/26/2011 04:17 AM, Fons Adriaensen wrote:

On Sun, Jun 26, 2011 at 12:22:58AM +0200, Jörn Nettingsmeier wrote:



- Phase is related to delay but it is not the same thing.
Group delay is again something different. Mixing up all
these is not going to help anyone understand things any
better.


well, i was trying to connect all those buzzwords... but you are right, 
it should be done more carefully. let me try again.


*delay* makes the *phase* response curve steeper. it doesn't introduce 
any non-linearities in the phase response.


amplitude response over frequency can be interpreted as-is, but phase 
response needs to be looked at with your first-derivative glasses on: a 
system comprising a perfect speaker and your perfect ear only has zero 
phase when you stick your head into the speaker.

as soon as you move away, the phase drops, the steeper the further you go.
moral: constant amplitude response is what we want. constant phase
response almost never happens, because of delays that creep in. instead, 
we want _linear_ phase response.
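
The point that a pure delay only tilts the phase curve can be checked numerically (a sketch; the 5 ms delay is an arbitrary example value):

```python
import numpy as np

T = 0.005                          # assumed delay of 5 ms
f = np.linspace(1.0, 20000.0, 1024)
w = 2 * np.pi * f                  # angular frequency axis

# a pure delay of T seconds has phase response -w*T: a straight line
# through the origin when plotted against angular frequency (linear phase)
phase = -w * T
slope = np.gradient(phase, w)      # constant slope, equal to -T everywhere
print(slope[0], slope[-1])         # both ~ -0.005 (= -T)
```

The slope of that line is exactly the (negated) delay, which is why a steeper phase curve means more delay but no added non-linearity.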


*group* *delay* is a *time* *delay* for a specific frequency. if you 
have a linear-phase system, the group delay is a _constant_: high 
frequencies may be phase-shifted by more cycles, but the time it takes 
them to arrive is the same as for low frequencies.
i think you get the group delay when you differentiate the phase 
response wrt frequency (but don't believe me when i talk calculus...)


it's important not to confuse phase delay with group delay: phase delay 
talks about a number of cycles of delay, whereas group delay is about 
time. when you want to assess how well a system responds to transients, 
you don't care how often the high frequencies have been wiggling around 
on the way to your ear drum - you want them to arrive at the same time 
as the low frequencies. hence, you care about the group delay, not the 
phase delay.




Re: [LAD] Determining Phase

2011-06-26 Thread Jörn Nettingsmeier

On 06/26/2011 10:50 AM, pshir...@boosthardware.com wrote:

So, perhaps any Haas-effect plugin would satisfy Patrick's needs.


so this is about panning? that's actually pretty easily done with just a 
time delay in addition to level difference. unless you want to spread 
complex sounds out in space, in which case you replace the delay with a 
frequency-dependent allpass.


careful with the term haas effect. all the poor guy did was check the 
time window in which an echo would not be perceived as a separate event. 
basically, the haas effect says you can hide a P.A. delay system
that's 10dB louder than the main P.A. if it hits between 10 and 30ms 
later, and still maintain good on-stage localisation, with the delay 
being practically inaudible.

sometimes also called the law of the first wavefront.

this has nothing to do with stereo localisation. time delays relevant 
for left-right localisation are in the 0 - 1ms range, some authors give 
a bit less, others a bit more.



Other than that, I'd make a really cool spectrum analyzer that ran the
Fourier analysis on two channels, correlated their phases then made a
+/- line vs. frequency for all to see so that the phase of the
components of the spectrum could be watched for phase relationships.


that is precisely what dual-fft tools for p.a. system calibration do, 
and they're extremely useful.
they allow you to constantly monitor the system with _program_ material, 
without having to use MLS noise or any other specific measurement signal.
one channel is used for the direct signal from the mixer, the other is 
fed by a measurement microphone with delay compensation. you get instant 
phase and amplitude response. good systems also give you an additional 
confidence curve that tells you how much you can trust which parts of 
the spectrum. for instance, if your program material is a boy soprano, 
the confidence of the measurement in the low end is practically zero.
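
The dual-FFT measurement described above can be sketched as an averaged transfer-function estimate (a hedged sketch: the window, block size, and the toy 6 dB "system" are arbitrary; a real tool would add delay compensation and a coherence/confidence curve):

```python
import numpy as np

def dual_fft_response(ref, mic, nfft=1024):
    """Averaged transfer-function estimate H(f) = <Mic*conj(Ref)> / <|Ref|^2>
    over overlapping windowed blocks, as dual-FFT analyzers do."""
    win = np.hanning(nfft)
    num = np.zeros(nfft // 2 + 1, dtype=complex)
    den = np.zeros(nfft // 2 + 1)
    for i in range(0, len(ref) - nfft, nfft // 2):
        R = np.fft.rfft(win * ref[i:i + nfft])
        M = np.fft.rfft(win * mic[i:i + nfft])
        num += M * np.conj(R)
        den += np.abs(R) ** 2
    H = num / np.maximum(den, 1e-20)
    return np.abs(H), np.angle(H)   # amplitude and phase response

rng = np.random.default_rng(1)
ref = rng.standard_normal(1 << 16)      # 'program material'
mic = 0.5 * ref                         # toy system: pure 6 dB attenuation
mag, phase = dual_fft_response(ref, mic)
print(mag[100], phase[100])             # -> 0.5, 0.0
```

Averaging over many blocks is what lets this work with arbitrary program material instead of a dedicated measurement signal.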



Re: [LAD] Determining Phase

2011-06-26 Thread Erik de Castro Lopo
pshir...@boosthardware.com wrote:

  - Absolute phase response *does* matter. It's quite
  easy to create e.g. a filter that has a perfectly flat
  amplitude response, modifies only the phase, and sounds
  as a e.g. a resonance or even a reverb. You won't hear
  the relatively harmless phase response of a tyipcal amp,
  but that doesn't mean you can't hear phase errors in
  general.
 
 
 Does anyone have a code example for this type of filter?

An IIR allpass filter where the unit delay is replaced
with a much longer delay, i.e. tens, hundreds or even thousands
of samples.
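
A minimal sketch of such a filter (assuming a Schroeder-style first-order allpass with the unit delay stretched to D samples; the delay length and coefficient are arbitrary illustrations):

```python
import numpy as np

def allpass_long_delay(x, D=200, g=0.5):
    """Allpass with the unit delay stretched to D samples:
    y[n] = -g*x[n] + x[n-D] + g*y[n-D].
    The magnitude response is flat; only the phase is modified."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - D] if n >= D else 0.0
        yd = y[n - D] if n >= D else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

# Impulse response: the magnitude spectrum stays at (almost exactly) 1,
# yet the energy is smeared out in time, which is what you hear.
h = allpass_long_delay(np.r_[1.0, np.zeros(4095)])
mag = np.abs(np.fft.rfft(h))
print(mag.min(), mag.max())   # both very close to 1.0
```

With D in the hundreds and a few such sections in series, this starts to sound like a resonance or small reverb even though the amplitude response is perfectly flat.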

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/


Re: [LAD] Determining Phase

2011-06-26 Thread Fons Adriaensen
On Sun, Jun 26, 2011 at 11:43:46AM +0200, Jörn Nettingsmeier wrote:
 On 06/26/2011 04:17 AM, Fons Adriaensen wrote:
 On Sun, Jun 26, 2011 at 12:22:58AM +0200, Jörn Nettingsmeier wrote:

 - Phase is related to delay but it is not the same thing.
 Group delay is again something different. Mixing up all
 these is not going to help anyone understand things any
 better.

 well, i was trying to connect all those buzzwords... but you are right,  
 it should be done more carefully. let me try again.

 *delay* makes the *phase* response curve steeper. it doesn't introduce  
 any non-linearities in the phase response.

 amplitude response over frequency can be interpreted as-is, but phase  
 response needs to be looked at with your first-derivative glasses on: a  
 system comprising a perfect speaker and your perfect ear only has zero  
 phase when you stick your head into the speaker.
 as soon as you move away, the phase drops, the steeper the further you go.
 moral: constant amplitude response is what we want. constant phase
 response almost never happens, because of delays that creep in. instead,  
 we want _linear_ phase response.

Right. And 'linear' here means 'without a constant term' - we don't
want our system to be a Hilbert transform for example. 

 *group* *delay* is a *time* *delay* for a specific frequency. if you  
 have a linear-phase system, the group delay is a _constant_: high  
 frequencies may be phase-shifted by more cycles, but the time it takes  
 them to arrive is the same as for low frequencies.
 i think you get the group delay when you differentiate the phase  
 response wrt frequency (but don't believe me when i talk calculus...)

Correct. It is the derivative of the phase response w.r.t. angular
frequency (minus that value, if your convention is that a delay
corresponds to positive time).

Group delay actually tells us how the 'envelope' of a signal is
modified by nonlinear phase response, something we can easily hear
on any 'percussive' signals.

Let w = 2 * pi * f

Suppose you have some filter that has a non-linear phase 
response, e.g.

P(w) = a * w^2   (radians)

The corresponding phase delay is

D(w) = P(w) / w = a * w  (seconds)
  
The group delay is

G(w) = dP(w)/dw = 2 * a * w (seconds) 

Now if you have a relatively narrowband signal centered at
some frequency w1, e.g. a 'ping' with a gentle attack, then
it would appear to be delayed by  2 * a * w1, not  a * w1,
because what we hear as delay is the delay on the envelope,
not on the 'cycles'.
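
The factor of two between the phase delay and the group delay of that quadratic phase response can be checked numerically (a sketch; the curvature constant a and the frequency range are arbitrary):

```python
import numpy as np

a = 1e-9                                      # arbitrary curvature constant
w = np.linspace(0, 2 * np.pi * 20000, 4096)   # angular frequency axis

P = a * w**2                      # non-linear phase response (radians)
phase_delay = np.empty_like(w)
phase_delay[0] = 0.0
phase_delay[1:] = P[1:] / w[1:]   # D(w) = P(w)/w  = a*w    (seconds)
group_delay = np.gradient(P, w)   # G(w) = dP/dw   = 2*a*w  (seconds)

# the envelope of a narrowband 'ping' centered at w1 is delayed by the
# group delay, which here is twice the phase delay at every frequency:
i = 2048
print(group_delay[i] / phase_delay[i])   # ~ 2.0
```

The central difference used by `np.gradient` is exact for a quadratic, so the ratio comes out at 2 up to rounding.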

Ciao,

-- 
FA




Re: [LAD] Determining Phase

2011-06-26 Thread Fons Adriaensen
On Sun, Jun 26, 2011 at 10:44:33AM +0200, pshir...@boosthardware.com wrote:

 I think you understood what I am looking for below.
 
Unfortunately, I can only guess what the context of
your question was, and I'd probably be wrong :-(

 Does anyone have a code example for this type of filter?

The example Erik gave is a perfect one.
 
As said, there's no such thing as 'the phase of a waveform'.
For a general waveform, and no matter what interpretation
of 'phase' you'd choose, it would be a function of frequency.

If the waveform is cyclic (e.g. a triangle wave) you could
define some 'phase' value on it, using either the fundamental
frequency or some arbitrary point in the waveform as reference.

But even for a simple sine wave the term 'phase' can mean
different things. Take

  s(t) = sin (w * t + phi), with w = 2 * pi * f

All the following are correct:

(1) If you take 'phase' as a property of s(t) as a
whole, you could say its phase is phi.

(2) If you look at the absolute phase at time t, it would
be w * t + phi.

(3) If you use t = 0 as a phase reference point, the
phase at time t would be w * t.

It all depends on the context which one you use.

To add some more ambiguity, compare

  s1(t) = sin (w * t + phi) 
  s2(t) = cos (w * t + phi)

In many cases it doesn't matter which one you use when 
defining or explaining something. If you have a maths
background you'd prefer cos() for real signals, since
that's the real part of the complex single frequency
signal exp(j * (w * t + phi)). If you are defining e.g.
an oscillator opcode in a synthesis system you'd prefer
sin(), as this starts at zero for phi = 0. In both
cases you could legitimately refer to 'phi' as 'the
phase'. But the two waveforms are 90 degrees out of
phase w.r.t. each other...
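
The ambiguity is easy to verify numerically (a sketch; the frequency and offset are arbitrary example values):

```python
import math

f, phi = 440.0, math.pi / 3      # arbitrary example frequency and 'phase'
w = 2 * math.pi * f

def s1(t): return math.sin(w * t + phi)
def s2(t): return math.cos(w * t + phi)

# both could legitimately be said to have 'phase' phi, yet they are a
# quarter cycle apart: cos(x) == sin(x + pi/2)
t = 0.0123
assert abs(s2(t) - math.sin(w * t + phi + math.pi / 2)) < 1e-9
print(s1(0.0), s2(0.0))   # sin(phi) vs cos(phi): different start values
```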

Ciao,

-- 
FA



Re: [LAD] Determining Phase

2011-06-26 Thread Gordon JC Pearce
On Sat, 25 Jun 2011 16:23:29 +0200 (CEST)
pshir...@boosthardware.com wrote:

 Hi,
 
 Can anyone point me to a simple code example for how to determine the
 phase at a specific time in a waveform?
 
 ex. if I have a sample that is 5 seconds long and want to know the phase
 at 2.5 seconds
 
 I'm open to code in any language or a scripted example if such a tool
 exists. If there is a UI which has that feature I am also interested.
 

There isn't really a way to do this.  How would you tell the difference
between 0.5*sin(pi/2) and 1.0*sin(pi/6)?  Try it and see: what are the
answers?

If you want to determine phase you need to know the amplitude.  The only
sane way to do both is to use a complex sample with an in-phase and a
quadrature component.
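
The in-phase/quadrature idea can be sketched with the analytic signal, which is one way to give meaning to "the phase at 2.5 s" for a signal that is close to sinusoidal (a hedged sketch; the 100 Hz test tone and sample rate are made up, and this breaks down for broadband material):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (the same trick scipy.signal.hilbert uses)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# a 5-second, 100 Hz sine: instantaneous amplitude and phase at t = 2.5 s
sr = 8000
t = np.arange(5 * sr) / sr
x = 0.5 * np.sin(2 * np.pi * 100 * t)
z = analytic_signal(x)               # z = I + jQ
n = int(2.5 * sr)
print(np.abs(z[n]), np.angle(z[n]))  # amplitude 0.5, phase -pi/2 (sine convention)
```

Note that 0.5*sin(pi/2) and 1.0*sin(pi/6) both equal 0.5, which is exactly why a single real sample is ambiguous; the quadrature component separates amplitude from phase.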

Gordon MM0YEQ


Re: [LAD] Determining Phase

2011-06-26 Thread Jörn Nettingsmeier

On 06/26/2011 05:47 PM, Arnold Krille wrote:

On Sunday 26 June 2011 11:58:54 Jörn Nettingsmeier wrote:

On 06/26/2011 10:50 AM, pshir...@boosthardware.com wrote:

Other than that, I'd make a really cool spectrum analyzer that ran the
Fourier analysis on two channels, correlated their phases then made a
+/- line vs. frequency for all to see so that the phase of the
components of the spectrum could be watched for phase relationships.

that is precisely what dual-fft tools for p.a. system calibration do,
and they're extremely useful.
they allow you to constantly monitor the system with _program_ material,
without having to use MLS noise or any other specific measurement signal.
one channel is used for the direct signal from the mixer, the other is
fed by a measurement microphone with delay compensation. you get instant
phase and amplitude response. good systems also give you an additional
confidence curve that tells you how much you can trust which parts of
the spectrum. for instance, if your program material is a boy soprano,
the confidence of the measurement in the low end is practically zero.


I use(d) japa for this :-) Although that lacks phase display...


it will do the job nicely in its difference setting, but you have to 
find (and compensate for) the delay manually, like a real man.
modern p.a. tools allow sissies like yours truly to just click on the 
find delay button and be done :-D



Re: [LAD] Determining Phase

2011-06-26 Thread Arnold Krille
On Sunday 26 June 2011 18:05:12 Jörn Nettingsmeier wrote:
 On 06/26/2011 05:47 PM, Arnold Krille wrote:
  On Sunday 26 June 2011 11:58:54 Jörn Nettingsmeier wrote:
  On 06/26/2011 10:50 AM, pshir...@boosthardware.com wrote:
  Other than that, I'd make a really cool spectrum analyzer that ran the
  Fourier analysis on two channels, correlated their phases then made a
  +/- line vs. frequency for all to see so that the phase of the
  components of the spectrum could be watched for phase relationships.
  
  that is precisely what dual-fft tools for p.a. system calibration do,
  and they're extremely useful.
  they allow you to constantly monitor the system with _program_ material,
  without having to use MLS noise or any other specific measurement
  signal. one channel is used for the direct signal from the mixer, the
  other is fed by a measurement microphone with delay compensation. you
  get instant phase and amplitude response. good systems also give you an
  additional confidence curve that tells you how much you can trust
  which parts of the spectrum. for instance, if your program material is
  a boy soprano, the confidence of the measurement in the low end is
  practically zero.
  
  I use(d) japa for this :-) Although that lacks phase display...
 
 it will do the job nicely in its difference setting, but you have to
 find (and compensate for) the delay manually, like a real man.
 modern p.a. tools allow sissies like yours truly to just click on the
 find delay button and be done :-D

The venues I used japa in are small enough, and the music slow enough, to just
use the second-slowest settings from japa and call it a day. :-D

But a PA setup helper with speaker management, crossover and a convolver engine
is probably one of the killer apps Linux audio is missing.
But then again, real men use jconvolver, drc and aliki to adapt for the room's
and PA's deficiencies. Unless they calculate the correction responses by hand...




Re: [LAD] Determining Phase

2011-06-26 Thread Rui Nuno Capela

On 06/26/2011 10:13 AM, pshir...@boosthardware.com wrote:

On Sat, 25 Jun 2011 16:23:29 +0200 (CEST), pshir...@boosthardware.com
wrote:

Hi,

Can anyone point me to a simple code example for how to determine the
phase at a specific time in a waveform?

ex. if I have a sample that is 5 seconds long and want to know the
phase
at 2.5 seconds

I'm open to code in any language or a scripted example if such a tool
exists. If there is a UI which has that feature I am also
interested.



in general (theory, i mean), phase equals delay; it can be computed
deterministically iff the signal is a pure tone (sinusoidal). otherwise
you need a reference signal and refer to its fundamental
frequency (1st-order harmonic), or, in the field, you can pick one
conventional signal tone (i've been told that 1 khz is a usual choice :)
and compare to that.

if you are comparing two known signals (often input vs. output of some
lti transformation, e.g. a filter), then the computation to do is
correlation. well, you should use fft for that, but i'm sure you know
that :)



You wouldn't happen to have an example of this code?

Perhaps there is something in qtractor along these lines already?



no, not at all :) qtractor doesn't have anything of the sort, sorry. i 
was just clearing dust from the top of my mind re. lti systems theory :)




I have looked at integrating something with librubberband but first things
first right?



hmmm... there's this wsola algorithm for time-stretching in the 
time-domain (cf. frequency-domain as in rubberband) where 
auto-correlation is computed to find the optimum overlap window point 
(by waveform similarity). well, that's maybe an idea spark... but again 
you need the zero-phase/origin reference signal waveform anyway to 
correlate to... 'coz, as others said, your question doesn't make much 
sense as is :)
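
The FFT-based correlation against a reference that Rui describes can be sketched as follows (a sketch; the noise signal and the 37-sample lag are arbitrary test data):

```python
import numpy as np

def delay_between(ref, sig):
    """Estimate the lag of sig relative to ref, in samples, from the
    peak of their FFT-based (circular) cross-correlation."""
    n = len(ref) + len(sig) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two
    R = np.fft.rfft(ref, nfft)
    S = np.fft.rfft(sig, nfft)
    xc = np.fft.irfft(S * np.conj(R), nfft)
    lag = int(np.argmax(xc))
    return lag if lag < nfft // 2 else lag - nfft   # wrap negative lags

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.r_[np.zeros(37), x][:4096]   # x delayed by 37 samples
print(delay_between(x, y))          # -> 37
```

For a single dominant frequency, the recovered delay can then be converted to a phase shift; without such a reference, as noted above, "the phase" of an arbitrary waveform is not well defined.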


cheers
--
rncbc aka Rui Nuno Capela
rn...@rncbc.org


[LAD] gladish crashes

2011-06-26 Thread Emanuel Rumpf
output + backtrace
-

canvas::ports_connected(3, 24, 2, 10)
canvas::ports_connected(1, 1, 4, 25)
canvas::ports_connected(1, 2, 4, 26)
canvas_cls::on_realize

Program received signal SIGSEGV, Segmentation fault.
0x0042b36c in ladish_room_proxy_get_recent_projects (proxy=0x0,
max_items=10, callback=0x4200a2 <add_recent_project>,
context=0x7fffd9b0) at ../proxies/room_proxy.c:481
481   if (!dbus_call(0, proxy_ptr->service, proxy_ptr->object,
IFACE_RECENT_ITEMS, "get", "q", &max_items, NULL, reply_ptr))



(gdb) bt
#0  0x0042b36c in ladish_room_proxy_get_recent_projects (proxy=0x0,
max_items=10, callback=0x4200a2 <add_recent_project>,
context=0x7fffd9b0) at ../proxies/room_proxy.c:481
#1  0x0042013f in fill_project_dynmenu (
callback=0x4213b6 <ladish_dynmenu_add_entry>, context=0x7fffe4021700)
at ../gui/menu.c:141
#2  0x00421683 in populate_dynmenu_menu (menu_item=0x7fffe4001c30,
dynmenu_ptr=0x7fffe4021700) at ../gui/dynmenu.c:167
#3  0x7730981c in g_closure_invoke ()
   from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#4  0x7731b019 in ?? ()
   from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#5  0x77324258 in g_signal_emit_valist ()
   from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#6  0x7732441f in g_signal_emit ()
   from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#7  0x74a566ce in gtk_widget_activate ()
   from /usr/lib/libgtk-x11-2.0.so.0
#8  0x7494f94d in gtk_menu_shell_activate_item ()
   from /usr/lib/libgtk-x11-2.0.so.0
#9  0x7494dce5 in ?? () from /usr/lib/libgtk-x11-2.0.so.0
#10 0x7fffec4a9ed4 in ?? () from /usr/lib/libdbusmenu-gtk.so.3
#11 0x7fffec4aadd8 in dbusmenu_gtk_parse_menu_structure ()
   from /usr/lib/libdbusmenu-gtk.so.3
#12 0x7fffec6b2a76 in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#13 0x7fffec6b2efa in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#14 0x7488795b in ?? () from /usr/lib/libgtk-x11-2.0.so.0
#15 0x7fffec6b2ad8 in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#16 0x7fffec6b2efa in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#17 0x7fffec6b2ad8 in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#18 0x7fffec6b2de1 in ?? ()
   from /usr/lib/gtk-2.0/2.10.0/menuproxies/libappmenu.so
#19 0x76e4b4eb in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#20 0x76e49bcd in g_main_context_dispatch ()
   from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#21 0x76e4a3a8 in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#22 0x76e4a9f2 in g_main_loop_run ()
   from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#23 0x74938af7 in gtk_main () from /usr/lib/libgtk-x11-2.0.so.0
#24 0x00413881 in main (argc=1, argv=0x7fffe348)
at ../gui/main.c:194


Re: [LAD] gladish crashes

2011-06-26 Thread Emanuel Rumpf
also valgrind is reporting the following:



==13299==
==13299== Conditional jump or move depends on uninitialised value(s)
==13299==at 0xA6540CB: __GI___strcasecmp_l (strcmp.S:243)
==13299==by 0xA5EDF60: __gconv_open (gconv_open.c:70)
==13299==by 0xA5FC106: _nl_find_msg (dcigettext.c:990)
==13299==by 0xA5FC818: __dcigettext (dcigettext.c:654)
==13299==by 0x5B76C94: ??? (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x5B7994D: g_dgettext (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x7EFA6FE: gtk_get_option_group (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA909: gtk_parse_args (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA978: gtk_init_check (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA9A8: gtk_init (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x4135AE: main (main.c:93)
==13299==
==13299== Use of uninitialised value of size 8
==13299==at 0xA656204: __GI___strcasecmp_l (strcmp.S:2257)
==13299==by 0xA5EDF60: __gconv_open (gconv_open.c:70)
==13299==by 0xA5FC106: _nl_find_msg (dcigettext.c:990)
==13299==by 0xA5FC818: __dcigettext (dcigettext.c:654)
==13299==by 0x5B76C94: ??? (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x5B7994D: g_dgettext (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x7EFA6FE: gtk_get_option_group (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA909: gtk_parse_args (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA978: gtk_init_check (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA9A8: gtk_init (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x4135AE: main (main.c:93)
==13299==
==13299== Use of uninitialised value of size 8
==13299==at 0xA656208: __GI___strcasecmp_l (strcmp.S:2258)
==13299==by 0xA5EDF60: __gconv_open (gconv_open.c:70)
==13299==by 0xA5FC106: _nl_find_msg (dcigettext.c:990)
==13299==by 0xA5FC818: __dcigettext (dcigettext.c:654)
==13299==by 0x5B76C94: ??? (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x5B7994D: g_dgettext (in
/lib/x86_64-linux-gnu/libglib-2.0.so.0.2800.6)
==13299==by 0x7EFA6FE: gtk_get_option_group (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA909: gtk_parse_args (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA978: gtk_init_check (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7EFA9A8: gtk_init (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x4135AE: main (main.c:93)
==13299==
Loading glade from ./gui/gladish.ui
==13299== Conditional jump or move depends on uninitialised value(s)
==13299==at 0xC758510: inflateReset2 (in
/lib/x86_64-linux-gnu/libz.so.1.2.3.4)
==13299==by 0xC758605: inflateInit2_ (in
/lib/x86_64-linux-gnu/libz.so.1.2.3.4)
==13299==by 0xCE03100: png_create_read_struct_2 (pngread.c:164)
==13299==by 0x1859C215: ??? (in
/usr/lib/gdk-pixbuf-2.0/2.10.0/loaders/libpixbufloader-png.so)
==13299==by 0x904971D: ??? (in /usr/lib/libgdk_pixbuf-2.0.so.0.2300.3)
==13299==by 0x9049A0B: gdk_pixbuf_new_from_file (in
/usr/lib/libgdk_pixbuf-2.0.so.0.2300.3)
==13299==by 0x80275CB: ??? (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x802CBAB: gtk_window_set_icon_from_file (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x41F602: set_main_window_icon (gtk_builder.c:39)
==13299==by 0x41F760: init_gtk_builder (gtk_builder.c:84)
==13299==by 0x413605: main (main.c:108)
==13299==






==13299== Conditional jump or move depends on uninitialised value(s)
==13299==at 0x844F20C: ??? (in /usr/lib/libgdk-x11-2.0.so.0.2400.4)
==13299==by 0xF0A127A: ??? (in /usr/lib/liboverlay-scrollbar-0.1.so.0.0.12)
==13299==by 0xF0A4A78: ??? (in /usr/lib/liboverlay-scrollbar-0.1.so.0.0.12)
==13299==by 0x56CD764: g_closure_invoke (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==by 0x56DE7E2: ??? (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==by 0x56E8257: g_signal_emit_valist (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==by 0x56E841E: g_signal_emit (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==by 0x802095D: gtk_widget_map (in
/usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7F60943: ??? (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x7E7EA0E: ??? (in /usr/lib/libgtk-x11-2.0.so.0.2400.4)
==13299==by 0x56CD764: g_closure_invoke (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==by 0x56DE7E2: ??? (in
/usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0.2800.6)
==13299==
==13299== Conditional jump or move depends on uninitialised value(s)
==13299==at 0x844C133: ??? (in /usr/lib/libgdk-x11-2.0.so.0.2400.4)
==13299==by 0x844C335: ??? (in /usr/lib/libgdk-x11-2.0.so.0.2400.4)
==13299==by 0x844CD90: ??? (in /usr/lib/libgdk-x11-2.0.so.0.2400.4)
==13299==by 

[LAD] Just spotted this

2011-06-26 Thread Folderol
Thought it might be of interest.

http://www.xmos.com/news/24-may-2011/xmos-first-industry-move-avb-software-open-source

-- 
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.


Re: [LAD] Just spotted this

2011-06-26 Thread Duncan Gray
I've worked with XMOS; they're first rate. I hope they make it. Their
processor technology is killer.


On 06/26/2011 01:03 PM, Folderol wrote:

Thought it might be of interest.

http://www.xmos.com/news/24-may-2011/xmos-first-industry-move-avb-software-open-source




Re: [LAD] Determining Phase

2011-06-26 Thread gene heskett
On Sunday, June 26, 2011 02:48:01 PM Gordon JC Pearce did opine:

 On Sat, 25 Jun 2011 16:23:29 +0200 (CEST)
 
 pshir...@boosthardware.com wrote:
  Hi,
  
  Can anyone point me to a simple code example for how to determine the
  phase at a specific time in a waveform?
  
  ex. if I have a sample that is 5 seconds long and want to know the
  phase at 2.5 seconds
  
  I'm open to code in any language or a scripted example if such a tool
  exists. If there is an ui which has that feature I am also interested.
 
 There isn't really a way to do this.  How would you tell the difference
 between 0.5*sin(pi/2) and 1.0*sin(pi/6) - try it and see, what are the
 answers?
 
 If you want to determine phase you need to know the amplitude.  The only
 sane way to do both is to use a complex sample with an in-phase and a
 quadrature component.
 
 Gordon MM0YEQ
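For reference, evaluating the two quoted expressions numerically shows why a single real sample cannot separate amplitude from phase (a standalone Python check, not part of the original mail):

```python
import math

# Two different (amplitude, phase) pairs produce exactly the same
# instantaneous sample value, so neither can be recovered from the
# sample alone:
a = 0.5 * math.sin(math.pi / 2)   # amplitude 0.5, phase 90 degrees
b = 1.0 * math.sin(math.pi / 6)   # amplitude 1.0, phase 30 degrees

print(a, b)  # both equal 0.5
```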

I do not see how a repeatable, and therefore measurable quadrature 
component can be developed in a complex, multi-frequency waveform since the 
quadrature component is just as frequency dependent as any other method of 
measurement.

Since the human ear is not sensitive to phasing, other than differences 
between the two ears from delay/echo/reverb effects that help us 
determine the direction of the source, this argument is, to me, moot 
and possibly a waste of time.

Cheers, gene
-- 
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
If parents would only realize how they bore their children.
-- G.B. Shaw
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Just spotted this

2011-06-26 Thread Fons Adriaensen
On Sun, Jun 26, 2011 at 01:24:39PM -0500, Duncan Gray wrote:

 I've worked with XMOS, they're first rate. I hope they make it. Their  
 processor technology is killer.

Couldn't agree more.

-- 
FA

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Just spotted this

2011-06-26 Thread Emanuel Rumpf
2011/6/26 Fons Adriaensen f...@linuxaudio.org:
 On Sun, Jun 26, 2011 at 01:24:39PM -0500, Duncan Gray wrote:

 I've worked with XMOS, they're first rate. I hope they make it. Their
 processor technology is killer.

 Couldn't agree more.

And the USB 2.0 Audio Class is finally coming:
http://www.xmos.com/products/development-kits/usbaudio2
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Yoshimi Midi Learn: 1st testing release

2011-06-26 Thread Renato
On Sun, 26 Jun 2011 12:35:22 +0200
louis cherel cherel.lo...@gmail.com wrote:

 On 26/06/2011 09:43, Renato wrote:
  Hello, this sounds good :) I tried compiling from git some weeks
  ago but got errors and didn't have time to report, I'll try again
  later today. May I ask:
  1) do the controls affect the sound in real time or only on the next
  note? I noticed while playing with yoshimi that if for example you
  tweak the filter cutoff frequency while a note is playing it won't
  affect the current sound, but only the next note will be affected.
  This would prevent making cool effects with automation...
  2) will this be merged in the main yoshimi project?
 
  cheers
  renato
 For the errors, you may need to install libsm-dev and the FLTK dev 
 packages (I don't know exactly which they are).
 To be sure, you can first install the official yoshimi package from
 your distro's package manager, and then build the
 yoshimi-midiLearn branch.
 

ok, I managed to compile correctly. I would not want to install
system-wide though, so I tried to run from the local dir but got:

renato@acerarch ~/src/yoshimi-midi-learn/prove/yoshimi/src $ ./yoshimi 
Default instrument tar file /usr/local/share/yoshimi/yoshimi-instruments.tar.gz 
not found
Failed to establish program bank database
Bad things happened, Yoshimi strategically retreats. Serious problems dealing 
with the instrument database
Flushing log:
Default instrument tar file /usr/local/share/yoshimi/yoshimi-instruments.tar.gz 
not found
Failed to establish program bank database
Bad things happened, Yoshimi strategically retreats. Serious problems dealing 
with the instrument database


should I point it to the yoshimi-instruments.tar.gz that's found one directory 
up (in the root of the cloned git)? how?

cheers
renato
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Determining Phase

2011-06-26 Thread Gordon JC Pearce
On Sun, 26 Jun 2011 14:54:48 -0400
gene heskett ghesk...@wdtv.com wrote:

 I do not see how a repeatable, and therefore measurable quadrature 
 component can be developed in a complex, multi-frequency waveform since the 
 quadrature component is just as frequency dependent as any other method of 
 measurement.

Really?  Because I'm generating some quadrature samples right now - they're how 
software-defined radios work.  In this case you generate two local oscillators 
90 degrees apart and use them to switch a pair of synchronous detectors.  The 
resulting downmixed RF is now at audio frequencies (just like a 
direct-conversion receiver) and can be passed to a soundcard, then processed on 
the PC to extract a particular signal.
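For readers following along, the two-oscillator mixing step Gordon describes can be sketched in a few lines of Python (a toy illustration, not lysdr's actual code; the LO frequency and sample rate here are made up for the example):

```python
import math

def quadrature_mix(samples, rate, lo_freq):
    """Mix a real signal against two local oscillators 90 degrees apart,
    as in a direct-conversion SDR front end.  Returns the (in-phase,
    quadrature) baseband pair; a real receiver would follow this with a
    low-pass filter to remove the mixing image."""
    i = [s * math.cos(2 * math.pi * lo_freq * n / rate)
         for n, s in enumerate(samples)]
    q = [-s * math.sin(2 * math.pi * lo_freq * n / rate)
         for n, s in enumerate(samples)]
    return i, q

# A test tone 100 Hz above a hypothetical 10 kHz LO lands at 100 Hz
# baseband after mixing, with phase recoverable via atan2(q, i).
rate, lo = 48000, 10000
tone = [math.sin(2 * math.pi * (lo + 100) * n / rate) for n in range(4800)]
i, q = quadrature_mix(tone, rate, lo)
```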

See https://github.com/gordonjcp/lysdr if you want to play.

Gordon MM0YEQ
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Yoshimi Midi Learn: 1st testing release

2011-06-26 Thread Emanuel Rumpf
2011/6/26 Renato renn...@gmail.com:

 Flushing log:
 Default instrument tar file 
 /usr/local/share/yoshimi/yoshimi-instruments.tar.gz not found


 should I point it to the yoshimi-instruments.tar.gz that's found one 
 directory up (in the root of the cloned git)? how?


Maybe a symlink? Note that the target must be given as an absolute path
(or as a path relative to /usr/local/share/yoshimi/, where the link
itself lives), otherwise the link will dangle:

sudo ln -s "$(pwd)/../yoshimi-instruments.tar.gz" \
  /usr/local/share/yoshimi/yoshimi-instruments.tar.gz
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] [OT] Comparison recordings of CoreSound Tetramic and Soundfield ST450

2011-06-26 Thread Jörn Nettingsmeier

Hi *!


For those interested in Ambisonic surround sound: finally I've managed 
to upload some side-by-side recordings of a Tetramic and the new ST450 
which have been sitting on my harddisk for way too long.


http://stackingdwarves.net/download/TetraMic_vs_ST450/

Hopefully the recordings are worth your time even if you're not 
currently shopping for a new surround microphone.


Attached is a README that goes with the audio files.


Enjoy,


Jörn



A comparison of the CoreSound Tetramic (CS 2050) 
  and a pre-series prototype of the Soundfield ST450.


  Jörn Nettingsmeier netti...@stackingdwarves.net
January 2011

Thanks to Soundfield Ltd. and S.E.A. Vertrieb & Consulting GmbH for
providing me with an ST450 for testing.

The Tetramic was recorded in A-Format and processed with Fons Adriaensen's
Tetraproc using the custom filter coefficients provided by CoreSound. 
The ST450 went through its preamp/converter box, which produces B-Format.
Both mics were co-incident, the ST450 mounted upright in its cradle, and
the Tetramic on top, in end-fire mode. They were recorded through an RME 
Micstasy with digitally controlled and matched gain, via ADAT through a 
Focusrite Saffire PRO26, to a Linux Audio workstation running Ardour.
The tracks have been roughly matched for equal loudness and rotated for
congruent localisation by ear.


All files are classical first-order B-format 24-bit WAV at 48 kHz.
If you want to listen to them on stereo speakers, you can either use a UHJ
encoder, or a virtual stereo microphone. Both are available as LADSPA
plugins for Linux and Mac users as part of Fons' AMB plugin package. For
users of that other operating system, Google is your friend.
For serious A/B comparison, you should probably import each pair of files
into some DAW and play it back in sync while switching between the mics. 
For recreational listening under Linux, try starting JACK and AmbDec with
a suitable first-order decoder, and then use

mplayer -channels 4 -ao jack:port=ambdec filename

.

*.*

First, two brief excerpts from a concert by The Kites, a recorder ensemble
from Germany. The concert was part of the Montag-Tontag series at the
Kunsthaus Essen, a very small location that holds about 40 guests if you
squeeze them a bit. Way too close-miked for recorders, directly in front of
the stage, about 1.5m away from the musicians, at standing ear-height. 
And of course the acoustics are quite hostile to recorders, so Fons'
zita-rev1 had to help a bit. Still too direct for my taste, but good for 
comparing mic performance.

Ye olde Englishe musicke, with some very neat ad-libbing over Pastime with
Good Company:

The_Kites-Three_Pieces_from_the_Court_of_Henry_VIII-CoreSound_TetraMic-24bit-48k.amb
The_Kites-Three_Pieces_from_the_Court_of_Henry_VIII-Soundfield_ST450-24bit-48k.amb

Dave Holland's Conference of the Birds. The Petzold bass flute was gently
amplified using an AER acousticube amp on stage. Something apparently bumped 
into the mic stand during the performance, so a GLAME 5-pole highpass at 100 Hz
was applied to both mics. Same reverb settings as before.

The_Kites-Conference_of_the_Birds-CoreSound_TetraMic-24bit-48k.amb
The_Kites-Conference_of_the_Birds-Soundfield_ST450-24bit-48k.amb

You will notice that the applause is louder on the rear left - that's how
the audience was seated.

*.*

Next, a 12-minute free improvisation by Vincent Royer on viola, Stefan Werni on
double bass and Thorsten Töpp on classical guitar. Same location, roughly
the same microphone position. No amplifiers - yes, that _is_ a weapons-grade
double bass. Beautiful collaboration with a keen sense for space and tone - 
give it a spin even if the words "free jazz" give you the creeps, chances
are you'll like it. In any case, it's a great test signal.

No artificial reverb this time, the Kunsthaus works quite well for this kind
of intimate music.

Royer-Werni-Töpp_Improvisation_#3-CoreSound_TetraMic-24bit-48k.amb
Royer-Werni-Töpp_Improvisation_#3-Soundfield_ST450-24bit-48k.amb

You should hear the viola to the right, the double bass in the center, and
the guitar to the left.

*.*

My private conclusion:

The ST450 is quite bassy and warm, which can be nice but it's not how I like
my microphones. However, it's easily tuned to taste with some gentle EQ, and
I can see how the basic sound would appeal to most musicians and location
recordists.
The Tetramic does an amazing job for the price, if and only if the music is 
loud enough.
I have no clear preference for the sound of either, although the ST450 is
nicer to tricky instruments such as strings, or voices.
But if you need better signal-to-noise ratio, there's no way around the
ST450. Of all the Soundfields I have used so far (a DSF-1, a Mark IV, an
ST350), the ST450 has the best localisation.
Unfortunately, it will set you back by about 4k pounds, which makes the
Tetramic look really good in comparison. But that price includes a very good

Re: [LAD] [OT] Comparison recordings of CoreSound Tetramic and Soundfield ST450

2011-06-26 Thread Harry Van Haaren
Hey Jörn!

Thanks for the upload, will be grabbing them throughout the next week or
two. Am really interested in whether there's much difference, both in
the sound itself and in the location encoding :)

Will post back once I've got them and listened!
Thanks again, -Harry


2011/6/27 Jörn Nettingsmeier netti...@folkwang-hochschule.de

 Hi *!


 For those interested in Ambisonic surround sound: finally I've managed to
 upload some side-by-side recordings of a Tetramic and the new ST450 which
 have been sitting on my harddisk for way too long.

 http://stackingdwarves.net/download/TetraMic_vs_ST450/

 Hopefully the recordings are worth your time even if you're not currently
 shopping for a new surround microphone.

 Attached is a README that goes with the audio files.


 Enjoy,


 Jörn




 ___
 Linux-audio-dev mailing list
 Linux-audio-dev@lists.linuxaudio.org
 http://lists.linuxaudio.org/listinfo/linux-audio-dev


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Determining Phase

2011-06-26 Thread Duncan Gray

On the subject of the word Quadrature:

The word "quadrature" is shorthand for the imaginary component of a 
complex-valued variable, and it is almost exclusively used in the 
domain of carrier-modulated waveforms. The word for the real component 
is "in-phase", which is why so many texts on the subject use I and Q 
for these two concepts. The choice of words goes back to radiomen in 
the 1920s who created a 90-degree delayed signal from the carrier (a 
quadrature relationship with respect to the carrier, in the college 
trigonometry literature of the time) and (not ignorantly, I suspect) 
left out the fact that this was simply a practical way to perform 
complex analysis on a narrow-band waveform.


@Gene:
It was unfortunate that you used the mathematically reserved word 
"complex" to describe a complicated signal. The concept of quadrature 
in an audio baseband signal assumes a single frequency, say 1000 Hz, 
wherein that arbitrarily chosen carrier is assumed for quadrature 
analysis. In fact, the Fourier transform assumes an infinite number of 
such carriers to create a continuous, complex-valued transform of the 
purely real audio channel. The FFT is a discrete Fourier transform 
that periodizes the waveform to create a finite number of discrete 
carriers, one per bin, spaced according to the length of the 
transform. Each frequency bin output by that transform has a real and 
an imaginary component -- quadrature.
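The point that every FFT bin carries an in-phase and a quadrature component is easy to verify. The following standalone Python sketch (not from the thread; the sample rate, bin index, and test phase are made up) recovers the amplitude and phase of a tone placed exactly on a bin:

```python
import cmath
import math

# Each DFT bin is a complex number: real part = in-phase component,
# imaginary part = quadrature component, relative to that bin's
# implied carrier.  Amplitude and phase then follow directly.
N = 1024
rate = 48000.0
k = 16                            # examine bin 16
freq = rate * k / N               # tone placed exactly on that bin
phase_in = math.pi / 3            # known phase to recover

x = [math.cos(2 * math.pi * freq * n / rate + phase_in) for n in range(N)]

# Naive single-bin DFT (numpy.fft.rfft would compute all bins at once)
bin_k = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

amplitude = 2 * abs(bin_k) / N    # recovers 1.0
phase = cmath.phase(bin_k)        # recovers pi/3
```

Note that this only gives a per-frequency phase within an analysis window, which is exactly the caveat being made: for a broadband signal there is no single "phase at time t", only a phase per carrier.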


@Gordon, please remember that your SDR code ASSUMES the carrier 
frequency so as to be able to analyze quadrature. It is a more 
complicated concept in the general complex analysis of a broadband 
signal such as audio.


Duncan
-- 
In the limit as productivity approaches infinity, all jobs become
entertainment or neurosis.


On 06/26/2011 05:02 PM, Gordon JC Pearce wrote:

On Sun, 26 Jun 2011 14:54:48 -0400
gene heskett ghesk...@wdtv.com wrote:


I do not see how a repeatable, and therefore measurable quadrature
component can be developed in a complex, multi-frequency waveform since the
quadrature component is just as frequency dependent as any other method of
measurement.

Really?  Because I'm generating some quadrature samples right now - they're how 
software-defined radios work.  In this case you generate two local oscillators 
90 degrees apart and use them to switch a pair of synchronous detectors.  The 
resulting downmixed RF is now at audio frequencies (just like a 
direct-conversion receiver) and can be passed to a soundcard, then processed on 
the PC to extract a particular signal.

See https://github.com/gordonjcp/lysdr if you want to play.

Gordon MM0YEQ
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev