Re: [LAD] a *simple* ring buffer, comments pls?

2011-07-20 Thread Kai Vehmanen

Hi,

thanks folks for the annual ringbuffer thread, it's always a pleasure to 
read (and you learn something new at every iteration). ;)


On Tue, 12 Jul 2011, Dan Muresan wrote:


> I wonder if
>
> {
>   pthread_mutex_t dummy = PTHREAD_MUTEX_INITIALIZER;
>   pthread_mutex_lock(&dummy);
>   pthread_mutex_unlock(&dummy);
> }
>
> doesn't provide a portable full memory barrier. The dummy is different


This is exactly what I've been thinking about using in my code. It does 
have a bit of an "I'm giving up" feel to it ;), but it would seem to be 
the only really easy, portable way to ensure a full barrier, even on 
exotic hardware. I'd love to use the new C++0x atomic+barrier ops, but I'm 
afraid that's still too bleeding edge as a build dependency, and I'm not 
motivated enough to add (and test) a spaghetti blob of autoconf magic to 
probe for the various available options.
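
For what it's worth, here's a minimal sketch (the names 
portable_full_barrier() and rb_write_one() are just made up for 
illustration, not from any existing code) of how the trick could be 
wrapped as a drop-in helper and used on the writer side of a 
single-writer/single-reader ring buffer:

#include <pthread.h>
#include <stddef.h>

/* Portable full-barrier fallback: a fresh, never-contended mutex whose
 * lock/unlock pair supplies the acquire+release ordering. */
static inline void portable_full_barrier(void)
{
    pthread_mutex_t dummy = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&dummy);
    pthread_mutex_unlock(&dummy);
}

/* Writer side: publish the data before publishing the new write index. */
void rb_write_one(float *buf, size_t size,
                  volatile size_t *write_idx, float sample)
{
    size_t w = *write_idx;
    buf[w % size] = sample;      /* 1. store the payload            */
    portable_full_barrier();     /* 2. order the store...           */
    *write_idx = w + 1;          /* 3. ...before exposing the index */
}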



> each time, so no contention -- but still inefficient since this would
> be a 2-step full barrier. Nevertheless, it could be a portable
> fallback.


True, but I wonder if the performance hit is really big enough to warrant 
the complexity (in terms of additional testing and maintaining more 
optimal barrier implementations). Predictability, reliability (data 
coherency) and code readability might be worth more than the performance 
hit. But yeah, some actual numbers would be needed (and thus I'm still 
just contemplating using this approach).
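
To get a first rough number, something along these lines would do (purely 
illustrative; results of course vary per machine and say nothing about the 
more exotic targets):

/* Rough micro-benchmark of the mutex-pair barrier.
 * Build e.g. with: gcc -O2 -std=gnu99 bench.c -lpthread -lrt */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static void mutex_pair_barrier(void)
{
    pthread_mutex_t dummy = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&dummy);
    pthread_mutex_unlock(&dummy);
}

int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        mutex_pair_barrier();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("mutex-pair barrier: %.1f ns/call\n", ns / N);

    /* Compare against the same loop around __sync_synchronize() (GCC
     * builtin) to see what a leaner barrier would actually save. */
    return 0;
}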


PS My plan B is to wait and see for another 10 years (and many more
   long threads about this topic on this list), and then
   I can at least just start using C++0x already... ;)
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Meego pulseaudio "compliance" and "enforcement" (was Re: [Meego-handset] Enabling Speakerphone)

2010-12-19 Thread Kai Vehmanen

Hi,

On Sun, 19 Dec 2010, Nick Copeland wrote:

> That was a very interesting post, will be keeping it. Now PA does use
> quite a lot of CPU on the N900 - the +/-2% it requires on my laptop
> translates into about 25% on Maemo, this really is quite an overhead and
> as far as I can tell it does not change with sampling rate (I get the
> same overhead with 48kHz as with 44.1kHz although I will retest that).


btw, one thing to watch out for when doing measurements is the CPU 
frequency... The N900 uses very aggressive cpufreq scaling, and even with 
heavy audio processing the CPU is not necessarily running at full speed 
(so even if top shows 25%, that doesn't necessarily mean that only 75% of 
the processing capacity is left for apps). But this is really only a 
problem when making measurements (the CPU frequency is boosted 
automatically when needed).


You can use standard kernel interfaces to limit the CPU frequencies 
(although for the sake of your battery, I'd stick to the defaults for all 
normal usage).
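
For reference, a minimal sketch using the stock cpufreq sysfs interface 
(this is the generic kernel knob, nothing N900-specific; it needs root, 
and you'll want to restore the original governor afterwards):

/* Pin cpu0 to the "performance" governor so that the %CPU figures from
 * top are comparable between runs. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen (root needed?)");
        return 1;
    }
    fputs("performance\n", f);
    fclose(f);
    return 0;
}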


And another btw, in case someone has missed this, do check Jyri Sarha's 
slideset presented at plumber's conf 2009:

http://linuxplumbersconf.org/2009/slides/Jyri-Sarha-audio_miniconf_slides.pdf

E.g. slide 15 has a good diagram of how the pipelines are connected.

Some things to try:
  - use a low-overhead output route (e.g. headset)
  - use a sample rate that matches the hw rate (48kHz in this case) ->
    no SRC is involved (see the sketch below)
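
To illustrate the second point, here is a minimal sketch that opens a 
playback stream directly at the 48kHz hw rate using the Pulseaudio 
"simple" API (the app/stream names are just placeholders), so the server 
has no reason to resample:

/* Build with: gcc test.c $(pkg-config --cflags --libs libpulse-simple) */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdio.h>

int main(void)
{
    /* Match the hw rate (48kHz on the N900) so no SRC kicks in. */
    static const pa_sample_spec ss = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 48000,
        .channels = 2
    };
    static short silence[48000 * 2];   /* one second of stereo silence */
    int error;

    pa_simple *s = pa_simple_new(NULL, "rate-test", PA_STREAM_PLAYBACK,
                                 NULL, "playback", &ss, NULL, NULL,
                                 &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(error));
        return 1;
    }
    if (pa_simple_write(s, silence, sizeof(silence), &error) < 0)
        fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(error));

    pa_simple_drain(s, &error);
    pa_simple_free(s);
    return 0;
}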


> I am assuming, possibly incorrectly, that it uses Lennart's correct resampling
> algorithm. Is this the case? If so is there any way an application can request


There should be no resampling happening (if you use 48kHz), so it should 
not be that. But if SRC is used in your case: the N900 uses the speex 
resampler (speex-fixed-2), optimized for ARM NEON (see Siarhei's comment 
in the bug for info and links: 
https://test.maemo.org/testzilla/show_bug.cgi?id=5794#c10 ).


Don't take this too seriously, but I've personally found it a bit 
surprising how much negative feedback there has been about the audio mixer 
CPU cycles. In phones (versus desktops/laptops), some processing is 
needed, as audio quality (especially for voice calls) is a closely 
followed aspect: it is benchmarked by various parties, and various 
organizations have detailed requirements concerning it. And you can't just 
solve this by using somewhat better components, as the devices are 
physically so small (and packed with stuff) and are often used in very 
audio-hostile/noisy environments; all that makes the acoustics design 
anything but easy (and fine-tuning with the help of SW always helps). E.g. 
the 10" laptop I'm writing this on has speakers almost half the length of 
a phone, and the netbook speakers still sound pretty bad. Maybe the laptop 
could use a bit of DSP help as well to get more out of the same hw... 
(BruteFIR, anyone..). :P


And OTOH, if the whole thing had been hidden in a HW blob (and, as is 
common, with no possibility to run 3rd party software on it, nor otherwise 
control it), roughly the same amount of cycles would still be spent, just 
on another core (and not showing up in Linux 'top' output). For overall 
battery consumption this can (though of course not in all cases) be worse, 
as the Linux CPU is waking up anyway, so it can actually be more efficient 
to do the processing there as well (the caches are hot and the CPU already 
powered, so doing a bit more work per wakeup is in fact quite efficient). 
Now that the N900 does this in a more open way, hordes of people are 
screaming bloody murder about wasting CPU on processing samples! ;D


Oh well, I do understand people want all the CPU cycles they can get for 
apps, and this is of course perfectly reasonable (and thus don't take this 
too seriously).



On a more serious note, the key takeaway (especially for meego-handset 
folks) is that this particular worry has very little to do with 
Pulseaudio. You can do a MeeGo device without any of the above-mentioned 
processing, use separate DSPs, and still use Pulseaudio on top as a 
frontend mixer (with very little overhead). Bypassing PA completely 
should be possible as well, but then the resource framework enforcement 
points need to be extended to cover the hw mixers (so that the handset 
behaves as expected with regards to routing, application priority and 
the volume policies in effect -> you don't want "new email" tones in your 
video recording, and so forth).

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Meego pulseaudio "compliance" and "enforcement" (was Re: [Meego-handset] Enabling Speakerphone)

2010-12-19 Thread Kai Vehmanen

Hi,

On Sat, 18 Dec 2010, Niels Mayer wrote:


> It has decent performance on maemo and appears to use pulseaudio,
> which eats 1/3 of the CPU of the 'sunvox' process. The sunvox

[...]

>  PID  PPID USER  STAT  RSS %MEM %CPU COMMAND
> 1916  1162 user  S    6388  2.5 35.1 /usr/bin/sunvox
>  825     1 pulse R <  3812  1.5 12.1 /usr/bin/pulseaudio --system


So as mentioned in other replies, Pulseaudio is not really the one to 
blame here. On the N900 there is some fixed processing that must be 
applied to all streams (and there are different pipelines for different 
uses and routes, so it's not always constant). With the N900 this code now 
lives in PA (as it's the system mixer; there's no hw mixer under the 
hood). The load is smaller e.g. when you use the earpiece/headset (versus 
the speaker), and it should also be lower if the apps use the hw rate 
(48kHz in the N900 case). You can challenge the N900 design, but none of 
this is really specific to Pulseaudio.


The N900 audio design in fact gives quite a lot of responsibility to the 
Linux side (perhaps an interesting comparison is the Palm Pre, as it also 
uses Pulseaudio). This has some cost, but also brings a great deal of 
flexibility (in terms of what can be implemented, but also for work flows, 
as you can develop & debug the whole audio system with the same Linux 
development tools). The more traditional approach is to put all of this 
logic into a separate HW block, but then the Linux side has even less 
control over, and access to, what happens to the audio streams. In those 
cases Pulseaudio becomes just a frontend mixer for streams originating 
from/to Linux apps. In this sense the N900 allows you much closer access 
to the HW, and there is more potential for really low-latency uses with 
this architecture. I'm personally thrilled about the potential of this 
approach (while not claiming the implementation is perfect yet).



Doing an "ls -lR /" in a remote xterm (over SSH) results in some audio
glitching, but no "desynchronization" where the audio just stops
playing.


This is a real problem for sure.

I believe the cause of the glitches is that sunvox's audio thread is not 
run under SCHED_FIFO/RR. The realtime/non-realtime sync point really 
should be in the application's hands (as it is by default in JACK apps). 
If Pulseaudio alone tries to handle this (trying to buffer audio from/to 
non-realtime apps), you will get huge latencies (and huge amounts of 
unswappable locked memory).
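
For reference, "in the application's hands" basically means the audio 
thread itself asks for realtime scheduling via the standard POSIX call, 
roughly like below (the priority value is arbitrary, and on a stock N900 
this will typically fail with EPERM unless something in the system grants 
the right -- which is exactly the management problem discussed next):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Ask for SCHED_FIFO for the given (usually the calling) thread. */
static int request_realtime(pthread_t thread, int priority)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = priority;   /* e.g. below the system mixer */

    int err = pthread_setschedparam(thread, SCHED_FIFO, &sp);
    if (err != 0)
        fprintf(stderr, "SCHED_FIFO denied: %s\n", strerror(err));
    return err;
}

/* e.g. from the audio thread: request_realtime(pthread_self(), 50); */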


But the problem here is not really Pulseaudio, but the mechanisms for 
granting and managing FIFO/RR. JACK of course provides a good system for 
setting the scheduling of its clients, but the core problem is not setting 
the scheduling policy itself; it is how to manage and coordinate the 
system-wide impact of multiple real-time apps, and their possible 
conflicts.


When you build a JACK system, jackd and related apps take priority over 
everything else on the system (in the real-time sense). Avoiding glitches 
for them is of utmost priority (the UI can lag as long as audio runs 
without xruns). On a mobile device, there are various other important 
functions that need real-time scheduling and that simply must work (the 
exact list varies from product to product, but there are always some). So 
even if you use apps like Sunvox, you still want to be able to 
receive/make calls reliably, be certain that the device wakes you up in 
the morning, and have it ready for quick camera shooting when the 
situation arises. So overall system reliability and responsiveness is 
really important. Having lots of uncoordinated (i.e. developed separately; 
not referring to quality or possible malicious intent here) real-time apps 
is a tricky problem. It's not a new problem, agreed, but it is still 
difficult to solve in the generic case.


Ideally we'd have more scheduling policy options in Linux for this (there 
has been no shortage of proposals, also from Nokia), but currently FIFO/RR 
are still the best options on the table for low-latency audio.


Now there are some building blocks in place already in MeeGo/Maemo (e.g. 
cgroups scheduling is used and there is a system for managing resources), 
but there are still gaps that cause trouble for apps. Currently the only 
way to achieve JACK-style low latency is to write your app's audio thread 
as a PA module. Aside from cgroups, real-time watchdogs could be used, and 
even at the risk of restarting past flamewars ;), PolicyKit is obviously 
one option as well to improve this area.


But to summarize, the core problems that still need addressing w.r.t. 
JACK-style low latency are not really related to Pulseaudio (nor are they 
solved simply by switching to JACK). If MeeGo/Maemo had used JACK, but had 
no managed system in place for granting FIFO/RR to apps, the only option 
for implementing low-latency engines would be to write them as in-process 
JACK clients. So you'd have exactly the same problem as with Pulseaudio 
(and have many apps running non-realtime despite their need for 
low-latency access to the mixer -> glitches in cases 

Re: [LAD] Meego pulseaudio "compliance" and "enforcement" (was Re: [Meego-handset] Enabling Speakerphone)

2010-12-18 Thread Kai Vehmanen

Hi,

On Sat, 18 Dec 2010, Niels Mayer wrote:


> The list comprises a good number of people with expertise in both
> pulseaudio and jack; hopefully the Jack sound server authors, including
> Paul Davis, will be willing to publicly share their perspectives on the
> issues raised regarding the role of pulseaudio on a handset and Linux
> audio performance/latency/efficiency issues.


btw, just a note that there are lots of LAD/JACK regulars (including me) 
currently working with Maemo/MeeGo, so there's some (or actually quite a 
lot of) history to this discussion.


Here's my reply to Benno's mail (July this year) about this same topic on 
maemo-developers: 
http://lists.maemo.org/pipermail/maemo-developers/2010-July/027087.html


I think many (not all, but many) of the issues you, Niels, raised in your 
original mail would happen with JACK as well:

  - Most of the CPU load attributed to Pulseaudio actually comes from
    algorithms specific to the product (in-process clients in JACK
    terms, so it would show up with jackd as well).
  - Bugs in kernel drivers that cause scheduling spikes (your WLAN
    scan example) would affect jackd just as badly (bear in mind
    that your average mobile phone has a huge number of hw drivers,
    so there's a lot of kernel code to go through for fine-tuning);
    the Maemo5 N900 firmware should fare much better (it has gone
    through more fine-tuning). This is very familiar to all of us
    building machines to run jackd. Even one badly behaving driver
    can ruin the whole thing, and hunting these down is not a small
    task.
  - Locking the sample rate is not specific to Pulseaudio. It's
    more related to the problem of supporting multiple, totally
    independent clients; e.g. you could lock the hw to 8000Hz for
    calls, but if video playback is then started while the call is
    still ongoing, do you a) stick to 8000Hz for the whole video as
    well, b) have a glitch as you need to reprogram the hw to a
    different rate, or c) do the common thing and run at a common
    backend rate (like 48000Hz) and resample some of the client
    streams.

Agreed, JACK has many strong points that apply in the MeeGo context as 
well, especially in providing a good API for out-of-process clients to get 
low-latency access to the audio engine. You can do the same with 
Pulseaudio, but it's more complicated and more prone to errors (e.g. the 
QAudio recording example Benno mentioned in the above thread). But then 
OTOH, Pulseaudio has many strong points as well. Both projects can be 
extended/developed. Of course, one important factor here is what the 
projects themselves want to do... and here Pulseaudio has a much more 
explicit focus on wanting to solve the wider audio-on-the-desktop problem, 
and it has had that focus for a long time.

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] List archives

2009-08-04 Thread Kai Vehmanen

Hi,

On Thu, 23 Jul 2009, Robin Gareus wrote:


> There seem to be several archives of this list:
>
>   http://lalists.stanford.edu/lad/
>
> The "original" LAD list server until 2002. Since then, they keep
> backup-copies of all list emails; subscription there is no longer
> possible.


well, to be precise, the list of LAD hosts:

 - 1998-2001: ginette.musique.umontreal.ca (Alex Burton)
  (e.g. http://lalists.stanford.edu/lad/1998/0007.html)
 - 2001-2007: music.columbia.edu (Douglas Repetto, Jörn Nettingsmeier)
  http://lalists.stanford.edu/lad/2001/May/0596.html
 - 2007-: lists.linuxaudio.org (Marc-Olivier Barre)
  http://lalists.stanford.edu/lau/2007/02/0828.html

The last two hosts have maintained their own web archives for lists. Then 
we additionally have various other 3rd party archives around the web.


Anyway, the original host at umontreal.ca did not provide a list archive. 
So in March 2000 I volunteered to set up and host the archive (with an 
address-mangling script from Paul Winkler that is still in use ;)):


  http://lalists.stanford.edu/lad/2000/Mar/0002.html

... in 2004, I could no longer provide the bandwidth for the archive:

  http://lalists.stanford.edu/lad/2004/10/0148.html

But Fernando offered to host the pages, and thus they moved to 
'lalists.stanford.edu/la*'. So these still combine the archives spanning 
all three list hosts. But as in 2004, I'm happy to get rid of the archive 
maintenance. :) And agreed, the best solution is to merge the archives 
with those already at lists.linuxaudio.org.


Now, I can upload the hypermail-created html tree (of certain 
years/months) to lists.linuxaudio.org, but the mboxes will require more 
effort (there is no clean set of mbox files for the whole archive; it has 
been created from personal mbox folders from various people and I don't 
have the full set at hand -- I can try to dig it up though).



> @Nando: on http://lalists.stanford.edu/ "website" links should point to
> http://lists.linuxaudio.org


Thanks, updated.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


Re: [LAD] RFC: Default discovery paths for LADSPA, LRDF, LV2 and DSSI (and more?)

2009-07-05 Thread Kai Vehmanen
Hi,

On Thu, 25 Jun 2009, Stefano D'Angelo wrote:
[using LADSPA unique ids]
> That is bad of Ecasound, since ladspa.h says:
> 
> "Plugin types should be identified by file and label rather than by index or 
> plugin name, which may be changed in
> new plugin versions."

Ecasound allows you to use both the label (-el) and the unique id (-eli) 
to identify plugins. In many cases the label is sufficient (versus 
file+label), and in the rare case of a conflict, the unique id is a 
practical way to select the correct plugin. At least nobody has complained 
about the current implementation so far. Still, adding a third option 
(file+label) is perhaps not a bad idea. It's just that for command-line 
use, the shorter variants are less cumbersome.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev