Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread Tanu Kaskinen
On Thu, 2014-04-03 at 21:34 +0200, David Henningsson wrote:
> On 04/03/2014 11:07 AM, Tanu Kaskinen wrote:
> > Hi all,
> > 
> > There's a big pile of routing patches from me that haven't been merged
> > to master. Some of them have been reviewed and are in the "routing"
> > branch, and some (most) are pending review. The plan was that Arun would
> > review them, but it turned out that he doesn't have time for that after
> > all. It appears that nobody has time to review those patches, so to
> > avoid blocking the work forever, I plan to start pushing them to master
> > without waiting for reviews any longer. If anyone has objections or
> > questions about this, let me know.
> > 
> > I don't expect the merging process to happen overnight, because
> > currently most of my time goes to the Tizen volume API, and there's
> > probably also a significant amount of rebasing work to do (the routing
> > branch forked from master in September).
> 
> When I stopped reviewing (mainly due to lack of time), the patches added
> more complexity than it solved actual problems. I tried to point that
> out repeatedly last year and help you back on track, but the latest was
> a "I give up" here [1]. If this situation has not changed significantly
> > since, my opinion is that I don't think we should merge anything of the
> routing work to master. This is because the cost of the added complexity
> weighs heavier than the benefit of added features (or solved problems).
> If it has changed significantly, could you give a summary on why I
> should re-evaluate this opinion?

I don't think there's significant change.

So you want to trigger merging only once there are significant benefits
for users. Peter and Arun, what's your opinion, is this a fair
requirement? What benefit would be significant enough? Which of these
(if any) would be sufficient?

1) Automatic port switching when an application wants to play to an
inactive node: "paplay --routing=some-inactive-port-node foo.wav"

2) Support for loopback routing without loading module-loopback
manually: "pactl set-node-routing some-input-port-node
some-output-port-node"

3) Support for automatic combine sink creation: "pactl set-node-routing
some-playback-stream-node output-node1,output-node2"

4) A Murphy module that uses nodes for routing

5) Non-Murphy routing module that does something smart (please define
"something smart")

6) Something else

> But a related question, if it's suddenly okay to push complicated stuff
> without review, should I go ahead and push my Ubuntu phone stuff as
> well? It's been waiting for review for about half a year now.

It appears to be rather self-contained, and doesn't add any public API,
so if you prefer pushing it now instead of waiting for review, I'm OK
with that.

-- 
Tanu

___
pulseaudio-discuss mailing list
pulseaudio-discuss@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/pulseaudio-discuss


Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread David Henningsson
On 04/04/2014 10:14 AM, Tanu Kaskinen wrote:
> On Thu, 2014-04-03 at 21:34 +0200, David Henningsson wrote:
>> On 04/03/2014 11:07 AM, Tanu Kaskinen wrote:
>>> Hi all,
>>>
>>> There's a big pile of routing patches from me that haven't been merged
>>> to master. Some of them have been reviewed and are in the "routing"
>>> branch, and some (most) are pending review. The plan was that Arun would
>>> review them, but it turned out that he doesn't have time for that after
>>> all. It appears that nobody has time to review those patches, so to
>>> avoid blocking the work forever, I plan to start pushing them to master
>>> without waiting for reviews any longer. If anyone has objections or
>>> questions about this, let me know.
>>>
>>> I don't expect the merging process to happen overnight, because
>>> currently most of my time goes to the Tizen volume API, and there's
>>> probably also a significant amount of rebasing work to do (the routing
>>> branch forked from master in September).
>>
>> When I stopped reviewing (mainly due to lack of time), the patches added
>> more complexity than it solved actual problems. I tried to point that
>> out repeatedly last year and help you back on track, but the latest was
>> a "I give up" here [1]. If this situation has not changed significantly
>> since, my opinion is that I don't think we should merge anything of the
>> routing work to master. This is because the cost of the added complexity
>> weighs heavier than the benefit of added features (or solved problems).
>> If it has changed significantly, could you give a summary on why I
>> should re-evaluate this opinion?
> 
> I don't think there's significant change.
> 
> So you want to trigger merging only once there are significant benefits
> for users. 

...and those benefits outweigh the cost of the additional complexity of
maintaining another node/edge layer. Which means that the more
complexity you add, the more user benefit you need.

> Peter and Arun, what's your opinion, is this a fair
> requirement? What benefit would be significant enough? Which of these
> (if any) would be sufficient?
> 
> 1) Automatic port switching when an application wants to play to an
> inactive node: "paplay --routing=some-inactive-port-node foo.wav"
> 
> 2) Support for loopback routing without loading module-loopback
> manually: "pactl set-node-routing some-input-port-node
> some-output-port-node"
> 
> 3) Support for automatic combine sink creation: "pactl set-node-routing
> some-playback-stream-node output-node1,output-node2"
> 
> 4) A Murphy module that uses nodes for routing
> 
> 5) Non-Murphy routing module that does something smart (please define
> "something smart")
> 
> 6) Something else

I was originally hoping for a generic solution to our current routing
issues, such as making it easy to configure which (hotpluggable)
devices/ports should be used in which scenarios, with a sensible
default. Maybe some type of device priority order, like Colin Guthrie
has suggested (and even implemented, but it was never merged).

And for the bug where the default sink/source can change after S3 to be fixed.

But it does not look like that's the direction you're heading, or is it?

>> But a related question, if it's suddenly okay to push complicated stuff
>> without review, should I go ahead and push my Ubuntu phone stuff as
>> well? It's been waiting for review for about half a year now.
> 
> It appears to be rather self-contained, and doesn't add any public API,
> so if you prefer pushing it now instead of waiting for review, I'm OK
> with that.

Ok, what do Peter and Arun think about this?

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic


Re: [pulseaudio-discuss] On scaling the HRIR in module-virtual-surround-sink

2014-04-04 Thread Tanu Kaskinen
On Thu, 2014-04-03 at 21:36 -0500, CCOR58 wrote:
> Have I subscribed to the wrong list?
> 
> I originally thought this was a help type list, but it appears to be 
> more of a developer's forum.

This list is for both kinds of discussions.

> I was looking for a forum where I could get a better understanding of 
> the overall Linux audio system, i.e. ALSA, JACK Audio, PulseAudio and 
> hardware and how they all interact.
> 
> The individual .net or .org sites for each of these separate entities 
> are detailed about their own projects but seem somewhat vague about how 
> they all tie together.
> 
> I thought I understood them: I had a box running Mint 16 with ALSA,
> JACK & PulseAudio, and I had set up the JACK patch system and understood
> what a dedicated connection was. Yet a 1004 Hz tone fed into the line-in
> on the hardware seemed to be fed everywhere, and the same with a mic,
> which caused a feedback loop and a loud squeal.
> 
> From what I read, once everything showed up in the JACK connections
> panel, JACK had control and this strange feedback should not have occurred.

What connections did you have in jack? If you had no connections at all,
then this sounds like the alsa mixer settings were set up so that there
was a direct loop in the hardware from line-in to speakers. AFAIK, JACK
doesn't control the ALSA mixer settings at all, and certainly not those
settings that can cause loops. You can hopefully fix this with
alsamixer. alsamixer has three modes that can be cycled with the tab
key: Playback, Capture and All. If you select the Playback mode (which
is the default anyway) and you see volume/mute elements that refer to
input (e.g. "Mic"), make sure those are muted, because if "Mic" playback
is not muted, it means that the mic input is played to the speakers (or
whatever output is enabled).
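For example (hedged: simple-mixer control names vary per card, so check the output of "amixer scontrols" first; "Mic" may be named differently or absent on your hardware), muting a stray Mic playback path from the command line might look like:

```shell
# List the simple mixer controls on the default card to find the
# playback element that corresponds to the mic/line input.
amixer scontrols

# Mute the "Mic" playback path, if the card has one. This stops the
# hardware from looping the mic input straight to the speakers.
amixer sset 'Mic' mute
```

This does the same thing as muting the element in alsamixer's Playback view, just non-interactively.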

-- 
Tanu



Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread Tanu Kaskinen
On Fri, 2014-04-04 at 10:39 +0200, David Henningsson wrote:
> On 04/04/2014 10:14 AM, Tanu Kaskinen wrote:
> > Peter and Arun, what's your opinion, is this a fair
> > requirement? What benefit would be significant enough? Which of these
> > (if any) would be sufficient?
> > 
> > 1) Automatic port switching when an application wants to play to an
> > inactive node: "paplay --routing=some-inactive-port-node foo.wav"
> > 
> > 2) Support for loopback routing without loading module-loopback
> > manually: "pactl set-node-routing some-input-port-node
> > some-output-port-node"
> > 
> > 3) Support for automatic combine sink creation: "pactl set-node-routing
> > some-playback-stream-node output-node1,output-node2"
> > 
> > 4) A Murphy module that uses nodes for routing
> > 
> > 5) Non-Murphy routing module that does something smart (please define
> > "something smart")
> > 
> > 6) Something else
> 
> I was originally hoping for a generic solution to our current routing
> issues, such as making it easy to configure which (hotpluggable)
> devices/ports should be used in which scenarios, with a sensible
> default. Maybe some type of device priority order, like Colin Guthrie
> has suggested (and even implemented, but it was never merged).
> 
> And for the bug where the default sink/source can change after S3 to be fixed.
> 
> But it does not look like that's the direction you're heading, or is it?

I'm heading towards "a generic solution to our current routing issues",
but that solution will depend on Murphy, which will provide the
configurability and the default routing rules. In my opinion, if a
non-Murphy-based solution is desired, another solution with good
configurability and better-than-current default routing should be
implemented by someone else.

If I understood correctly, you wish that I'd implement a full generic
non-Murphy-based solution before merging the node infrastructure, but
it's unclear to me whether that wish is a minimum requirement or not,
and if it's not, what's the minimum requirement? Something to remember
is that nodes aren't useful only for automatic routing, they also
provide a nicer routing interface for clients.

-- 
Tanu



Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread Peter Meerwald

> > Peter and Arun, what's your opinion, is this a fair
> > requirement? What benefit would be significant enough? Which of these
> > (if any) would be sufficient?

> Ok, what do Peter and Arun think about this?

Tanu is starting to do some advertising :), this is good!

> > 1) Automatic port switching when an application wants to play to an
> > inactive node: "paplay --routing=some-inactive-port-node foo.wav"
> > 
> > 2) Support for loopback routing without loading module-loopback
> > manually: "pactl set-node-routing some-input-port-node
> > some-output-port-node"

I think there is no point in keeping stuff separate forever; if it doesn't 
terribly interfere with the current code, routing should be merged soon

there is no other way to understand (and probably fix) whatever Tanu is 
doing; routing doesn't have to be 'perfect'

I'd ask for code under test/ and a list of meaningful/useful examples 
(such as those stated above) that can be checked and regression-tested

regards, p.

-- 

Peter Meerwald
+43-664-218 (mobile)


[pulseaudio-discuss] Less CPU overhead with a new protocol channel mechanism

2014-04-04 Thread David Henningsson
In low latency scenarios, PulseAudio uses up quite a bit of CPU. A while
ago I did some profiling and noticed that much of the time was spent
inside the ppoll syscall.

I couldn't let go of that problem, and I think optimising PulseAudio is
a good thing. So I went ahead and did some research, which ended up with
a lock-free ringbuffer in shared memory, combined with eventfds for
notification. I.e., I added a new channel I called "srchannel" in
addition to the existing "iochannel" that usually uses UNIX pipes.

When running this solution with my low-latency test programs, I ended up
with the following result. The tests were done on my core i3 laptop from
2010, and I just ran top and tried to take an approximate average.

Reclatencytest: Recording test program. Asks for 10 ms of latency, ends
up with a new packet every 5 ms.

With iochannel:
Pulseaudio main thread - 2.6% CPU
Alsa-source thread - 1.7% CPU
Reclatencytest - 2.6% CPU
Total: 6.9% CPU

With srchannel:
Pulseaudio main thread - 2.3% CPU
Alsa-source thread - 1.7% CPU
Reclatencytest - 1.7% CPU
Total: 5.3% CPU

I.e., CPU usage reduced by ~25%.

Palatencytest: Playback test program. Asks for 20 ms of latency (I tried
10 ms, but it was too unstable), ends up with a new packet every 8 ms.

With iochannel:
Pulseaudio main thread - 2.3% CPU
Alsa-sink thread - 2.2% CPU
Palatencytest - 1.3% CPU
Total: 5.8% CPU

With srchannel:
Pulseaudio main thread - 1.7% CPU
Alsa-sink thread - 2.2% CPU
Palatencytest - 1.0% CPU
Total: 4.9% CPU

I.e., CPU usage reduced by ~15%.

Now, this is not all there is to it. In a future generation of this
patch, I'd like to investigate the possibility of having the client
listen to more than one ringbuffer, so we can set up a ringbuffer
directly between the I/O-thread and the client, too. That should lead to
even bigger savings, and hopefully more stable audio as well (less
jitter if we don't pass through the non-RT main thread).

As for the implementation, I have a hacky/drafty patch which I'm happy
to show to anyone interested. Here's how the patch works:

Setup:

1) A client connects and SHM is enabled as usual. (In case SHM cannot
be enabled, we can't enable the new srchannel either.)
2) The server allocates a new memblock for the two ringbuffers (one in
each direction) and sends this to the client using the iochannel.
3) The server allocates two pa_fdsem objects (these are wrappers around
eventfd).
4) The server prepares an additional packet to the client, with a new
command PA_COMMAND_ENABLE_RINGBUFFER.
5) The server attaches the eventfds to the packet. Much like we do with
pa_creds today, file descriptors can be shared over a socket using the
mechanism described, e.g., here [1].
6) The client receives the memblock and then the packet with the eventfds.
7) Both client and server now enable the ringbuffer for all
packets from that moment on (assuming they don't need to send additional
pa_creds or eventfds, which have to be sent over the iochannel).
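The fd-passing in step 5 uses the standard SCM_RIGHTS ancillary-data mechanism described in [1]. A hedged, minimal sketch (the helper name send_fd is hypothetical, not from the patch; error handling is trimmed):

```c
/* Sketch of passing an open file descriptor (e.g. an eventfd) to the
 * peer over a connected UNIX-domain socket, the same mechanism used
 * for pa_creds today. */
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int sock, int fd_to_pass) {
    char dummy = 'F';  /* at least one byte of real payload is required */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    /* Buffer for the control message, properly aligned for cmsghdr. */
    union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } u;

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };

    /* Fill in the SCM_RIGHTS control message carrying the descriptor. */
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```

The receiver does the mirror image with recvmsg() and gets a new descriptor referring to the same eventfd object.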

The shared memblock contains two ringbuffers. There are atomic variables
to control the lock-free ringbuffer, so they have to be writable by both
sides. (As a quick hack, I just enabled both sides to write on all
memblocks.)

The two ringbuffer objects are encapsulated by an srchannel object,
which looks just like the iochannel to the outside world. Writing to an
srchannel first writes to the ringbuffer memory, increases the atomic
"count" variable, and signals the pa_fdsem. On the reader side that
wakes up the reader's pa_fdsem, the ringbuffer's memory is read and
"count" is decreased.

The pstream object has been modified to be able to read from both an
srchannel and an iochannel (in parallel), and writing can go to either
channel depending on circumstances.
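The write/read cycle described above can be sketched as a single-producer/single-consumer ring with an atomic "count" of readable bytes. This is a hedged standalone illustration, not the actual patch: all names are hypothetical, and the shared-memory placement and the pa_fdsem (eventfd) wakeup are omitted so the logic runs by itself.

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

#define RB_SIZE 64  /* ring capacity in bytes */

typedef struct {
    char data[RB_SIZE];
    unsigned read_idx;   /* touched only by the reader side */
    unsigned write_idx;  /* touched only by the writer side */
    atomic_uint count;   /* readable bytes, shared by both sides */
} ringbuffer;

/* Writer: copy the payload in, then publish it by increasing "count"
 * (in the real code this is the point where the fdsem is signalled).
 * Returns 0 on success, -1 if there is not enough free space. */
static int rb_write(ringbuffer *rb, const void *src, unsigned len) {
    if (RB_SIZE - atomic_load(&rb->count) < len)
        return -1;
    for (unsigned i = 0; i < len; i++)
        rb->data[(rb->write_idx + i) % RB_SIZE] = ((const char *) src)[i];
    rb->write_idx = (rb->write_idx + len) % RB_SIZE;
    atomic_fetch_add(&rb->count, len);
    return 0;
}

/* Reader: woken by the fdsem in the real code; consume the bytes and
 * decrease "count". Returns 0 on success, -1 if not enough data. */
static int rb_read(ringbuffer *rb, void *dst, unsigned len) {
    if (atomic_load(&rb->count) < len)
        return -1;
    for (unsigned i = 0; i < len; i++)
        ((char *) dst)[i] = rb->data[(rb->read_idx + i) % RB_SIZE];
    rb->read_idx = (rb->read_idx + len) % RB_SIZE;
    atomic_fetch_sub(&rb->count, len);
    return 0;
}
```

Because each index is owned by exactly one side and only "count" is shared, no lock is needed as long as there is one writer and one reader per direction, which matches the two-ringbuffer (one per direction) layout described above.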

Okay, so this was a fun project and it seems promising. How do you feel
I should proceed with it? I expect a response from you, perhaps along
some of these lines:

 1) Woohoo, this is great! Just make your patches upstreamable and I
promise I'll review them right away!

 2) Woohoo, this is great! But I don't have any time to review them, so
just finish your patches up, and push them without review!

 3) This is interesting, but I don't have any time to review them, so
put your patches in a drawer for the foreseeable future.

 4) This is interesting, but some reduced CPU usage in low latency
scenarios isn't worth the extra code to maintain. (And the extra 64K per
client, for the ringbuffers.)

 5) I think the entire idea is bad, because...

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic

[1] http://keithp.com/blogs/fd-passing/


Re: [pulseaudio-discuss] Less CPU overhead with a new protocol channel mechanism

2014-04-04 Thread Peter Meerwald
Hello,

> In low latency scenarios, PulseAudio uses up quite a bit of CPU. A while
> ago I did some profiling and noticed that much of the time was spent
> inside the ppoll syscall.

this is indeed a relevant problem; PA easily uses 20%+ CPU on 
embedded ARM systems in low-latency duplex workloads according to my 
measurements

>  1) Woohoo, this is great! Just make your patches upstreamable and I
> promise I'll review them right away!
> 
>  2) Woohoo, this is great! But I don't have any time to review them, so
> just finish your patches up, and push them without review!

I'm somewhere in between 1 and 2; I'd like to reproduce your 
measurements on ARM

p.

-- 

Peter Meerwald
+43-664-218 (mobile)


Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread David Henningsson
On 04/04/2014 11:44 AM, Peter Meerwald wrote:
> 
>>> Peter and Arun, what's your opinion, is this a fair
>>> requirement? What benefit would be significant enough? Which of these
>>> (if any) would be sufficient?
> 
>> Ok, what do Peter and Arun think about this?
> 
> Tanu is starting to do some advertising :), this is good!
> 
>>> 1) Automatic port switching when an application wants to play to an
>>> inactive node: "paplay --routing=some-inactive-port-node foo.wav"
>>>
>>> 2) Support for loopback routing without loading module-loopback
>>> manually: "pactl set-node-routing some-input-port-node
>>> some-output-port-node"
> 
> I think there is no point in keeping stuff separate forever; if it doesn't 
> terribly interfere with the current code, routing should be merged soon

It does interfere a lot with the current code.

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic


Re: [pulseaudio-discuss] Less CPU overhead with a new protocol channel mechanism

2014-04-04 Thread Tanu Kaskinen
On Fri, 2014-04-04 at 11:46 +0200, David Henningsson wrote:
> In low latency scenarios, PulseAudio uses up quite a bit of CPU. A while
> ago I did some profiling and noticed that much of the time was spent
> inside the ppoll syscall.
> 
> I couldn't let go of that problem, and I think optimising PulseAudio is
> a good thing. So I went ahead and did some research, which ended up with
> a lock-free ringbuffer in shared memory, combined with eventfds for
> notification. I.e., I added a new channel I called "srchannel" in
> addition to the existing "iochannel" that usually uses UNIX pipes.

What does "sr" in "srchannel" mean?

> When running this solution with my low-latency test programs, I ended up
> with the following result. The tests were done on my core i3 laptop from
> 2010, and I just ran top and tried to take an approximate average.
> 
> Reclatencytest: Recording test program. Asks for 10 ms of latency, ends
> up with a new packet every 5 ms.
> 
> With iochannel:
> Pulseaudio main thread - 2.6% CPU
> Alsa-source thread - 1.7% CPU
> Reclatencytest - 2.6% CPU
> Total: 6.9% CPU
> 
> With srchannel:
> Pulseaudio main thread - 2.3% CPU
> Alsa-source thread - 1.7% CPU
> Reclatencytest - 1.7% CPU
> Total: 5.3% CPU
> 
> I.e., CPU usage reduced by ~25%.
> 
> Palatencytest: Playback test program. Asks for 20 ms of latency (I tried
> 10 ms, but it was too unstable), ends up with a new packet every 8 ms.
> 
> With iochannel:
> Pulseaudio main thread - 2.3% CPU
> Alsa-sink thread - 2.2% CPU
> Palatencytest - 1.3% CPU
> Total: 5.8% CPU
> 
> With srchannel:
> Pulseaudio main thread - 1.7% CPU
> Alsa-sink thread - 2.2% CPU
> Palatencytest - 1.0% CPU
> Total: 4.9% CPU
> 
> I.e., CPU usage reduced by ~15%.
> 
> Now, this is not all there is to it. In a future generation of this
> patch, I'd like to investigate the possibility of having the client
> listen to more than one ringbuffer, so we can set up a ringbuffer
> directly between the I/O-thread and the client, too. That should lead to
> even bigger savings, and hopefully more stable audio as well (less
> jitter if we don't pass through the non-RT main thread).
> 
> As for the implementation, I have a hacky/drafty patch which I'm happy
> to show to anyone interested. Here's how the patch works:
> 
> Setup:
> 
> 1) A client connects and SHM is enabled as usual. (In case SHM cannot
> be enabled, we can't enable the new srchannel either.)
> 2) The server allocates a new memblock for the two ringbuffers (one in
> each direction) and sends this to the client using the iochannel.
> 3) The server allocates two pa_fdsem objects (these are wrappers around
> eventfd).
> 4) The server prepares an additional packet to the client, with a new
> command PA_COMMAND_ENABLE_RINGBUFFER.

Is this negotiation done in a way that allows us to cleanly drop support
for srchannel later if we want? Let's say that this is implemented in
protocol version 32 and for some reason removed in 33. If the server
uses protocol version 32 and the client uses version 33, can the client
refuse the srchannel feature and fall back to something else? Or vice
versa, if the server uses version 33 and the client uses version 32, can
the server refuse this feature and fall back to something else? I'm just
thinking that this might not be the final solution for IPC, someone
might implement a "kdbus channel", for example.

> 5) The server attaches the eventfds to the packet. Much like we do with
> pa_creds today, file descriptors can be shared over a socket using the
> mechanism described, e.g., here [1].
> 6) The client receives the memblock and then the packet with the eventfds.
> 7) Both client and server now enable the ringbuffer for all
> packets from that moment on (assuming they don't need to send additional
> pa_creds or eventfds, which have to be sent over the iochannel).
> 
> The shared memblock contains two ringbuffers. There are atomic variables
> to control the lock-free ringbuffer, so they have to be writable by both
> sides. (As a quick hack, I just enabled both sides to write on all
> memblocks.)
> 
> The two ringbuffer objects are encapsulated by an srchannel object,
> which looks just like the iochannel to the outside world. Writing to an
> srchannel first writes to the ringbuffer memory, increases the atomic
> "count" variable, and signals the pa_fdsem. On the reader side that
> wakes up the reader's pa_fdsem, the ringbuffer's memory is read and
> "count" is decreased.

How does rewinding work with the ringbuffers? Is this a relevant
question at all, or is this just a channel for sending packets just like
with iochannel-backed pstream (I'm not terribly familiar with how the
current iochannel and SHM transport work)?

> The pstream object has been modified to be able to read from both an
> srchannel and an iochannel (in parallel), and writing can go to either
> channel depending on circumstances.

Do you expect that control data would go via iochannel and audio data
would go through srchannel, if y

Re: [pulseaudio-discuss] Less CPU overhead with a new protocol channel mechanism

2014-04-04 Thread David Henningsson
On 04/04/2014 03:24 PM, Tanu Kaskinen wrote:
> On Fri, 2014-04-04 at 11:46 +0200, David Henningsson wrote:
>> In low latency scenarios, PulseAudio uses up quite a bit of CPU. A while
>> ago I did some profiling and noticed that much of the time was spent
>> inside the ppoll syscall.
>>
>> I couldn't let go of that problem, and I think optimising PulseAudio is
>> a good thing. So I went ahead and did some research, which ended up with
>> a lock-free ringbuffer in shared memory, combined with eventfds for
>> notification. I.e., I added a new channel I called "srchannel" in
>> addition to the existing "iochannel" that usually uses UNIX pipes.
> 
> What does "sr" in "srchannel" mean?

"Shared Ringbuffer" or "SHM Ringbuffer".

>> When running this solution with my low-latency test programs, I ended up
>> with the following result. The tests were done on my core i3 laptop from
>> 2010, and I just ran top and tried to take an approximate average.
>>
>> Reclatencytest: Recording test program. Asks for 10 ms of latency, ends
>> up with a new packet every 5 ms.
>>
>> With iochannel:
>> Pulseaudio main thread - 2.6% CPU
>> Alsa-source thread - 1.7% CPU
>> Reclatencytest - 2.6% CPU
>> Total: 6.9% CPU
>>
>> With srchannel:
>> Pulseaudio main thread - 2.3% CPU
>> Alsa-source thread - 1.7% CPU
>> Reclatencytest - 1.7% CPU
>> Total: 5.3% CPU
>>
>> I.e., CPU usage reduced by ~25%.
>>
>> Palatencytest: Playback test program. Asks for 20 ms of latency (I tried
>> 10 ms, but it was too unstable), ends up with a new packet every 8 ms.
>>
>> With iochannel:
>> Pulseaudio main thread - 2.3% CPU
>> Alsa-sink thread - 2.2% CPU
>> Palatencytest - 1.3% CPU
>> Total: 5.8% CPU
>>
>> With srchannel:
>> Pulseaudio main thread - 1.7% CPU
>> Alsa-sink thread - 2.2% CPU
>> Palatencytest - 1.0% CPU
>> Total: 4.9% CPU
>>
>> I.e., CPU usage reduced by ~15%.
>>
>> Now, this is not all there is to it. In a future generation of this
>> patch, I'd like to investigate the possibility of having the client
>> listen to more than one ringbuffer, so we can set up a ringbuffer
>> directly between the I/O-thread and the client, too. That should lead to
>> even bigger savings, and hopefully more stable audio as well (less
>> jitter if we don't pass through the non-RT main thread).
>>
>> As for the implementation, I have a hacky/drafty patch which I'm happy
>> to show to anyone interested. Here's how the patch works:
>>
>> Setup:
>>
>> 1) A client connects and SHM is enabled as usual. (In case SHM cannot
>> be enabled, we can't enable the new srchannel either.)
>> 2) The server allocates a new memblock for the two ringbuffers (one in
>> each direction) and sends this to the client using the iochannel.
>> 3) The server allocates two pa_fdsem objects (these are wrappers around
>> eventfd).
>> 4) The server prepares an additional packet to the client, with a new
>> command PA_COMMAND_ENABLE_RINGBUFFER.
> 
> Is this negotiation done in a way that allows us to cleanly drop support
> for srchannel later if we want? Let's say that this is implemented in
> protocol version 32 and for some reason removed in 33. If the server
> uses protocol version 32 and the client uses version 33, can the client
> refuse the srchannel feature and fall back to something else? Or vice
> versa, if the server uses version 33 and the client uses version 32, can
> the server refuse this feature and fall back to something else? I'm just
> thinking that this might not be the final solution for IPC, someone
> might implement a "kdbus channel", for example.

Right now, this is just a hacky patch (that always enables the srchannel
if shm is available). This needs a few more thoughts, probably. But one
idea could be that the client currently sends a bit to indicate it wants
SHM, and we could add another bit to indicate that it wants srchannels
as well. Another would be that the command to enable the ringbuffer
requires a positive reply from the client before the actual switch.

>> 5) The server attaches the eventfds to the packet. Much like we do with
>> pa_creds today, file descriptors can be shared over a socket using the
>> mechanism described, e.g., here [1].
>> 6) The client receives the memblock and then the packet with the eventfds.
>> 7) Both client and server now enable the ringbuffer for all
>> packets from that moment on (assuming they don't need to send additional
>> pa_creds or eventfds, which have to be sent over the iochannel).
>>
>> The shared memblock contains two ringbuffers. There are atomic variables
>> to control the lock-free ringbuffer, so they have to be writable by both
>> sides. (As a quick hack, I just enabled both sides to write on all
>> memblocks.)
>>
>> The two ringbuffer objects are encapsulated by an srchannel object,
>> which looks just like the iochannel to the outside world. Writing to an
>> srchannel first writes to the ringbuffer memory, increases the atomic
>> "count" variable, and signals the pa_fdsem. On the reader side that
>> wakes up the reader's

Re: [pulseaudio-discuss] Heads-up: the routing patches will start to get merged soon

2014-04-04 Thread David Henningsson
On 04/04/2014 11:31 AM, Tanu Kaskinen wrote:
> I'm heading towards "a generic solution to our current routing issues",
> but that solution will depend on Murphy, which will provide the
> configurability and the default routing rules. In my opinion, if a
> non-Murphy-based solution is desired, another solution with good
> configurability and better-than-current default routing should be
> implemented by someone else.

(Just summing up what we discussed on IRC)

So the result from all this work is that normal desktop users will get
nothing, except an API and quite some infrastructure to maintain.

> If I understood correctly, you wish that I'd implement a full generic
> non-Murphy-based solution before merging the node infrastructure, but
> it's unclear to me whether that wish is a minimum requirement or not,
> and if it's not, what's the minimum requirement? 

I'm not sure what to answer to this question right now. I'd like to hear
what others have to say as well.

In addition, Colin Guthrie's patches from two years ago, which implement
the device priority lists, should perhaps be revived, either instead of
this routing patch set, or in parallel/combination with it. Because
that's something that would actually bring benefit to users. And in
hindsight, we probably should have merged that patch set instead of
waiting for this routing patch set.

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic


Re: [pulseaudio-discuss] recording from Built-in Audio too slow

2014-04-04 Thread mattes
Does anybody have some thoughts on this?

I did some more googling on the issue. All I find
are problem descriptions but no solutions.

Some indicate this is supposedly an application problem.
But the question arises: why does the app react differently
depending on where you pull the data from in PulseAudio?

I tested (see below) two different apps with very similar results.
Others using Audacity experience the same problem.

I don't know much about PulseAudio,
so could someone shed some light on this?

It's appreciated.

This is happening on a Fedora 19 system, as well as on Fedora 17.

Mat



On Fri, 28 Mar 2014 20:27:32 -0700 "mattes"  wrote

> Trying to record audio that is already playing on the system.
> E.g. a live conference. For recording I use gnome-sound-recorder or
> audio-recorder, which by default records from the mic input.
> Using pavucontrol as a helper, I switch, under the Recording tab,
> to the 'Monitor of Built-in Audio Analog Stereo' to get access to the
> internal audio channel.
> 
> It works, but there is a nasty side effect. When playing back the recorded
> sound clip, I noticed that the pitch is different. As it turns out, the clip
> plays in slow motion, roughly 10%+ slower. Enough to be annoying. It seems
> that the playback time is longer than the actual recording time.
> 
> I switched to a different recorder, but no change; the problem is still evident.
> 
> One thing I noticed is that the slowdown does not occur when I record e.g.
> from the microphone. Starting the recording from the mic input and then switching
> during the recording to 'Monitor of Built-in Audio' shows that the slowdown
> starts when the switch happens.
> 
> The laptop is running Fedora 19, close to up-to-date.
> 
> Any advice how this can be fixed?
> 
> Mat
> 
> this way I get access to the

