Re: pipewire memory usage

2021-12-14 Thread Dominique Martinet
Carlos O'Donell wrote on Tue, Dec 14, 2021 at 11:07:42AM -0500:
> > So I guess we're just chasing after artifacts from the allocator, and
> > it'll be hard to tell which it is when I happen to see pipewire-pulse
> > with high memory later on...
> 
> It can be difficult to tell the difference between:
> (a) allocator caching
> (b) application usage
> 
> To help with this, we developed some additional tracing utilities:
> https://pagure.io/glibc-malloc-trace-utils

Thanks for the pointer, I knew something could do this but I couldn't
remember what it was.

I don't see this in any package, maybe it'd be interesting to ship these
for easy use?
(yes, it's not difficult to git clone and configure/make locally, but
I'll forget about it again whereas a package might be easier to
remember)

For now, I can confirm that all memory is indeed freed in a timely
manner as far as pipewire-pulse knows.

> > From what I can see the big allocations are (didn't look at lifetime of each
> > alloc):
> >  - load_spa_handle for audioconvert/libspa-audioconvert allocs 3.7MB
> >  - pw_proxy_new allocates 590k
> >  - reply_create_playback_stream allocates 4MB
> >  - spa_buffer_alloc_array allocates 1MB from negotiate_buffers
> >  - spa_buffer_alloc_array allocates 256K x2 + 128K
> >from negotiate_link_buffers
> 
> On a 64-bit system the maximum dynamic allocation size is 32MiB.
> 
> As you call malloc with ever larger values the dynamic scaling will scale up to
> at most 32MiB (half of a 64MiB heap). So it is possible that all of these
> allocations are placed on the mmap/sbrk'd heaps and stay there for future
> usage until freed back.

Yes, that's my guess as well - as they're all different sizes the cache
can blow up.

> Could you try running with this env var:
> 
> GLIBC_TUNABLES=glibc.malloc.mmap_threshold=131072
> 
> Note: See `info libc tunables`.

With this the max moved down from ~300-600MB to 80-150MB, and it comes
back down to 80-120MB afterwards instead of ~300MB.


> > maybe some of these buffers sticking around for the duration of the
> > connection could be pooled and shared?
>  
> They are pooled and shared if they are cached by the system memory allocator.
> 
> All of tcmalloc, jemalloc, and glibc malloc attempt to cache the userspace
> requests with different algorithms that match given workloads.

Yes, I didn't mean pooling as in a pooling allocator, but live pooling
of the objects themselves, e.g. all clients could share the same objects
when they need them.
I can understand buffers being made per-client, so an overhead of 1-2MB
per client is acceptable, but the bulk of the spa handle seems to be
storing many big ports?

$ pahole -y impl spa/plugins/audioconvert/libspa-audioconvert.so.p/merger.c.o 
struct impl {
...
struct port                in_ports[64];         /*     256 1153024 */
/* --- cacheline 18020 boundary (1153280 bytes) --- */
struct port                out_ports[65];        /* 1153280 1171040 */
/* --- cacheline 36317 boundary (2324288 bytes) was 32 bytes ago --- */
struct spa_audio_info      format;               /* 2324320     284 */
...
$ pahole -y impl spa/plugins/audioconvert/libspa-audioconvert.so.p/splitter.c.o
struct impl {
...
struct port                in_ports[1];          /*     184   18056 */
/* --- cacheline 285 boundary (18240 bytes) --- */
struct port                out_ports[64];        /*   18240 1155584 */
/* --- cacheline 18341 boundary (1173824 bytes) --- */
...

Which themselves have a bunch of buffers:
struct port {
...
struct buffer  buffers[32];  /*   576 17408 */

(pahole also prints useful hints that the structures have quite a bit of
padding, so some optimization there could save some scraps, but I think
it's more fundamental than this)
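
(To make the arithmetic explicit, here's a stripped-down sketch of how those
numbers multiply out -- the field names and sizes below are simplified
stand-ins, not the real spa definitions:)

#include <stdio.h>

/* Simplified stand-ins, not the actual pipewire/spa structures. */
struct buffer { unsigned char data[544]; };   /* roughly 17408 / 32 bytes each */

struct port {
    struct buffer buffers[32];                /* 32 * 544 = ~17KB per port */
};

struct impl {
    struct port in_ports[64];                 /* 64 * ~17KB = ~1.1MB */
    struct port out_ports[65];                /* 65 * ~17KB = ~1.1MB */
};

int main(void)
{
    /* prints ~2.2MB, the same ballpark as the pahole output above */
    printf("sizeof(struct impl) = %zu bytes\n", sizeof(struct impl));
    return 0;
}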


I understand that allocating once in bulk is ideal for latency, so I have
no problem with overallocating a bit, but I'm not sure we need so many
buffers lying around when clients are muted and probably not using most
of them :)
(I also understand that this isn't an easy change I'm asking about, it
doesn't have to be immediate)
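
(A rough sketch of the kind of thing I mean -- purely illustrative, using
stand-in types rather than the real spa structures: keep a pointer per port and
only allocate the buffer array while the stream is actually running:)

#include <stdlib.h>

/* Illustrative stand-ins, not the real spa/pipewire structures. */
struct buffer { unsigned char data[544]; };

struct port {
    struct buffer *buffers;   /* allocated on demand instead of buffers[32] inline */
    unsigned n_buffers;
};

/* Allocate the buffers only when the port actually starts carrying data. */
static int port_start(struct port *p, unsigned n_buffers)
{
    if (p->buffers == NULL) {
        p->buffers = calloc(n_buffers, sizeof(*p->buffers));
        if (p->buffers == NULL)
            return -1;
        p->n_buffers = n_buffers;
    }
    return 0;
}

/* Give the memory back while the client sits idle/muted. */
static void port_suspend(struct port *p)
{
    free(p->buffers);
    p->buffers = NULL;
    p->n_buffers = 0;
}

int main(void)
{
    struct port p = { 0 };
    port_start(&p, 32);    /* ~17KB paid only while the stream is active */
    port_suspend(&p);      /* ...and released again when it is not */
    return 0;
}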


BTW I think we're getting into the gritty details, which might be fine
for the list but probably leaves some pipewire devs out. Perhaps it's
time to move to a new pipewire issue?
-- 
Dominique


Re: pipewire memory usage

2021-12-14 Thread Carlos O'Donell
On 12/14/21 07:08, Dominique Martinet wrote:
> I've double-checked with traces in load_spa_handle/unref_handle and it
> is all free()d as soon as the client disconnects, so there's no reason
> the memory would still be used... And I think we're just looking at some
> malloc optimisation not releasing the memory.
> 
> To confirm, I've tried starting pipewire-pulse with jemalloc loaded,
> LD_PRELOAD=/usr/lib64/libjemalloc.so , and interestingly after the 100
> clients exit the memory stays at ~300-400MB, but as soon as a single new
> client connects it drops back down to 20MB, so that seems to confirm it.
> (with tcmalloc it stays all the way up at 700+MB...)

 
> So I guess we're just chasing after artifacts from the allocator, and
> it'll be hard to tell which it is when I happen to see pipewire-pulse
> with high memory later on...

It can be difficult to tell the difference between:
(a) allocator caching
(b) application usage

To help with this, we developed some additional tracing utilities:
https://pagure.io/glibc-malloc-trace-utils

The idea was to get a full API trace of malloc family calls and then play them
back in a simulator to evaluate the heap/arena usage when threads were involved.

Knowing the exact API calls lets you determine if you have (a), where the API
calls show a small usage but in reality RSS is higher, or (b), where the API
calls show there are some unmatched free()s and the usage is growing.
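
If you only want a quick in-process hint before setting up a full trace, plain
glibc malloc_stats()/malloc_trim() can already suggest which case you are in; a
minimal sketch (standard glibc APIs, not part of the trace utilities above, and
the exact counters printed vary between glibc versions):

#include <malloc.h>

int main(void)
{
    /* ... run the workload you want to inspect up to this point ... */

    malloc_stats();   /* prints per-arena "system bytes" / "in use bytes" to stderr */
    malloc_trim(0);   /* ask glibc to hand cached free memory back to the kernel */
    malloc_stats();   /* if "in use" stays flat while "system" drops, the extra
                         RSS was allocator caching (a); if "in use" itself keeps
                         growing across runs, it is application usage (b) */
    return 0;
}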

It seems like you used jemalloc and then found that memory usage stays low?

If that is the case it may be userspace caching from the allocator.

jemalloc is particularly lean with a time-decay thread that frees back to the OS
in order to reduce memory usage down to a fixed percentage. The consequence of
this is that you get latency on the allocation side, and the application has to
take this into account.
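
For completeness: that decay rate is tunable. With the LD_PRELOAD test the
MALLOC_CONF environment variable controls it; an application linked directly
against jemalloc can also set it via the documented malloc_conf symbol. A tiny
sketch, with purely illustrative values:

/* Illustration only: ask jemalloc to return dirty/muzzy pages to the OS
 * after roughly one second instead of the defaults. */
const char *malloc_conf = "dirty_decay_ms:1000,muzzy_decay_ms:1000";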

> From what I can see the big allocations are (didn't look at lifetime of each
> alloc):
>  - load_spa_handle for audioconvert/libspa-audioconvert allocs 3.7MB
>  - pw_proxy_new allocates 590k
>  - reply_create_playback_stream allocates 4MB
>  - spa_buffer_alloc_array allocates 1MB from negotiate_buffers
>  - spa_buffer_alloc_array allocates 256K x2 + 128K
>from negotiate_link_buffers

On a 64-bit system the maximum dynamic allocation size is 32MiB.

As you call malloc with ever larger values the dynamic scaling will scale up to
at most 32MiB (half of a 64MiB heap). So it is possible that all of these
allocations are placed on the mmap/sbrk'd heaps and stay there for future
usage until freed back.

Could you try running with this env var:

GLIBC_TUNABLES=glibc.malloc.mmap_threshold=131072

Note: See `info libc tunables`.
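
The same effect can also be tested from inside the process with mallopt(); a
minimal sketch (setting the threshold explicitly also turns off the dynamic
scaling described above):

#include <malloc.h>

int main(void)
{
    /* Allocations of 128KiB or more now get their own mmap() and are
     * returned to the kernel immediately on free(), instead of being
     * cached on the sbrk/mmap'd heaps.  Setting the threshold explicitly
     * also disables glibc's dynamic threshold scaling. */
    mallopt(M_MMAP_THRESHOLD, 128 * 1024);

    /* ... rest of the program ... */
    return 0;
}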

> maybe some of these buffers sticking around for the duration of the
> connection could be pooled and shared?
 
They are pooled and shared if they are cached by the system memory allocator.

All of tcmalloc, jemalloc, and glibc malloc attempt to cache the userspace
requests with different algorithms that match given workloads.

-- 
Cheers,
Carlos.


Re: pipewire memory usage

2021-12-14 Thread Dominique Martinet
Wim Taymans wrote on Tue, Dec 14, 2021 at 09:09:30AM +0100:
> I can get it as high as that too but then it stays there and doesn't really
> grow anymore so it does not seem like
> it's leaking. Maybe it's the way things are done, there is a lot of dlopen
> and memfd/mmap.

Right, I've had a look with massif and it looks like the memory is
reused properly -- when the next batch of clients comes in, all previously
used memory is freed and promptly reallocated for the new clients.

The problem seems to be more that there is no sign of memory being
released even after some time: I've left pipewire-pulse running for a while
and it stays at ~300MB of RSS the whole time.
Connecting a single new client at this point does increase memory
(+8-9MB), so it doesn't look like it's reusing the old memory; but
looking at massif the numbers all fell back down close to 0, so everything
-is- freed successfully... which is a bit weird.


FWIW, here's a massif output file if you're curious.
I ran 100 clients, 100 clients, 1 client for a while, then 100 clients
again:
https://gaia.codewreck.org/local/massif.out.pipewire


I've double-checked with traces in load_spa_handle/unref_handle and it
is all free()d as soon as the client disconnects, so there's no reason
the memory would still be used... And I think we're just looking at some
malloc optimisation not releasing the memory.

To confirm, I've tried starting pipewire-pulse with jemalloc loaded,
LD_PRELOAD=/usr/lib64/libjemalloc.so , and interestingly after the 100
clients exit the memory stays at ~300-400MB, but as soon as a single new
client connects it drops back down to 20MB, so that seems to confirm it.
(with tcmalloc it stays all the way up at 700+MB...)

So I guess we're just chasing after artifacts from the allocator, and
it'll be hard to tell which it is when I happen to see pipewire-pulse
with high memory later on...



That all being said, I agree with Zbigniew that the allocated amount per
client looks big.

From what I can see the big allocations are (didn't look at lifetime of each
alloc):
 - load_spa_handle for audioconvert/libspa-audioconvert allocs 3.7MB
 - pw_proxy_new allocates 590k
 - reply_create_playback_stream allocates 4MB
 - spa_buffer_alloc_array allocates 1MB from negotiate_buffers
 - spa_buffer_alloc_array allocates 256K x2 + 128K
   from negotiate_link_buffers

maybe some of these buffers sticking around for the duration of the
connection could be pooled and shared?

-- 
Dominique


Re: pipewire memory usage

2021-12-14 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Dec 14, 2021 at 09:09:30AM +0100, Wim Taymans wrote:
> I can get it as high as that too but then it stays there and doesn't really
> grow anymore so it does not seem like
> it's leaking. Maybe it's the way things are done, there is a lot of dlopen
> and memfd/mmap.

This doesn't sound right. 340 *MB* is just too much.

It might be useful to look at smem to get the USS:
$ smem -P '\bpipewire'
  PID User     Command                      Swap      USS      PSS      RSS
 2450 zbyszek  /usr/bin/pipewire            2288    22592    23265    28700
 2452 zbyszek  /usr/bin/pipewire-pulse      3412   241784   242097   246924

So 241 MB of non-shared data seems like a lot.
It seems like pipewire-pulse starts with reasonable memory use,
but then grows quite a lot over time.
(This is still with 0.3.40. I'm upgrading now and I'll report if this changes
significantly.)

Zbyszek


Re: pipewire memory usage

2021-12-14 Thread Wim Taymans
I can get it as high as that too, but then it stays there and doesn't really
grow anymore, so it does not seem like it's leaking. Maybe it's the way
things are done; there is a lot of dlopen and memfd/mmap.

Wim

On Mon, Dec 13, 2021 at 11:42 PM Dominique Martinet wrote:

> Wim Taymans wrote on Mon, Dec 13, 2021 at 09:22:42AM +0100:
> > There was a leak in 0.3.40 that could explain this, see
> > https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1840
> >
> > Upcoming 0.3.41 will have this fixed. At least I can't reproduce this
> > anymore with the test you posted below.
>
> Thanks for testing!
>
> I've also taken the time of rebuilding pipewire from source (current
> master, just on top of 0.3.41) but unfortunately it doesn't look like it
> solves the issue here, so it must be something specific in my
> environment.
>
> fresh start:
> myuser  335184  1.0  0.0  56384 11596 ?S /opt/pipewire/bin/pipewire
> myuser  335197  2.7  0.0  36000 11480 ?S /usr/bin/pipewire-media-session
> myuser  335208  0.5  0.0  31312  6428 ?S /opt/pipewire/bin/pipewire-pulse
>
> after running 100 mpv like last time:
> myuser  335184  5.3  0.3 174836 63360 ?S /opt/pipewire/bin/pipewire
> myuser  335197  1.6  0.0  36708 12336 ?S /usr/bin/pipewire-media-session
> myuser  335208  9.2  2.1 666020 341196 ?   S /opt/pipewire/bin/pipewire-pulse
>
>
>
> `pactl stat` is happy though:
> Currently in use: 89 blocks containing 3.4 MiB bytes total.
> Allocated during whole lifetime: 89 blocks containing 3.4 MiB bytes total.
> Sample cache size: 0 B
>
> I've run out of free time this morning but since it's not a known issue
> I'll debug this a bit more after getting home tonight and report an
> issue proper.
> Since it's easy to reproduce here I'm sure I'll find the cause in no
> time...
>
> --
> Dominique


Re: pipewire memory usage

2021-12-13 Thread Dominique Martinet
Wim Taymans wrote on Mon, Dec 13, 2021 at 09:22:42AM +0100:
> There was a leak in 0.3.40 that could explain this, see
> https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1840
> 
> Upcoming 0.3.41 will have this fixed. At least I can't reproduce this
> anymore with the test you posted below.

Thanks for testing!

I've also taken the time to rebuild pipewire from source (current
master, just on top of 0.3.41), but unfortunately it doesn't look like it
solves the issue here, so it must be something specific in my
environment.

fresh start:
myuser  335184  1.0  0.0  56384 11596 ?S /opt/pipewire/bin/pipewire
myuser  335197  2.7  0.0  36000 11480 ?S /usr/bin/pipewire-media-session
myuser  335208  0.5  0.0  31312  6428 ?S /opt/pipewire/bin/pipewire-pulse

after running 100 mpv like last time:
myuser  335184  5.3  0.3 174836 63360 ?S /opt/pipewire/bin/pipewire
myuser  335197  1.6  0.0  36708 12336 ?S /usr/bin/pipewire-media-session
myuser  335208  9.2  2.1 666020 341196 ?   S /opt/pipewire/bin/pipewire-pulse


`pactl stat` is happy though:
Currently in use: 89 blocks containing 3.4 MiB bytes total.
Allocated during whole lifetime: 89 blocks containing 3.4 MiB bytes total.
Sample cache size: 0 B

I've run out of free time this morning but since it's not a known issue
I'll debug this a bit more after getting home tonight and report an
issue proper.
Since it's easy to reproduce here I'm sure I'll find the cause in no
time...

-- 
Dominique


Re: pipewire memory usage

2021-12-13 Thread Wim Taymans
There was a leak in 0.3.40 that could explain this, see
https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1840

Upcoming 0.3.41 will have this fixed. At least I can't reproduce this
anymore with the test you posted below.

Wim

On Sun, Dec 12, 2021 at 12:49 PM Dominique Martinet wrote:

> Fabio Valentini wrote on Sun, Dec 12, 2021 at 12:25:11PM +0100:
> > > on my laptop, /usr/bin/pipewire uses 56M RSS, 5M SHR,
> > > but /usr/bin/pipewire-pulse uses 347M RSS, 4M SHR.
> > > 56M is okayish, but 347M seems a lot. I think firefox is going
> > > through pipewire-pulse, so that interface might be getting more use
> > > than native pipewire. But what are the expected values for this?
> >
> > That certainly seems high to me. On my system I see values like
> > - pipewire: resident memory ~18M, shared memory ~8M
> > - pipewire-pulse: resident memory ~19M, shared memory ~6M
> > even while playing audio from firefox, for example.
> >
> > Where did you get those RSS values?
> > I checked in gnome-system-monitor and with ps -aux, and both reported
> > the same values for resident memory (RSS).
>
> To add another datapoint I also have always seen pretty high RSS usage
> from pipewire-pulse:
>
> $ ps aux|grep pipewire
> myuser   14645  0.5  0.4 198772 79100 ?Ssl  Dec07  38:08
> /usr/bin/pipewire
> myuser   14646  0.6  3.4 713516 555756 ?   SLsl Dec07  45:29
> /usr/bin/pipewire-pulse
> myuser   14652  0.0  0.0  38112 12228 ?Sl   Dec07   0:04
> /usr/bin/pipewire-media-session
>
> (so 555MB RSS)
>
>
> I've also noticed that the background cpu% usage seems to increase, so
> I'd say the memory is still reachable somewhere and there must be some
> list getting big and skimmed through from time to time; restarting the
> pipewire processes when I start seeing them climb too high in htop makes
> them behave again...
>
>
> ... Okay, so there's an obvious leak when pulse clients connect and
> leave on pipewire-0.3.40-1.fc34.x86_64, about 3MB per client (!).
>
>
> I've just pkill pipewire to restart it:
> asmadeus  293661  0.5  0.0  31612  7276 ?S /usr/bin/pipewire-pulse
> asmadeus  293675  1.5  0.0  56092 11488 ?S /usr/bin/pipewire
> asmadeus  293678  5.0  0.0  37528 12364 ?S /usr/bin/pipewire-media-session
>
> then ran mpv in a tight loop, 100 times:
> for i in {1..100}; do mpv somefile.opus -length 1 & done; wait
> (pulseaudio output)
>
> and rss climbed to 313MB:
> asmadeus  293661  2.6  1.9 689228 313832 ?   S /usr/bin/pipewire-pulse
> asmadeus  293675  1.8  0.4 188392 76844 ?S /usr/bin/pipewire
> asmadeus  293678  0.5  0.0  38168 12672 ?S /usr/bin/pipewire-media-session
>
> another 100 times brings it up to 652'672.
>
> I had noticed that firefox likes to create new output streams and closes
> them every time there's a video, even if sound is muted, so I'd think
> that on some sites it would behave quite similarly to that.
>
>
> I've uploaded pw-dump after this if that's any help:
> https://gaia.codewreck.org/local/tmp/pw-dump
>
> But it should be easy to reproduce, I don't think I have anything
> too specific sound-wise here...
>
>
> Happy to open an issue upstream if there isn't one yet, I haven't had a
> look. Trying to reproduce on master would likely be first.
> Please let me know if you take over or I'll look into it further over
> the next few days...
> --
> Dominique Martinet | Asmadeus


Re: pipewire memory usage

2021-12-12 Thread Dominique Martinet
Fabio Valentini wrote on Sun, Dec 12, 2021 at 12:25:11PM +0100:
> > on my laptop, /usr/bin/pipewire uses 56M RSS, 5M SHR,
> > but /usr/bin/pipewire-pulse uses 347M RSS, 4M SHR.
> > 56M is okayish, but 347M seems a lot. I think firefox is going
> > through pipewire-pulse, so that interface might be getting more use
> > than native pipewire. But what are the expected values for this?
> 
> That certainly seems high to me. On my system I see values like
> - pipewire: resident memory ~18M, shared memory ~8M
> - pipewire-pulse: resident memory ~19M, shared memory ~6M
> even while playing audio from firefox, for example.
> 
> Where did you get those RSS values?
> I checked in gnome-system-monitor and with ps -aux, and both reported
> the same values for resident memory (RSS).

To add another datapoint I also have always seen pretty high RSS usage
from pipewire-pulse:

$ ps aux|grep pipewire
myuser   14645  0.5  0.4 198772 79100 ?Ssl  Dec07  38:08 
/usr/bin/pipewire
myuser   14646  0.6  3.4 713516 555756 ?   SLsl Dec07  45:29 
/usr/bin/pipewire-pulse
myuser   14652  0.0  0.0  38112 12228 ?Sl   Dec07   0:04 
/usr/bin/pipewire-media-session

(so 555MB RSS)


I've also noticed that the background cpu% usage seems to increase, so
I'd say the memory is still reachable somewhere and there must be some
list getting big and skimmed through from time to time; restarting the
pipewire processes when I start seeing them climb too high in htop makes
them behave again...


... Okay, so there's an obvious leak when pulse clients connect and
leave on pipewire-0.3.40-1.fc34.x86_64, about 3MB per client (!).


I've just pkill pipewire to restart it:
asmadeus  293661  0.5  0.0  31612  7276 ?S /usr/bin/pipewire-pulse
asmadeus  293675  1.5  0.0  56092 11488 ?S /usr/bin/pipewire
asmadeus  293678  5.0  0.0  37528 12364 ?S /usr/bin/pipewire-media-session

then ran mpv in a tight loop, 100 times:
for i in {1..100}; do mpv somefile.opus -length 1 & done; wait
(pulseaudio output)

and rss climbed to 313MB:
asmadeus  293661  2.6  1.9 689228 313832 ?   S /usr/bin/pipewire-pulse
asmadeus  293675  1.8  0.4 188392 76844 ?S /usr/bin/pipewire
asmadeus  293678  0.5  0.0  38168 12672 ?S /usr/bin/pipewire-media-session

another 100 times brings it up to 652'672.

I had noticed that firefox likes to create new output streams and closes
them every time there's a video, even if sound is muted, so I'd think
that on some sites it would behave quite similarly to that.


I've uploaded pw-dump after this if that's any help:
https://gaia.codewreck.org/local/tmp/pw-dump

But it should be easy to reproduce, I don't think I have anything
too specific sound-wise here...


Happy to open an issue upstream if there isn't one yet, I haven't had a
look. Trying to reproduce on master would likely be the first step.
Please let me know if you take over or I'll look into it further over
the next few days...
-- 
Dominique Martinet | Asmadeus


Re: pipewire memory usage

2021-12-12 Thread Zbigniew Jędrzejewski-Szmek
On Sun, Dec 12, 2021 at 12:25:11PM +0100, Fabio Valentini wrote:
> On Sat, Dec 11, 2021 at 6:46 PM Zbigniew Jędrzejewski-Szmek wrote:
> >
> > Hi,
> >
> > on my laptop, /usr/bin/pipewire uses 56M RSS, 5M SHR,
> > but /usr/bin/pipewire-pulse uses 347M RSS, 4M SHR.
> > 56M is okayish, but 347M seems a lot. I think firefox is going
> > through pipewire-pulse, so that interface might be getting more use
> > than native pipewire. But what are the expected values for this?
> 
> That certainly seems high to me. On my system I see values like
> - pipewire: resident memory ~18M, shared memory ~8M
> - pipewire-pulse: resident memory ~19M, shared memory ~6M
> even while playing audio from firefox, for example.
> 
> Where did you get those RSS values?
> I checked in gnome-system-monitor and with ps -aux, and both reported
> the same values for resident memory (RSS).

I used htop. But 'ps -o user,pid,vsz,rss,share,command' gives similar
numbers.

Zbyszek


Re: pipewire memory usage

2021-12-12 Thread Fabio Valentini
On Sat, Dec 11, 2021 at 6:46 PM Zbigniew Jędrzejewski-Szmek wrote:
>
> Hi,
>
> on my laptop, /usr/bin/pipewire uses 56M RSS, 5M SHR,
> but /usr/bin/pipewire-pulse uses 347M RSS, 4M SHR.
> 56M is okayish, but 347M seems a lot. I think firefox is going
> through pipewire-pulse, so that interface might be getting more use
> than native pipewire. But what are the expected values for this?

That certainly seems high to me. On my system I see values like
- pipewire: resident memory ~18M, shared memory ~8M
- pipewire-pulse: resident memory ~19M, shared memory ~6M
even while playing audio from firefox, for example.

Where did you get those RSS values?
I checked in gnome-system-monitor and with ps -aux, and both reported
the same values for resident memory (RSS).

Fabio


pipewire memory usage

2021-12-11 Thread Zbigniew Jędrzejewski-Szmek
Hi,

on my laptop, /usr/bin/pipewire uses 56M RSS, 5M SHR,
but /usr/bin/pipewire-pulse uses 347M RSS, 4M SHR.
56M is okayish, but 347M seems a lot. I think firefox is going
through pipewire-pulse, so that interface might be getting more use
than native pipewire. But what are the expected values for this?

Zbyszek