On 24.06.22 at 16:13, Peter Lieven wrote:
On 23.06.22 at 12:59, Ilya Dryomov wrote:
On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven <p...@kamp.de> wrote:
Am 22.06.22 um 15:46 schrieb Josh Baergen:
Hey Peter,

I found relatively large allocations in the qemu smaps and checked their 
contents. They contained several hundred repetitions of OSD and pool names. We 
use the default builds on Ubuntu 20.04. Is there a special memory allocator in 
place that might not clean up properly?
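
A rough sketch of how such regions can be located, assuming Linux 
/proc/<pid>/smaps semantics; the PID argument and the 1 MB threshold are 
placeholders:

    #!/usr/bin/env python3
    # Sketch: list anonymous mappings with a large Private_Dirty footprint.
    import re
    import sys

    pid = sys.argv[1]
    threshold_kb = 1024  # only report regions with >= 1 MB of private dirty data

    # smaps region headers look like: "7f00...-7f01... rw-p 00000000 00:00 0 [name]"
    header = re.compile(r"^([0-9a-f]+-[0-9a-f]+)\s+(\S+)\s+\S+\s+\S+\s+\S+\s*(.*)$")

    region = None
    with open(f"/proc/{pid}/smaps") as smaps:
        for line in smaps:
            m = header.match(line)
            if m:
                region = (m.group(1), m.group(2), m.group(3) or "[anon]")
            elif line.startswith("Private_Dirty:") and region:
                kb = int(line.split()[1])
                if kb >= threshold_kb and region[2] == "[anon]":
                    print(region[0], region[1], kb, "kB private dirty")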
I'm sure you would have noticed this and mentioned it if it was so -
any chance the contents of these regions look like log messages of
some kind? I recently tracked down a high client memory usage that
looked like a leak that turned out to be a broken config option
resulting in higher in-memory log retention:
https://tracker.ceph.com/issues/56093. AFAICT it affects Nautilus+.
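
For reference: if the option involved is the in-memory log retention knob 
log_max_recent - which is an assumption here, so verify against the tracker 
issue - capping it on the client side would look roughly like this in 
ceph.conf:

    [client]
        # restore the old client-side cap on in-memory log entries
        log_max_recent = 500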
Hi Josh, hi Ilya,


it seems we were in fact facing two leaks with 14.x. Our long-running VMs with 
librbd 14.x have several million items in the osdmap mempool.
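
For cross-checking: the mempool numbers can be read from a librbd client's 
admin socket, if one is configured via admin_socket in the [client] section. 
A sketch, assuming the usual dump_mempools JSON layout and a placeholder 
socket path:

    #!/usr/bin/env python3
    # Sketch: print osdmap-related mempool stats of a running librbd client.
    import json
    import subprocess

    ASOK = "/var/run/ceph/client.admin.asok"  # placeholder path

    out = subprocess.check_output(["ceph", "--admin-daemon", ASOK, "dump_mempools"])
    by_pool = json.loads(out)["mempool"]["by_pool"]
    for name in ("osdmap", "osdmap_mapping"):
        if name in by_pool:
            print(name, by_pool[name]["items"], "items,", by_pool[name]["bytes"], "bytes")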

In our testing environment with 15.x I see no unbounded increase in the osdmap 
mempool (I compared this to a second dev host with a 14.x client, where I do 
see the increase with my tests).

I still see leaking memory when I generate a lot of osdmap changes, but that 
in fact seems to be log messages - thanks Josh.


So I would appreciate it if #56093 were backported to Octopus before its final 
release.
I picked up Josh's PR that was sitting there unnoticed, but I'm not sure
it is the issue you are hitting.  I think Josh's change just resurrects
the behavior where clients stored only up to 500 log entries instead of
up to 10000 (the default for daemons).  There is no memory leak there,
just a difference in how much memory is legitimately consumed.  The
usage is bounded either way.

However, in your case the usage is slowly but constantly growing.
In the original post you said that it was observed both on 14.2.22 and
15.2.16.  Are you saying that you are no longer seeing it in 15.x?

After I understood the background of Josh's issue, I can confirm that I still 
see increasing memory usage which is caused neither by osdmap items nor by log 
entries. There must be something else going on.


I still see increased memory (heap) usage. Might it just be heap 
fragmentation?
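
One way to test the fragmentation theory, assuming the QEMU process uses 
glibc malloc: attach gdb on a test VM (not production) and ask glibc to 
release free arena pages back to the kernel, then compare RSS before and 
after:

    gdb -p <qemu-pid> --batch -ex 'call (int) malloc_trim(0)'

If RSS drops noticeably, the growth is retained free heap rather than a leak.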

We mainly see data from inside the VM in these memory areas (possibly data 
from buffered writes), but also some librbd data.

Is it possible that data from buffered writes is not always freed properly?

The dirty regions I see all range from a few MB up to 64 MB.
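
If buffered writes are the suspect, the librbd writeback cache would be the 
first thing to rule out - it is bounded by rbd_cache_size, 32 MB by default 
if I remember correctly. The effective values can be read from the same admin 
socket; a sketch with a placeholder socket path:

    #!/usr/bin/env python3
    # Sketch: show the effective librbd cache settings of a running client.
    import json
    import subprocess

    ASOK = "/var/run/ceph/client.admin.asok"  # placeholder path

    out = subprocess.check_output(["ceph", "--admin-daemon", ASOK, "config", "show"])
    cfg = json.loads(out)
    for key in ("rbd_cache", "rbd_cache_size", "rbd_cache_max_dirty"):
        print(key, "=", cfg.get(key))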


Thanks

Peter

