On 10/30/22 07:19, Nico Schottelius wrote:

Good morning ceph-users,

we currently have one OSD, based on a SATA SSD (750 GB raw), that consumes
around 42 GB of RAM. The cluster status is HEALTH_OK, with no rebalancing or
PG changes.

I can go ahead and just kill it, but I was wondering if there is an easy
way to figure out why it is consuming so much memory and to help the
OSD release it. It has been using 40+ GB of memory for a few
days now, without any known cluster changes.

This is based on ceph 16.2.10 + rook.
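A few standard ceph commands can narrow down where the memory is going before killing the OSD. This is a hedged sketch: the OSD id (osd.7) is hypothetical, and under rook the `ceph daemon` admin-socket call has to run inside the OSD pod itself, not the toolbox.

```shell
# Dump the OSD's internal memory pools (BlueStore caches, pglog, etc.)
# for a hypothetical OSD id 7 -- run inside that OSD's pod under rook.
ceph daemon osd.7 dump_mempools

# tcmalloc heap statistics; these can show memory the allocator is
# holding but has not returned to the OS.
ceph tell osd.7 heap stats

# Check the configured memory target the OSD is supposed to honor
# (default is 4 GiB).
ceph config get osd.7 osd_memory_target
```

If `heap stats` shows a large amount of free memory held by tcmalloc, `ceph tell osd.7 heap release` may return it to the OS without restarting the daemon.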

It might be this bug [1], with the backport [2] landing in 16.2.11. Is this the only OSD showing such high memory usage?
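To answer Stefan's question about whether other OSDs are affected, per-OSD memory use can be compared cluster-wide. A sketch, assuming a default rook namespace of rook-ceph and a working metrics-server (both assumptions):

```shell
# Compare RSS across all OSD pods under rook
# (assumes namespace rook-ceph and metrics-server installed).
kubectl -n rook-ceph top pods -l app=rook-ceph-osd

# Alternatively, ask every OSD daemon for its heap statistics.
ceph tell osd.* heap stats
```

One clearly outlying OSD would point toward the per-daemon leak tracked in [1], whereas uniformly high usage would suggest a configuration issue such as an oversized osd_memory_target.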

Gr. Stefan

[1]: https://tracker.ceph.com/issues/53729
[2]: https://tracker.ceph.com/issues/55631
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io