[ceph-users] Re: mgr memory usage constantly increasing

2023-05-23 Thread Tobias Hachmer
Hi Eugen, Am 5/23/23 um 12:50 schrieb Eugen Block: there was a thread [1] just a few weeks ago. Which mgr modules are enabled in your case? Also the mgr caps seem to be relevant here. [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
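For anyone following along, the module/caps check Eugen mentions can be done roughly like this (a hedged sketch: commands as in Quincy; `jq` is assumed to be installed, and the exact caps output varies by deployment):

```shell
# List enabled and available mgr modules; past reports have tied mgr
# memory growth to specific modules (e.g. prometheus)
ceph mgr module ls

# Show the caps of the active mgr's auth key -- the linked thread
# suggests the mgr caps may be relevant here
ceph auth get mgr.$(ceph mgr dump -f json | jq -r .active_name)
```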

[ceph-users] Re: mgr memory usage constantly increasing

2023-05-23 Thread Tobias Hachmer
Hello list, forgot to mention the Ceph version: 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable) Greetings Tobias

[ceph-users] mgr memory usage constantly increasing

2023-05-23 Thread Tobias Hachmer
: 60 up (since 6d), 60 in (since 8w)
  tcmu-runner: 2 portals active (2 hosts)
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 2161 pgs
    objects: 15.42M objects, 45 TiB
    usage:   135 TiB used, 75 TiB / 210 TiB avail
    pgs:     2161 active+clean
Thanks and kind regards Tobias

[ceph-users] Re: cephfs - max snapshot limit?

2023-04-27 Thread Tobias Hachmer
Hi sur5r, Am 4/27/23 um 10:33 schrieb Jakob Haufe: > On Thu, 27 Apr 2023 09:07:10 +0200 > Tobias Hachmer wrote: > >> But we observed that a maximum of 50 snapshots is preserved. If a new snapshot is >> created, the oldest (51st) is deleted. >> >> Is there a limit for

[ceph-users] cephfs - max snapshot limit?

2023-04-27 Thread Tobias Hachmer
Hello, we are running a 3-node Ceph cluster with version 17.2.6. For CephFS snapshots we have configured the following snap schedule with retention: /PATH 2h 72h15d6m But we observed that a maximum of 50 snapshots is preserved. If a new snapshot is created, the oldest (51st) is deleted. Is there a
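A note for readers of the archive: if I recall correctly, the mgr snap_schedule module has capped the number of retained scheduled snapshots per path in some releases, independently of the MDS-side `mds_max_snaps_per_dir` setting (default 100), which would match the observed 50-snapshot ceiling. A hedged sketch for inspecting both sides (subcommand availability may vary by release):

```shell
# Show the configured schedules and retention for the path in question
ceph fs snap-schedule status /PATH

# MDS-side per-directory snapshot limit (default 100); the snap_schedule
# module's own internal cap, where present, is separate from this value
ceph config get mds mds_max_snaps_per_dir
```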