Hi Eugen,
On 5/23/23 at 12:50, Eugen Block wrote:
there was a thread [1] just a few weeks ago. Which mgr modules are
enabled in your case? The mgr caps also seem to be relevant here.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
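In case it's useful, both can be checked quickly from the CLI (a
sketch; "mgr.a" below is just a placeholder for your actual mgr
daemon's auth entity):

  # list which mgr modules are enabled/disabled
  ceph mgr module ls

  # show the caps of the mgr's auth entity (replace "a" with your mgr id)
  ceph auth get mgr.a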
Hello list,
forgot to mention the Ceph version:
17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
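(That string is the format printed by, e.g., the following; a sketch,
run from any node with admin access:

  # version of the locally installed ceph binary
  ceph -v

  # versions of all running daemons across the cluster, as JSON
  ceph versions
)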
Greetings
Tobias
    osd:         60 osds: 60 up (since 6d), 60 in (since 8w)
    tcmu-runner: 2 portals active (2 hosts)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 2161 pgs
    objects: 15.42M objects, 45 TiB
    usage:   135 TiB used, 75 TiB / 210 TiB avail
    pgs:     2161 active+clean
Thanks and kind regards
Tobias
Hi sur5r,
On 4/27/23 at 10:33, Jakob Haufe wrote:
> On Thu, 27 Apr 2023 09:07:10 +0200
> Tobias Hachmer wrote:
>
>> But we observed that a maximum of 50 snapshots is preserved. If a new
>> snapshot is created, the oldest (51st) one is deleted.
>>
>> Is there a limit for the number of snapshots preserved?
Hello,
we are running a 3-node Ceph cluster with version 17.2.6.
For CephFS snapshots we have configured the following snap schedule with
retention:
/PATH 2h 72h15d6m
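(For reference, that line corresponds to commands along these lines; a
sketch, with /PATH standing in for the real directory:

  # snapshot the path every 2 hours
  ceph fs snap-schedule add /PATH 2h

  # attach the retention spec shown above
  ceph fs snap-schedule retention add /PATH 72h15d6m

  # confirm schedule and retention are active
  ceph fs snap-schedule status /PATH
)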
But we observed that a maximum of 50 snapshots is preserved. If a new
snapshot is created, the oldest (51st) one is deleted.
Is there a limit for the number of snapshots preserved?
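As a data point, this is how the snapshot count on the path can be
verified (a sketch; it assumes the default CephFS .snap virtual
directory):

  # count the snapshots currently present under the scheduled path
  ls /PATH/.snap | wc -l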