Hi Torkil,
Maybe I'm overlooking something, but how about just renaming the
datacenter buckets?
Best regards,
Gunnar
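
If a plain rename is really all that's needed, CRUSH supports it in place. A minimal sketch; the bucket names "DC1" and "DC1-new" are placeholders, not names from this cluster:

```shell
# Rename a CRUSH bucket in place. The bucket ID and its contained
# items are unchanged, so the rename itself triggers no data movement.
ceph osd crush rename-bucket DC1 DC1-new

# Verify the resulting hierarchy
ceph osd crush tree
```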
--- Original Message ---
Subject: [ceph-users] Re: Safe to move misplaced hosts between
failure domains in the crush tree?
From: "Torkil Svensgaard"
To: "Matthias Grandl"
CC:
Hi Erich,
I'm not sure about this specific error message, but "ceph fs status"
did occasionally fail for me around the end of last year and the
beginning of this year. Restarting ALL mon, mgr AND mds daemons fixed
it at the time.
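
On an orchestrator-managed (cephadm) cluster, that restart can be done per service; a sketch, assuming cephadm, with `<fs_name>` as a placeholder for the CephFS service name:

```shell
# Restart all daemons of each service via the orchestrator
ceph orch restart mon
ceph orch restart mgr
ceph orch restart mds.<fs_name>   # substitute your CephFS name

# Confirm the daemons came back up
ceph orch ps
```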
Best regards,
Gunnar
===
Gunnar
> 142126 0
> 5.47 220849 0 0 0 0 926233448448 0 0 5592 0 5592
> active+clean+scrubbing+deep 2024-03-12T08:10:39.413186+
> 128382'15653864 128383:20403071 [16,15,20,0,13,21] 16
> [16,15,20,0,13,
Hi,
I just wanted to mention that I am running a cluster on Reef 18.2.1
with the same issue. Four PGs have been starting deep scrubs but not
finishing them since mid-February. In the pg dump they are shown as
scheduled for deep scrub. They sometimes change their status from
active+clean to
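
For what it's worth, a stuck PG can be inspected and nudged by hand; a sketch, where 5.47 is just the pg ID from the quoted dump (substitute your own):

```shell
# List PGs currently in a scrubbing state
ceph pg ls scrubbing

# Manually request a deep scrub of one specific PG
ceph pg deep-scrub 5.47
```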
Hi Eugen,
thank you for your contribution. I will definitely think about
leaving a number of spare hosts, very good point.
My main problem remains the health warning "too few PGs". This
implies that the PG number in the pool is too low, and I can't
increase it with an erasure profile.
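
For comparison, the usual way to inspect and attempt to raise pg_num on an existing pool is shown below; `mypool` is a placeholder pool name, and whether the increase is accepted for this particular EC pool is exactly the open question here:

```shell
# Check the pool's current PG count and what the autoscaler thinks
ceph osd pool get mypool pg_num
ceph osd pool autoscale-status

# Attempt to raise pg_num (mypool is a placeholder)
ceph osd pool set mypool pg_num 256
```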
I