[ceph-users] Re: Safe to move misplaced hosts between failure domains in the crush tree?

2024-06-13 Thread Bandelow, Gunnar
Hi Torkil, maybe I'm overlooking something, but how about just renaming the datacenter buckets? Best regards, Gunnar --- Original Message --- Subject: [ceph-users] Re: Safe to move misplaced hosts between failure domains in the crush tree? From: "Torkil Svensgaard" To: "Matthias Grandl" CC:
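
A minimal sketch of the renaming approach, assuming datacenter buckets named dc1 and dc2 (the names are illustrative, not taken from the original thread):

  # Show the current CRUSH tree to confirm bucket names and types
  ceph osd crush tree

  # Swap the two datacenter bucket names via a temporary name.
  # A rename keeps the bucket ID and its contents, so no hosts move
  # between buckets and it should not trigger data movement by itself.
  ceph osd crush rename-bucket dc1 dc-tmp
  ceph osd crush rename-bucket dc2 dc1
  ceph osd crush rename-bucket dc-tmp dc2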

[ceph-users] Re: 'ceph fs status' no longer works?

2024-05-02 Thread Bandelow, Gunnar
Hi Erich, I'm not sure about this specific error message, but "ceph fs status" did sometimes fail for me at the end of last year/the beginning of this year. Restarting ALL mon, mgr AND mds daemons fixed it at the time. Best regards, Gunnar === Gunnar
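
As a hedged illustration of the restart, assuming a cephadm-managed cluster (the MDS service name is an example, check "ceph orch ls" for the real one):

  # Restart all monitor and manager daemons managed by the orchestrator
  ceph orch restart mon
  ceph orch restart mgr
  # Restart the MDS daemons for the filesystem's service
  ceph orch restart mds.cephfs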

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-22 Thread Bandelow, Gunnar
(Quoted "ceph pg dump" excerpt, reflowed:) PG 5.47: 220849 objects, 926233448448 bytes, log 5592, state active+clean+scrubbing+deep, state stamp 2024-03-12T08:10:39.413186, version 128382'15653864, reported 128383:20403071, up [16,15,20,0,13,21], up primary 16, acting [16,15,20,0,13,
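
One way to inspect such a PG in more detail; the PG id 5.47 is taken from the quoted dump, the rest is a generic sketch:

  # Query the full state of the PG, including scrub-related fields
  ceph pg 5.47 query
  # List only PGs that are currently in a scrubbing state
  ceph pg dump pgs_brief | grep scrubbing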

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-20 Thread Bandelow, Gunnar
Hi, I just wanted to mention that I am running a cluster on Reef 18.2.1 with the same issue. Since mid-February, 4 PGs start to deep scrub but don't finish. In the pg dump they are shown as scheduled for deep scrub. They sometimes change their status from active+clean to
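
A hedged sketch of commands sometimes used to investigate and nudge such PGs (the PG id is a placeholder):

  # Show health detail for PGs not (deep-)scrubbed in time
  ceph health detail
  # Manually request a deep scrub on one of the affected PGs
  ceph pg deep-scrub 5.47
  # Check how many concurrent scrubs each OSD allows
  ceph config get osd osd_max_scrubs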

[ceph-users] Re: Erasure Profile Pool caps at pg_num 1024

2020-02-17 Thread Bandelow, Gunnar
Hi Eugen, thank you for your contribution. I will definitely think about leaving a number of spare hosts, very good point. My main problem remains the health warning "Too few PGs". This implies that my PG number in the pool is too low, and I can't increase it with an erasure profile. I
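
For context, a hedged sketch of checking and raising pg_num on a pool; the pool name and target value are placeholders, not from the original thread:

  # Check the current PG count and the autoscaler's suggestion
  ceph osd pool get mypool pg_num
  ceph osd pool autoscale-status
  # Attempt to raise pg_num (and pgp_num) on the pool
  ceph osd pool set mypool pg_num 2048
  ceph osd pool set mypool pgp_num 2048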