[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-15 Thread Marco Pizzolo
Thanks Linh Vu, so it sounds like I should be prepared to bounce the OSDs and/or hosts, but I haven't heard anyone yet say that it won't work, so I guess there's that... On Tue, Dec 14, 2021 at 7:48 PM Linh Vu wrote: > I haven't tested this in Nautilus 14.2.22 (or any nautilus) but in >
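For reference, the size change under discussion is applied per pool. A minimal sketch (the pool name here is a placeholder; substitute your own):

```shell
# Reduce the replica count of a pool from 3 to 2.
# "mypool" is a placeholder pool name -- substitute your own.
ceph osd pool set mypool size 2

# The extra replicas are trimmed asynchronously; watch progress:
ceph -s
```

Whether OSDs or hosts need bouncing afterwards is exactly the open question in the thread; the command itself takes effect without a restart.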

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Marco Pizzolo
Hi Joachim, Understood on the risks. Aside from the alt. cluster, we have 3 other copies of the data outside of Ceph, so I feel pretty confident that it's a question of time to repopulate and not data loss. That said, I would be interested in your experience on what I'm trying to do if you've

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Marco Pizzolo
Hi Martin, Agreed on the min_size of 2. I have no intention of worrying about uptime in the event of a host failure. Once the size of 2 takes effect (and I'm unsure how long it will take), it is our intention to evacuate all OSDs in one of 4 hosts, in order to migrate the host to the new cluster,
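The host evacuation described above can be sketched as follows, assuming a hypothetical host name `ceph-node4` (substitute the CRUSH host bucket you intend to drain):

```shell
# List the OSD ids under the host's CRUSH bucket and mark each one "out",
# which triggers data migration off those OSDs.
# "ceph-node4" is a hypothetical host name -- substitute your own.
for id in $(ceph osd ls-tree ceph-node4); do
  ceph osd out "$id"
done

# Wait for backfill/recovery to complete before removing the OSDs:
ceph -s
```

Draining with `ceph osd out` keeps the OSDs running as data sources during backfill, which is safer than stopping them outright while the pool is at size 2.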

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-11 Thread Martin Verges
Hello, avoid size 2 whenever you can. As long as you know that you might lose data, it can be an acceptable risk while migrating the cluster. We have done this multiple times in the past, and it is a valid use case in our opinion. However, make sure to monitor the state and recover as fast as possible.
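While the cluster runs at size 2, the monitoring Martin recommends can be as simple as watching for undersized or degraded placement groups, since at size 2 a single OSD failure leaves only one copy:

```shell
# Overall health and per-issue detail:
ceph health detail

# Placement groups stuck in an undersized or degraded state:
ceph pg dump_stuck undersized
ceph pg dump_stuck degraded

# Poll the cluster status periodically during the migration window:
watch -n 10 ceph -s
```

Any PG that shows up here during the size-2 window should be recovered before continuing the migration.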

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-10 Thread Wesley Dillingham
I would avoid doing this. Size 2 is not where you want to be. Maybe you can give more details about your cluster size and shape and what you are trying to accomplish, and another solution could be proposed. The contents of "ceph osd tree" and "ceph df" would help. Respectfully, *Wes Dillingham*
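The two read-only commands requested above show the cluster's shape and capacity without changing anything:

```shell
# Cluster topology: CRUSH hierarchy of hosts and OSDs,
# with weights and up/down, in/out status.
ceph osd tree

# Raw and per-pool capacity usage.
ceph df

# "detail" adds object counts and quota information per pool.
ceph df detail
```

Sharing this output lets others judge whether there is enough headroom to rebalance at size 3 instead of dropping to size 2.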