Hi Tyler, thanks for clarifying; it makes total sense now.

Hypothetically, if enough mons fail that the cluster loses quorum, how can I
re-initialize the cluster in its current state, or what can be done in that
kind of situation?
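For reference, the majority-quorum rule Tyler describes below can be sketched as a few lines of arithmetic (a minimal illustration, not Ceph code; the function names are my own):

```python
# Sketch of the monitor majority-quorum rule: a Ceph cluster keeps
# quorum only while strictly more than half of its mons are up.

def quorum_size(num_mons: int) -> int:
    """Smallest number of mons that still forms a majority."""
    return num_mons // 2 + 1

def max_failures(num_mons: int) -> int:
    """How many mons can fail before quorum is lost."""
    return num_mons - quorum_size(num_mons)

for n in (3, 4, 5):
    print(f"{n} mons: quorum needs {quorum_size(n)}, "
          f"tolerates {max_failures(n)} failure(s)")
# 3 mons: quorum needs 2, tolerates 1 failure(s)
# 4 mons: quorum needs 3, tolerates 1 failure(s)
# 5 mons: quorum needs 3, tolerates 2 failure(s)
```

This is why 4 mons tolerate no more failures than 3: the majority threshold rises to 3, so losing 2 of 4 leaves only 50%, which is not a majority.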

On Thu, Nov 3, 2022 at 5:00 PM Tyler Brekke <tbre...@digitalocean.com>
wrote:

> Hi Murilo,
>
> Since we need a majority to maintain quorum, when you lost 2 of your 4
> mons you only had 50% available and lost quorum. This is why all
> recommendations specify an odd number of mons: you get no added
> availability with 4 instead of 3. If you had 5 mons, you could lose two
> without losing availability.
>
>
> On Thu, Nov 3, 2022, 2:55 PM Murilo Morais <mur...@evocorp.com.br> wrote:
>
>> Good afternoon everyone!
>>
>> I have a lab with 4 mons and was testing the behavior when a certain
>> number of hosts go offline. As soon as the second one went offline,
>> everything stopped. It would be interesting to have a fifth node so
>> that everything keeps working if two fall, but why did everything stop
>> with only 2 nodes down, when a 3-node cluster would keep working after
>> losing one? Is there no way to get that behavior with 4 nodes?
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>