It would need to be set to 1, so that I/O can continue with only a single
surviving replica.
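
For example (a rough sketch, not taken from your cluster; I'm assuming the
pools are called cephfs_data and cephfs_metadata, check `ceph osd lspools`
for the real names):

    # allow I/O to continue with a single surviving replica
    ceph osd pool set cephfs_data min_size 1
    ceph osd pool set cephfs_metadata min_size 1

Bear in mind that with min_size 1 the pool keeps accepting writes while only
one copy exists, so losing that last OSD before recovery completes means data
loss.
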
On 3 Jul 2016 8:17 a.m., "Willi Fehler" <willi.feh...@t-online.de> wrote:

> Hello David,
>
> so in a 3-node cluster, how should I set min_size if I want the cluster to
> keep working when 2 nodes fail?
>
> Regards - Willi
>
> Am 28.06.16 um 13:07 schrieb David:
>
> Hi,
>
> This is probably the min_size on your CephFS data and/or metadata pool. I
> believe the default is 2; if you have fewer than 2 replicas available, I/O
> will stop. See:
> http://docs.ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
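>
> For example (a rough sketch; substitute your real pool names, `ceph osd
> lspools` will list them):
>
>     ceph osd pool get cephfs_data size        # configured replica count
>     ceph osd pool get cephfs_data min_size    # minimum replicas required for I/O
>     ceph osd pool get cephfs_metadata min_size
>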
>
> On Tue, Jun 28, 2016 at 10:23 AM, willi.feh...@t-online.de <
> willi.feh...@t-online.de> wrote:
>
>> Hello,
>>
>> I'm still very new to Ceph. I've created a small test cluster:
>>
>> ceph-node1: osd0, osd1, osd2
>> ceph-node2: osd3, osd4, osd5
>> ceph-node3: osd6, osd7, osd8
>>
>> My pool for CephFS has a replication count of 3. I powered off 2 nodes
>> (6 OSDs went down), my cluster status became critical, and my Ceph
>> clients (CephFS) ran into a timeout. My data (I had only one file on the
>> pool) was still on one of the active OSDs. Is it expected behaviour that
>> the cluster status becomes critical and the clients run into a timeout?
>>
>>
>>
>> Many thanks for your feedback.
>>
>>
>>
>> Regards - Willi
>>
>>
>> 
>>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
