Hi,

1) We will create the ceph cluster with two OSD nodes by setting "osd pool default size = 2"; after that we can add the third node to the live cluster and change the pool's replication factor from 2 to 3 with # "ceph osd pool set <Pool> size 3".

I hope it's just a test cluster with size 2; don't do that in production if you value your data. There have been many warnings about size 2 on this list. But yes, you could start with size 2 and later increase it. Keep in mind that you'll have to edit the crush rule (or apply your own) to distribute the data evenly across both nodes; the default replicated_rule won't do that.
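A minimal sketch of that approach (the pool name "mypool" and the rule name "replicated_per_host" are assumptions, substitute your own): create a replicated rule with a host failure domain so each copy lands on a different node, point the pool at it, and raise the size once the third node is in.

```shell
# Assumption: pool "mypool" already exists; rule name is arbitrary.
# Create a rule that places at most one copy per host:
ceph osd crush rule create-replicated replicated_per_host default host
# Switch the pool to that rule:
ceph osd pool set mypool crush_rule replicated_per_host
# Later, once the third node has joined, raise the replica count:
ceph osd pool set mypool size 3
```

With size 2 on two hosts this rule keeps the two copies on separate nodes, and it keeps working unchanged when you go to size 3 on three hosts.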

2) Can we create the ceph cluster with 2 OSD nodes while keeping the default replication factor of 3, and add the third OSD node to the cluster whenever we receive it?

Yes, you can. The default crush rule for replicated pools simply spreads 3 copies across 3 different OSDs, which can be on the same host. After the third node is online you'll have to either change that default crush rule to store only one copy per host, or add another rule that handles the distribution properly. For the latter you'll also have to change the pool's crush rule.
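A sketch of the second option (pool and rule names are assumptions): start with a rule whose failure domain is "osd", so 3 copies fit on 2 hosts, then switch to a host-level rule once the third node joins.

```shell
# Assumption: pool "mypool" already exists; rule names are arbitrary.
# Phase 1 (two hosts): allow multiple copies per host by using
# "osd" as the failure domain:
ceph osd crush rule create-replicated replicated_per_osd default osd
ceph osd pool set mypool crush_rule replicated_per_osd
# Phase 2 (third host online): switch to a host failure domain so
# each of the 3 copies ends up on a different node:
ceph osd crush rule create-replicated replicated_per_host default host
ceph osd pool set mypool crush_rule replicated_per_host
```

Expect data movement when you change the pool's crush rule, since PGs will be remapped to satisfy the new placement.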

Regards,
Eugen

Quoting adhoba...@gmail.com:

Dear Team,

We are creating a new ceph cluster with three OSD nodes; each node will have 38TB of disk space. Unfortunately we currently have only two servers; the third server will be delivered in 3 weeks. I need your help to plan the cluster setup; please suggest which approach is the right one to initiate the setup.

1) We will create the ceph cluster with two OSD nodes by setting "osd pool default size = 2"; after that we can add the third node to the live cluster and change the pool's replication factor from 2 to 3 with # "ceph osd pool set <Pool> size 3".

2) Can we create the ceph cluster with 2 OSD nodes while keeping the default replication factor of 3, and add the third OSD node to the cluster whenever we receive it?
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


