Hello,

I have a cluster with this configuration:

osd pool default size = 3
osd pool default min size = 1

I have 5 monitor nodes and 7 OSD nodes.

I have changed the CRUSH map to divide the Ceph cluster into two
datacenters: the first one will hold the part of the cluster with 2
copies of the data, and the second one will hold the part with one
copy, for emergencies only.
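
To make it concrete, the rule I have in mind looks roughly like
this (the bucket names dc1 and dc2 and the rule name are only
placeholders for our real datacenter buckets):

rule replicated_2plus1 {
    id 1
    type replicated
    step take dc1
    step chooseleaf firstn 2 type host
    step emit
    step take dc2
    step chooseleaf firstn 1 type host
    step emit
}

With pool size = 3, this should place 2 copies on hosts in dc1 and
1 copy on a host in dc2; the pool is then pointed at the rule with
"ceph osd pool set <pool> crush_rule replicated_2plus1".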

For now, I still have this cluster in one location.

This cluster has 1 PiB of raw capacity, so it would be very
expensive to add a further ~300 TB of capacity to get 2+2 data redundancy.
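
Roughly, the math behind that: with 3 replicas, 1 PiB raw gives
about 1024 TiB / 3 ≈ 340 TiB usable, and a 2+2 layout means a fourth
copy of everything, i.e. about another third of raw capacity on top,
which is where the ~300 TB figure above comes from.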

Will it work?

If I turn off the location with 1/3 of the copies, will the cluster
stay operational? I believe it will, and that this is the better
case. And what if the location with 2/3 of the copies dies? There is
a CephFS pool on this cluster, and it is the main use of the cluster.
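
My expectation is based on min_size: with size = 3 and min_size = 1,
PGs should stay active (though degraded) as long as at least one
replica survives. This is what I would check, with cephfs_data
standing in for our real pool names:

ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data min_size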

Many thanks for your advice.

Sincerely
Jan Marek
-- 
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html

