Quorum can be achieved with a single monitor node (fine for testing purposes, but of course it is a single point of failure). The default replication level, however, is three-way (this can be changed), so it is easier to start with three OSD nodes and one monitor node. In your case the monitor node would not need to be very powerful; a lower-spec system would do, which would free up your previously suggested mon node to be used as a third OSD node instead.
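Just to illustrate the "can be changed" part: the replication level is a per-pool setting. A rough sketch (the pool name "rbd" below is only an example, and the min_size values are just a common choice):

    # defaults for newly created pools, in the [global] section of ceph.conf
    osd pool default size = 3
    osd pool default min size = 2

    # or adjust an existing pool at runtime, e.g. a pool named "rbd"
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2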

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Hermann Himmelbauer
Sent: Monday, October 26, 2015 12:17 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] 2-Node Cluster - possible scenario?

Hi,
In a little project of mine I plan to start with a small Ceph storage setup and scale it up later. Perhaps someone can advise me on whether the following would work (two nodes with OSDs, a third node with a monitor only):

- 2 nodes (enough RAM + CPU), 6*3TB hard disks for OSDs -> 9TB usable space in 
case of 3* redundancy, 1 monitor on each of the nodes
- 1 extra node that has no OSDs but runs a third monitor.
- 10GBit Ethernet as storage backbone

Later I may add more nodes + OSDs to expand the cluster in case more storage / 
performance is needed.

Would this work / be stable? Or do I need to spread my OSDs across 3 Ceph nodes 
(e.g. in order to achieve quorum)? If one of the two OSD nodes fails, would 
the storage still be accessible?

The setup should be used for RBD/QEMU only, no cephfs or the like.

Any hints are appreciated!

Best Regards,
Hermann

--
herm...@qwer.tk
PGP/GPG: 299893C7 (on keyservers)
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
