Re: [ceph-users] 2-Node Cluster - possible scenario?

2015-10-25 Thread Alan Johnson
Quorum can be achieved with a single monitor node (for testing purposes this would
be OK, but of course it is a single point of failure). However, the default
replication for pools is three-way (this can be changed), so it is easier to set up
three OSD nodes to start with and one monitor node. In your case the monitor node
would not need to be very powerful, so a lower-spec system could be used, freeing
up your previously suggested mon node to serve as a third OSD node instead.
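
For reference, replication is a per-pool setting; the defaults for newly created
pools can be set in ceph.conf along these lines (just a sketch, using the usual
values):

    [global]
        osd pool default size = 3       # replicas kept per object
        osd pool default min size = 2   # replicas required to keep serving I/O

Existing pools can be changed at any time with "ceph osd pool set <pool> size <n>".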

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Hermann Himmelbauer
Sent: Monday, October 26, 2015 12:17 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] 2-Node Cluster - possible scenario?

Hi,
In a little project of mine I plan to start a Ceph storage cluster with a small
setup and be able to scale it up later. Perhaps someone can give me advice on
whether the following would work (two nodes with OSDs, third node with a monitor
only):

- 2 nodes (enough RAM + CPU), 6 x 3TB hard disks for OSDs -> 9TB usable space in
case of 3x redundancy, 1 monitor on each of the nodes
- 1 extra node that has no OSDs but runs a third monitor.
- 10GBit Ethernet as storage backbone

Later I may add more nodes + OSDs to expand the cluster in case more storage / 
performance is needed.

Would this work / be stable? Or do I need to spread my OSDs across 3 Ceph nodes
(e.g. in order to achieve quorum)? If one of the two OSD nodes fails, would the
storage still be accessible?

The setup should be used for RBD/QEMU only, no cephfs or the like.

Any hints are appreciated!

Best Regards,
Hermann

--
herm...@qwer.tk
PGP/GPG: 299893C7 (on keyservers)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 2-Node Cluster - possible scenario?

2015-10-25 Thread Christian Balzer

Hello,

On Sun, 25 Oct 2015 16:17:02 +0100 Hermann Himmelbauer wrote:

> Hi,
> In a little project of mine I plan to start ceph storage with a small
> setup and to be able to scale it up later. Perhaps someone can give me
> any advice if the following (two nodes with OSDs, third node with
> Monitor only):
> 
> - 2 Nodes (enough RAM + CPU), 6*3TB Harddisk for OSDs -> 9TB usable
> space in case of 3* redundancy, 1 Monitor on each of the nodes

Just for the record, a monitor will be happy with 2GB of RAM and 2GHz of CPU
(more is better), but it does a LOT of time-critical writes, so running it on
decent SSDs (decent in the endurance sense, too) is recommended.

Once you have SSDs in the game, using them for Ceph journals comes
naturally. 

Keep in mind that while you can certainly improve performance later by just adding
more OSDs, SSD journals are such a significant improvement for writes that you may
want to consider them from the start.
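
If you do go the SSD journal route, here is a rough sketch of how an OSD with an
external journal partition is typically created with ceph-deploy (host and device
names below are just examples):

    # one OSD per HDD, journal on its own partition of a shared SSD
    ceph-deploy osd prepare node1:sdb:/dev/sdf1
    ceph-deploy osd prepare node1:sdc:/dev/sdf2

A common rule of thumb is 3-5 HDD journals per SSD; a few GB per journal partition
is plenty (the default "osd journal size" is 5120 MB).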

> - 1 extra node that has no OSDs but runs a third monitor.

Ceph uses the MON with the lowest IP address as the leader, which is busier
(sometimes a lot more so) than the other MONs.
Plan your nodes with that in mind.
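
If you ever want to see which MON is currently the leader, something like the
following should do (if I remember correctly the field is called
quorum_leader_name):

    ceph quorum_status --format json-pretty | grep quorum_leader_name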

> - 10GBit Ethernet as storage backbone
> 
Good for lower latency. 
I assume "storage backbone" means a single network (the "public" network in Ceph
speak). A dedicated 10GbE Ceph private (cluster) network would be a bit of a waste
in your case, though.
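
For clarity, the two networks are configured in ceph.conf; a sketch with example
subnets (addresses are placeholders):

    [global]
        public network  = 192.168.1.0/24   # clients, MONs, OSD front side
        cluster network = 192.168.2.0/24   # OSD replication/recovery (optional)

If you leave "cluster network" out, everything simply runs over the public network,
which is perfectly fine at your scale.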


> Later I may add more nodes + OSDs to expand the cluster in case more
> storage / performance is needed.
> 
> Would this work / be stable? Or do I need to spread my OSDs to 3 ceph
> nodes (e.g. in order to achive quorum). In case one of the two OSD nodes
> fail, would the storage still be accessible?
> 
A monitor quorum of 3 is fine; OSDs don't enter that picture.

However, 3 OSD storage nodes are highly advisable, because with plain HDD OSDs (no
SSD journals) your performance will already be low.
It also saves you from having to deal with a custom CRUSH map.
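
For context, the default CRUSH rule places each replica on a different host, which
is why two hosts plus a replica count of 3 doesn't work out of the box. This is
roughly what the relevant rule looks like in a decompiled CRUSH map (as dumped via
"ceph osd getcrushmap" and "crushtool -d"; names may differ on your cluster):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host   # one replica per host
            step emit
    }

With only two hosts you would have to change "type host" to "type osd" (or
similar), giving up host-level failure isolation, hence the advice above.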

As for accessibility: yes, in theory.
I have certainly tested this with a 2-storage-node cluster and a replication size
of 2 (min_size 1).
With this setup (custom CRUSH map) you will need a min_size of 1 as well.
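
A minimal sketch of the per-pool settings such a 2-node setup would involve (shown
here for the default "rbd" pool):

    ceph osd pool set rbd size 2       # two copies of each object
    ceph osd pool set rbd min_size 1   # keep serving I/O with a single copy left

Note that with min_size 1 the cluster will accept writes while only one copy
exists, which is exactly the window in which you can lose data.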

So again, 3 storage nodes will give you far fewer headaches.

> The setup should be used for RBD/QEMU only, no cephfs or the like.
>
Depending on what these VMs do and how many of them there are, see my comments
about performance above.

Christian
> Any hints are appreciated!
> 
> Best Regards,
> Hermann
> 



-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 2-Node Cluster - possible scenario?

2015-10-25 Thread Hermann Himmelbauer
Hi,
In a little project of mine I plan to start a Ceph storage cluster with a small
setup and be able to scale it up later. Perhaps someone can give me advice on
whether the following would work (two nodes with OSDs, third node with a monitor
only):

- 2 nodes (enough RAM + CPU), 6 x 3TB hard disks for OSDs -> 9TB usable
space in case of 3x redundancy, 1 monitor on each of the nodes
- 1 extra node that has no OSDs but runs a third monitor.
- 10GBit Ethernet as storage backbone

Later I may add more nodes + OSDs to expand the cluster in case more
storage / performance is needed.

Would this work / be stable? Or do I need to spread my OSDs across 3 Ceph nodes
(e.g. in order to achieve quorum)? If one of the two OSD nodes fails, would the
storage still be accessible?

The setup should be used for RBD/QEMU only, no cephfs or the like.

Any hints are appreciated!

Best Regards,
Hermann

-- 
herm...@qwer.tk
PGP/GPG: 299893C7 (on keyservers)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com