> We have an application cluster with Ceph as the storage solution; the
> cluster consists of six servers, so we've installed a monitor on every
> one of them, to keep the Ceph cluster sane (quorum) if a server or two
> of them goes down.
You definitely want an odd number, to avoid the classic split-brain problem.
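A quick sketch of the quorum arithmetic (plain Python, nothing Ceph-specific): monitors need a strict majority to form quorum, so an even-sized mon cluster tolerates no more failures than the odd-sized cluster one smaller.

```python
def mon_failures_tolerated(n_mons: int) -> int:
    """Quorum needs a strict majority (n // 2 + 1), so the cluster
    survives losing everything beyond that majority."""
    majority = n_mons // 2 + 1
    return n_mons - majority

# 5 mons and 6 mons both survive exactly 2 monitor failures,
# so the 6th monitor buys no extra safety.
for n in (3, 4, 5, 6, 7):
    print(n, "mons tolerate", mon_failures_tolerated(n), "failures")
```

This is why the usual advice is 3 or 5 mons: the even sizes only add another participant that has to agree, without raising the failure budget.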
Hi,

On 08.03.2015 04:32, Anthony D'Atri wrote:
> 1) That's an awful lot of mons. Are they VMs or something? My sense is
> that mons >5 have diminishing returns at best.

We have an application cluster with Ceph as the storage solution; the
cluster consists of six servers, so we've installed a monitor on every
one of them, to keep the Ceph cluster sane (quorum) if a server or two
of them goes down.
1) That's an awful lot of mons. Are they VMs or something? My sense is that
mons >5 have diminishing returns at best.
2) Only two OSD nodes? I assume you aren't running 3 copies of the data, or
using rack failure domains.
3) The new nodes will have fewer OSDs? Be careful with host / OSD weighting
to avoid a gro…
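To illustrate the weighting concern (a hypothetical sketch, not actual CRUSH code; the host names and the 12-OSD layout of the new node are assumptions): by default a host's CRUSH weight is the sum of its OSD weights, so a smaller new host attracts proportionally fewer placement groups, and the proportions are worth checking before the node goes in.

```python
# Hypothetical layout: two existing 36-OSD hosts plus a smaller new one.
# Weights follow the common convention of roughly TB-per-OSD.
hosts = {
    "node-a": [3.0] * 36,   # 36 x 3 TB OSDs (existing)
    "node-b": [3.0] * 36,   # 36 x 3 TB OSDs (existing)
    "node-c": [3.0] * 12,   # new node with fewer OSDs (assumed count)
}

# Host weight = sum of its OSD weights.
host_weight = {h: sum(w) for h, w in hosts.items()}
total = sum(host_weight.values())

# Share of data each host receives under weight-proportional placement
# (CRUSH is more subtle than this, but the proportions are the point).
for host, w in host_weight.items():
    print(f"{host}: weight {w:.0f}, ~{w / total:.1%} of the data")
```

With these assumed numbers the new node would hold about one seventh of the data while the two big hosts hold three sevenths each, which is the kind of skew the host/OSD weighting needs to account for.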
Hi guys,

I have a few questions regarding adding another OSD node to the cluster. I
already have a production cluster with 7 mons and 72 OSDs; we mainly use
librados to interact with the objects saved in Ceph. Our OSDs are 3 TB WD
disks and they reside on two servers (36 OSDs per server), so long story …