Hi! In my organisation we are using OpenNebula as our cloud platform.
Currently we are testing the High Availability (HA) feature with a Ceph
cluster as our storage backend. In our test setup we have 3 systems with
front-end HA already successfully set up and configured, with a floating IP
shared between them. Our Ceph cluster (3 OSDs and 3 MONs) runs on these
same 3 machines. The cluster reaches quorum successfully, but we see the
following issues on the OpenNebula 'LEADER' node:

    1) The mon daemon starts successfully, but binds to the floating IP
rather than the node's actual IP.

    2) The osd daemon, on the other hand, goes down after a while with the
following error:
    log_channel(cluster) log [ERR] : map e29 had wrong cluster addr
(192.x.x.20:6801/10821 != my 192.x.x.245:6801/10821)
    192.x.x.20 being the floating IP
    192.x.x.245 being the actual IP
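
We are considering pinning each daemon to the node's real address in
ceph.conf so it cannot pick up the floating IP. A sketch of what we have in
mind for the LEADER node (the mon/osd names below are placeholders for our
actual daemon IDs) — would this be the right approach?

```ini
# ceph.conf on the LEADER node (daemon IDs are placeholders)
[mon.node1]
host = node1
mon addr = 192.x.x.245:6789   ; pin the monitor to the node's actual IP

[osd.0]
public addr = 192.x.x.245     ; address advertised to clients
cluster addr = 192.x.x.245    ; address used for OSD replication traffic
```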

Apart from that, we are getting a HEALTH_WARN status when running ceph -s,
with many PGs in a degraded, unclean, or undersized state.
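
We suspect the degraded PGs are a consequence of the flapping OSD, but in
case a replication mismatch is also involved: as far as we know we are on
the stock pool defaults (our assumption, we have not overridden them), i.e.
something equivalent to:

```ini
[global]
osd pool default size = 3      ; 3 replicas across our 3 OSDs
osd pool default min size = 2  ; PGs stay writable with 2 replicas up
```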

Also, if it matters, our OSDs sit on a separate partition rather than a
whole disk.

We only need to get the cluster into a healthy state in this minimal
setup. Any ideas on how to get past this?

Thanks and Regards,
Rahul S
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
