Hello, I'm trying to set up a pacemaker/corosync cluster on Ubuntu Trusty to access a SAN for use with OpenNebula[1]:
- pacemaker 1.1.10+git20130802-1ubuntu2.1
- corosync 2.3.3-1ubuntu1

I have a dedicated VLAN for cluster communications. Each bare metal node has a dedicated interface eth0 on that VLAN; 3 other interfaces are bonded as bond0 and plugged into an Open vSwitch as a VLAN trunk.

One VM has two interfaces on this Open vSwitch:

- one for cluster communication
- one to provide services, with the default route on it

My 3 bare metal nodes are fine, with pacemaker up and running dlm/cLVM/GFS2, but my VM is always isolated. I set up a dedicated quorum VM (standby=on) with a single interface plugged into the cluster communication VLAN, and there corosync/pacemaker work.

I ran ssmping to debug multicast communication and found that the VM can only reach the bare metal nodes with unicast pings. I ended up adding a route for multicast:

    ip route add 224.0.0.0/4 dev eth1 src 192.168.1.111

But it does not work. I only manage to have my VM join as a corosync member like the others when the default route is on the same interface as my multicast traffic.

I'm sure there is something I do not understand about corosync and multicast communication; do you have any hints?

Regards.

Footnotes:
[1] http://opennebula.org/

-- 
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
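For reference, the totem interface section of my corosync.conf looks roughly like the sketch below (the mcastaddr/mcastport values are illustrative, not necessarily what I run; the 192.168.1.0 network is taken from the eth1 address above). My understanding is that bindnetaddr should name the cluster VLAN network so corosync binds to eth1 on the VM rather than to the interface holding the default route:

```
totem {
    version: 2
    interface {
        ringnumber: 0
        # Network address of the dedicated cluster VLAN (eth1 side
        # on the VM, 192.168.1.111/24 in my case), so corosync binds
        # there instead of following the default route.
        bindnetaddr: 192.168.1.0
        # Illustrative multicast group/port -- any valid values work
        # as long as all nodes agree on them.
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
}
```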
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org