I want to set up a load balancer pointing to two VMs in OpenNebula using keepalived
(www.keepalived.org).
I am having a problem implementing the virtual IP (VIP).
The VIP, which is simply an arbitrary IP in the same network, binds to eth0 on my
master VM, and if the master goes down, the VIP floats over and binds to eth0 on
the standby VM.
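
For reference, a minimal keepalived configuration for this kind of setup would look
roughly like the sketch below (the VIP, interface, and master/standby roles are as
described above; virtual_router_id, priority, and the authentication values are
illustrative placeholders, not my exact settings; the standby uses state BACKUP and
a lower priority):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51        # illustrative value
    priority 100                # standby would use a lower priority, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass mypass        # illustrative placeholder
    }
    virtual_ipaddress {
        10.4.104.88/32          # the VIP
    }
}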

However, the VIP cannot be pinged from any VM other than the one currently holding
it, as shown below. The VIP is 10.4.104.88, and my master and standby VMs are
10.4.104.28 and 10.4.104.91 respectively.

[root@DW-LB01-253 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
64 bytes from 10.4.104.88: icmp_seq=1 ttl=64 time=0.127 ms
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 760ms
rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms

[root@DW-LB02-254 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 804ms

[root@DW-LB01-253 ~]# ip add sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:1c brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.28/24 brd 10.4.104.255 scope global eth0
    inet 10.4.104.88/32 scope global eth0
    inet6 fe80::aff:fe04:681c/64 scope link
       valid_lft forever preferred_lft forever

[root@DW-LB02-254 ~]# ip add sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:5b brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.91/24 brd 10.4.104.255 scope global eth0
    inet6 fe80::aff:fe04:685b/64 scope link
       valid_lft forever preferred_lft forever


Thanks and best regards.


Lim
