[one-users] Datastores on different storage media
I'm planning to use SSDs and regular HDDs together as datastores. Can I segregate them, say SSDs under one datastore ID and the HDDs under another?

Thanks and best regards.
Lawrence

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
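[Editor's note] One way to do this — a sketch, not verified against this installation — is to register one datastore per storage tier and then register each image into the datastore that matches the media you want; the datastore an image lives in determines where the VM disk is placed. The names and drivers below are examples only; `DS_MAD`/`TM_MAD` must match your actual storage setup, and each datastore directory (`/var/lib/one/datastores/<id>`) would be mounted on, or symlinked to, the corresponding device:

```
# ssd.ds -- hypothetical template for the SSD-backed datastore
NAME    = "ssd-datastore"
DS_MAD  = fs
TM_MAD  = shared

# hdd.ds -- hypothetical template for the HDD-backed datastore
NAME    = "hdd-datastore"
DS_MAD  = fs
TM_MAD  = shared
```

Each template would be registered with `onedatastore create ssd.ds` (and likewise for the HDD one), and images placed with the `--datastore` option of `oneimage create`, e.g. `oneimage create ... --datastore ssd-datastore`.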
Re: [one-users] Ubuntu and Opennebula in a box
What are the disadvantages of running OpenNebula in a box? I know for sure performance will suffer, but what other drawbacks are there?

Thanks and best regards.
Lim

From: Lim Kean Meng
Sent: Friday, 11 January, 2013 9:59 AM
To: users@lists.opennebula.org
Subject: Ubuntu and Opennebula in a box

The stock version on Ubuntu Precise is still OpenNebula 3.2; is there any way to get the latest one through a PPA? And since which version does OpenNebula support cloud-in-a-box, i.e. front-end and worker node on one server?

Thanks and best regards.
Lim
[one-users] Ubuntu and Opennebula in a box
The stock version on Ubuntu Precise is still OpenNebula 3.2; is there any way to get the latest one through a PPA? And since which version does OpenNebula support cloud-in-a-box, i.e. front-end and worker node on one server?

Thanks and best regards.
Lim

--
DISCLAIMER: This e-mail (including any attachments) is for the addressee(s) only and may contain confidential information. If you are not the intended recipient, please note that any dealing, review, distribution, printing, copying or use of this e-mail is strictly prohibited. If you have received this email in error, please notify the sender immediately and delete the original message. MIMOS Berhad is a research and development institution under the purview of the Malaysian Ministry of Science, Technology and Innovation. Opinions, conclusions and other information in this e-mail that do not relate to the official business of MIMOS Berhad and/or its subsidiaries shall be understood as neither given nor endorsed by MIMOS Berhad and/or its subsidiaries, and neither MIMOS Berhad nor its subsidiaries accepts responsibility for the same. All liability arising from or in connection with computer viruses and/or corrupted e-mails is excluded to the fullest extent permitted by law.
[one-users] NAT in Opennebula
I suppose OpenNebula doesn't support NAT by default; is there any good reference you have come across for enabling it?

Thanks and best regards.
Lim
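[Editor's note] NAT for VMs is usually set up on the host (or front-end) with plain Linux forwarding and iptables rather than by OpenNebula itself. A minimal sketch, assuming the VMs sit on a private subnet 192.168.100.0/24 behind a bridge and the host's uplink is eth0 (both are placeholders for your own addressing):

```
# Enable IPv4 forwarding on the host so it will route the VMs' traffic.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade VM traffic leaving via the public interface.
# 192.168.100.0/24 and eth0 are example values -- substitute your
# VM subnet and uplink interface.
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
```

Both commands require root, and the sysctl/iptables changes would need to be made persistent (e.g. in /etc/sysctl.conf and the distribution's iptables save mechanism) to survive a reboot.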
Re: [one-users] keepalived: problem implementing virtual ip
I think I found out why not just any IP binds to eth0: the IP must be defined in context.sh. In other words, the VM's network does not recognize any IP except the one defined in context.sh, see below.

Q: Since I need two IPs bound to the same interface eth0, how do I force the second IP to be registered in context.sh?

root@one-dev04:/srv/cloud/one/var/262# more context.sh
# Context variables generated by OpenNebula
BROADCAST=10.4.104.255
DNS=192.228.137.100
FILES=/var/lib/one/vm-templates/ONE-centos/centos-init.sh
GATEWAY=10.4.104.254
HOSTNAME=CentOS-6.2-x64
IP_PUBLIC=10.4.104.119
NETMASK=255.255.255.0
NETWORK=10.4.104.0
TARGET=hdb

Thanks and best regards.
Lim

From: Lim Kean Meng
Sent: Wednesday, 17 October, 2012 4:50 PM
To: 'users@lists.opennebula.org'
Subject: keepalived: problem implementing virtual ip

I want to set up a load balancer pointing to two VMs in OpenNebula using keepalived (see www.keepalived.org), and I am having a problem implementing the virtual IP, or VIP. The VIP, which is an arbitrary IP in the same network, binds to eth0 of the master VM, and if the master goes down the VIP floats over and binds to eth0 of the standby VM. But the VIP cannot be pinged from any VM except the one currently holding it, as shown below. The VIP is 10.4.104.88; the master and standby VMs are 10.4.104.28 and 10.4.104.91 respectively.

[root@DW-LB01-253 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
64 bytes from 10.4.104.88: icmp_seq=1 ttl=64 time=0.127 ms
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 760ms
rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms

[root@DW-LB02-254 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 804ms

[root@DW-LB01-253 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:1c brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.28/24 brd 10.4.104.255 scope global eth0
    inet 10.4.104.88/32 scope global eth0
    inet6 fe80::aff:fe04:681c/64 scope link
       valid_lft forever preferred_lft forever

[root@DW-LB02-254 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:5b brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.91/24 brd 10.4.104.255 scope global eth0
    inet6 fe80::aff:fe04:685b/64 scope link
       valid_lft forever preferred_lft forever

Thanks and best regards.
Lim
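[Editor's note] context.sh only contains what the VM template's CONTEXT section defines, so one way to get a second IP into it — a sketch under that assumption, not verified against this setup — is to add a custom variable to the template; `VIP` below is a name chosen for illustration, not a built-in:

```
# Excerpt of a VM template -- VIP is a custom, hypothetical variable.
CONTEXT = [
  HOSTNAME  = "$NAME",
  IP_PUBLIC = "$NIC[IP]",
  VIP       = "10.4.104.88",
  FILES     = "/var/lib/one/vm-templates/ONE-centos/centos-init.sh"
]
```

The init script in FILES could then source context.sh and either add the address itself (e.g. `ip addr add "$VIP"/32 dev eth0`) or, in the keepalived case, simply use `$VIP` when generating keepalived.conf and let keepalived manage the binding.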
[one-users] keepalived: problem implementing virtual ip
I want to set up a load balancer pointing to two VMs in OpenNebula using keepalived (see www.keepalived.org), and I am having a problem implementing the virtual IP, or VIP. The VIP, which is an arbitrary IP in the same network, binds to eth0 of the master VM, and if the master goes down the VIP floats over and binds to eth0 of the standby VM. But the VIP cannot be pinged from any VM except the one currently holding it, as shown below. The VIP is 10.4.104.88; the master and standby VMs are 10.4.104.28 and 10.4.104.91 respectively.

[root@DW-LB01-253 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
64 bytes from 10.4.104.88: icmp_seq=1 ttl=64 time=0.127 ms
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 760ms
rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms

[root@DW-LB02-254 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 804ms

[root@DW-LB01-253 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:1c brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.28/24 brd 10.4.104.255 scope global eth0
    inet 10.4.104.88/32 scope global eth0
    inet6 fe80::aff:fe04:681c/64 scope link
       valid_lft forever preferred_lft forever

[root@DW-LB02-254 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:5b brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.91/24 brd 10.4.104.255 scope global eth0
    inet6 fe80::aff:fe04:685b/64 scope link
       valid_lft forever preferred_lft forever

Thanks and best regards.
Lim
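[Editor's note] For reference, a minimal keepalived configuration for this VIP might look like the following; the `virtual_router_id` and priority values are arbitrary examples, not taken from the poster's setup:

```
# /etc/keepalived/keepalived.conf on the master (10.4.104.28).
# On the standby (10.4.104.91) use "state BACKUP" and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51      # arbitrary; must match on both nodes
    priority 100              # standby uses e.g. 90
    advert_int 1
    virtual_ipaddress {
        10.4.104.88
    }
}
```

The `ip add sh eth0` output in the message shows this part already works (the /32 appears on the master), so the failover mechanism itself looks fine. One possibility worth checking — an observation, not a confirmed diagnosis — is whether the hosts apply per-VM traffic filtering tied to the leased IP/MAC (e.g. ebtables rules from the network driver): that would drop packets addressed to an IP outside the lease and would match the symptom of the VIP being reachable only from the VM holding it.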
Re: [one-users] keepalived: problem implementing virtual ip
It is on OpenNebula VMs; I am using two VMs for load balancing with keepalived, details in my previous post.

Thanks and best regards.
Lim

From: Teik Hooi Beh [mailto:th...@thbeh.com]
Sent: Thursday, 18 October, 2012 12:08 AM
To: Lim Kean Meng
Cc: users@lists.opennebula.org
Subject: Re: [one-users] keepalived: problem implementing virtual ip

Hi,

Are you using a virtual or a physical load balancer?

Beh

On Wed, Oct 17, 2012 at 4:50 PM, Lim Kean Meng <km@mimos.my> wrote:

I want to set up a load balancer pointing to two VMs in OpenNebula using keepalived (see www.keepalived.org), and I am having a problem implementing the virtual IP, or VIP. The VIP, which is an arbitrary IP in the same network, binds to eth0 of the master VM, and if the master goes down the VIP floats over and binds to eth0 of the standby VM. But the VIP cannot be pinged from any VM except the one currently holding it, as shown below. The VIP is 10.4.104.88; the master and standby VMs are 10.4.104.28 and 10.4.104.91 respectively.

[root@DW-LB01-253 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
64 bytes from 10.4.104.88: icmp_seq=1 ttl=64 time=0.127 ms
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 760ms
rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms

[root@DW-LB02-254 ~]# ping 10.4.104.88
PING 10.4.104.88 (10.4.104.88) 56(84) bytes of data.
^C
--- 10.4.104.88 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 804ms

[root@DW-LB01-253 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:1c brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.28/24 brd 10.4.104.255 scope global eth0
    inet 10.4.104.88/32 scope global eth0
    inet6 fe80::aff:fe04:681c/64 scope link
       valid_lft forever preferred_lft forever

[root@DW-LB02-254 ~]# ip add sh eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 02:00:0a:04:68:5b brd ff:ff:ff:ff:ff:ff
    inet 10.4.104.91/24 brd 10.4.104.255 scope global eth0
    inet6 fe80::aff:fe04:685b/64 scope link
       valid_lft forever preferred_lft forever

Thanks and best regards.
Lim