Hello Patrizio,
If I understood correctly, the master server is the OpenNebula server
and the slaves are the KVM hosts. The master is connected to the Internet,
has a public IP address and is also connected to a private LAN; the slaves
(KVM servers) are connected only to the private LAN.
You also use the master server to masquerade traffic from the VMs to the
Internet via iptables.
If that's correct, you have two possible solutions:
1) configure the public IPs as secondary addresses on the master server and
set up iptables to DNAT traffic from each public IP to the corresponding
private one;
2) install a second Ethernet card in each slave server, connect it to the
public network, and create a second bridge on it (without assigning a real
public IP address to the host).
In my opinion, the second solution is better. A rough sketch of both follows.
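Roughly, and assuming eth0 is the master's public interface, eth1 is the new
public-facing NIC on the slaves, and X.Y.Z.1 / 192.168.100.50 are placeholder
addresses, the two options would look like this:

  # Option 1 (on the master): public IP as secondary address, DNAT to the VM
  ip addr add X.Y.Z.1/24 dev eth0
  iptables -t nat -A PREROUTING -d X.Y.Z.1 -j DNAT --to-destination 192.168.100.50
  iptables -A FORWARD -d 192.168.100.50 -j ACCEPT

  # Option 2 (on each slave): second NIC bridged to the public segment,
  # no IP on the host itself; VMs attached to this bridge get the public IPs
  brctl addbr br1
  brctl addif br1 eth1
  ip link set eth1 up
  ip link set br1 up

Of course, adjust the interface names and addresses to your setup.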
Bye,
Alberto
On 04/07/2012 16:36, Patrizio Dazzi wrote:
Dear OpenNebulers,
As a researcher of the HPC Lab at ISTI-CNR I am working for the CONTRAIL
EU project (http://www.contrail-project.eu/), whose main aim is to conceive
and develop a holistic system for building cloud federations that can be
managed in an integrated and seamless way.
For the reference implementation of the CONTRAIL system, we decided to use
OpenNebula as the provider-level IaaS. As a consequence, CNR and a few other
project partners are each setting up an OpenNebula cloud.
Unfortunately, at CNR we are experiencing some issues with our OpenNebula
configuration. These are mainly due to the limited number of public IPs
available on our side and our consequent decision to reserve them for the
VMs, thus avoiding assigning a public IP to each physical machine.
Let me describe what's going on on our side.
We have installed the tarball distribution of OpenNebula 3.4.1 to run
virtual machines on a (KVM-based) cluster made of 5 computers: a front-end
machine and 4 slave machines. Currently, the master has 2 network interfaces
configured, whereas each slave has only a single network interface
configured. All the nodes of the cluster are running Ubuntu Server 12.04
64-bit.
The slaves of the cluster are connected to the front-end via a gigabit
switch. The front-end uses its second network interface to connect to the
Internet and is the only machine with a public IP. The internal network
uses a range of private IPs (192.168.100.X). The front-end's iptables has
already been properly configured to forward and masquerade connections from
the slaves to the Internet; indeed, we are able to reach the Ubuntu update
sites directly from the slaves.
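For completeness, the masquerading on the front-end is essentially the usual
setup, something along these lines (the interface names eth0 for the public
side and eth1 for the private side are just examples):

  # Enable routing and masquerade the private LAN behind the public interface
  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
  iptables -A FORWARD -i eth1 -o eth0 -s 192.168.100.0/24 -j ACCEPT
  iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT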
I also have a few public IPs that I would like to assign to certain
Virtual Machines that will be run on the cluster.
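In OpenNebula terms, what I would like to define for those addresses is a
fixed virtual network along these lines (the bridge name and the IPs below
are just placeholders):

  # Hypothetical OpenNebula 3.x virtual network template for the public leases
  NAME   = "public"
  TYPE   = FIXED
  BRIDGE = br0
  LEASES = [ IP = "X.Y.Z.1" ]
  LEASES = [ IP = "X.Y.Z.2" ]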
Unfortunately, the slaves are connected only to the private network, so
their virtual bridges, as far as I know, can receive only packets sent to
IPs within the same network address/mask. As a consequence, assigning a
public IP to a VM would be useless, because the packets would never be
routed to the physical machine hosting that public IP.
Can you help me? Do you have any suggestions?
Best Regards,
-- Patrizio
Dr Patrizio Dazzi, Ph.D.
HPC Lab @ ISTI-CNR, Via Moruzzi, 1 - 56126, Pisa, Italy
Phone: +39 050 315 30 74 -- Fax: +39 050 315 20 40
"Genius is one percent inspiration, ninety-nine percent perspiration"
- Thomas Alva Edison
--
----------------------------
Alberto Zuin
via Mare, 36/A
36030 Lugo di Vicenza (VI)
Italy
P.I. 04310790284
Tel. +39.0499271575
Fax. +39.0492106654
Cell. +39.3286268626
www.azns.it - albe...@azns.it
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org