On 08/28/2015 02:08 PM, Serge Hallyn wrote:
Can you show the host and container network details and container
xml for your libvirt-lxc setup?  If machines A and B are on the
same LAN, with containers on A, are you saying that B can ping
the containers on A?

Yes, in our libvirt-LXC setup, containers on machine A can ping containers on machine B. They all have static IPs taken from the same subnet. This was easy to set up with libvirt-LXC; in fact, I just used the default behavior provided by libvirt.

Each server has a br0 bridge interface with a static IP assigned to it. This is independent of libvirt per se; the bridge is set up using a standard CentOS 7 configuration file. For example, one of my servers has an ifcfg-br0 file that looks like this:

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
NAME=br0
BOOTPROTO=none
ONBOOT=yes
TYPE=Bridge
USERCTL=no
NM_CONTROLLED=no
IPADDR=172.16.110.202
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
DOMAIN=local.localdomain
DEFROUTE=yes

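For the bridge to carry traffic, the server's physical NIC is enslaved to it in the usual way; on CentOS 7 that is typically an ifcfg file along these lines (em1 here is a stand-in for the actual device name):

# cat /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=br0
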
The containers themselves are created using a command similar to this:

# virt-install --connect=lxc:/// \
    --os-variant=rhel7 \
    --network bridge=br0,mac=RANDOM \
    --name=test1 \
    --vcpus=2 \
    --ram=4096 \
    --container \
    --nographics \
    --noreboot \
    --noautoconsole \
    --wait=60 \
    --filesystem /lxc/test1/rootfs/,/
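
The container can then be started and attached to with the usual virsh commands, for example:

# virsh -c lxc:/// start test1
# virsh -c lxc:/// console test1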

The XML that this generates for the container's network interface is pretty basic:

    <interface type='bridge'>
      <mac address='00:16:3e:e1:54:36'/>
      <source bridge='br0'/>
    </interface>

The container ends up with an eth0 interface with the specified MAC address, bridged through br0. The br0 interface itself is not visible inside the container; only lo and eth0 are.
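
On the host, the container's veth endpoint just shows up as another port on br0, which can be confirmed with:

# brctl show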

I did not have to configure anything specifically on the server beyond the ifcfg-br0 file; I relied on the default behavior and configuration provided by libvirt-LXC. There *is* a network-related configuration in libvirt, but it's only used if a container uses NAT instead of bridging:

# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>43852829-3a0e-4b27-a365-72e48037020f</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:f9:cd:a3'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

I don't think the info in this XML plays any role for containers configured with bridged networking.

The command I use to create my LXC containers looks like this:

# lxc-create -t /bin/true -n test1 --dir=/lxc/test1/rootfs

I populate the rootfs manually using the same template that I use with libvirt-LXC, and subsequently customize the container with its own ifcfg-eth0 file, /etc/hosts, etc.
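
For example, a container's ifcfg-eth0 ends up looking something like this (the address shown is illustrative):

# cat /lxc/test1/rootfs/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.110.211
NETMASK=255.255.0.0
GATEWAY=172.16.0.1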

I'm clearly missing a configuration step that's needed to set up LXC containers with bridged networking like I have with libvirt-LXC...
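
If I had to guess, the missing piece is a set of lxc.network.* entries in the container's config file, something along these lines (an untested sketch on my part; the hwaddr is just a placeholder):

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx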

Peter


