[lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-28 Thread Peter Steele
We're currently using the CentOS libvirt-LXC tool set for creating and 
managing containers under CentOS 7.1. This tool set is being deprecated, 
though, so we plan to change our containers to run under the 
linuxcontainers.org framework instead. For simplicity I'll refer to this 
as simply LXC, as opposed to libvirt-LXC.


Under libvirt-LXC, we have our containers configured to use host 
bridging, and so they are connected to the host network. Each container 
has its own static IP and appears as a physical machine on the network. 
The containers can see each other as well as other systems running on 
the same network.


I've been unable so far to get host bridging to work with LXC. There is 
a fair amount of information available on networking for LXC, but there 
seem to be a lot of different "flavors"--everyone has their own unique 
solution. I've tried various configurations and I am able to get the 
containers to see each other, but I have not been able to get them to 
see the host network or the external internet. The config I am using 
for my containers looks like this:


lxc.utsname = test1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up

The br0 interface referenced here is the same bridge interface that I 
have configured for use with my libvirt-LXC containers. Some of the 
sites I've come across that discuss setting up host bridging for LXC say 
to configure rules in iptables. However, we do not need any such rules 
with libvirt-LXC, and in fact iptables (or more accurately, firewalld 
under CentOS 7) isn't even enabled on our servers.
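For what it's worth, plain host bridging indeed needs no iptables rules; the one CentOS 7 gotcha worth ruling out is bridge netfilter, which (when enabled) pushes bridged frames through iptables, where a restrictive policy can silently drop container traffic. A quick sanity check, as a sketch:

```shell
# Sketch of a sanity check: plain host bridging needs no iptables rules,
# but if bridge netfilter is enabled, bridged frames are passed through
# iptables, where a restrictive policy can silently drop container traffic.
systemctl is-active firewalld                # expect "inactive" here
sysctl net.bridge.bridge-nf-call-iptables    # 0 = bridged frames bypass iptables
```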


In addition to this LXC config I'm using, I have also created 
/etc/sysconfig/network-scripts/ifcfg-eth0 with the following entries:


DEVICE=eth0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.110.222
NETMASK=255.255.0.0
GATEWAY=172.16.0.1

This is a pretty standard configuration for specifying static IPs. This 
is the exact same file that I use for my libvirt-LXC based containers. 
As I stated, the LXC containers I've created can see each other, but 
they cannot access the host network. They can't even ping their own 
host or the gateway. The routing table, though, is the same for both my 
LXC and libvirt-LXC containers:


# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.16.0.1      0.0.0.0         UG    0      0        0 eth0
link-local      0.0.0.0         255.255.0.0     U     1021   0        0 eth0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

I'm not sure what LXC magic I am missing to open the containers up to 
the outside network. I'm using the same container template for both my 
LXC and libvirt-LXC tests, and I am using the same host for both. What 
am I missing?


The output of "bridge link show br0" with one container running is:

# bridge link show br0
3: bond0 state UP: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 19
6: virbr0-nic state DOWN: <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
22: veth5BJDXU state UP: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 2


The veth entry is present only when the container is running. In my 
equivalent setup using libvirt-LXC with one container, the output of 
this command is essentially the same, except the generated name is 
veth0.


Any advice on how to resolve this issue would be appreciated.

Peter

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-28 Thread Serge Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> We're currently using the CentOS libvirt-LXC tool set for creating
> and managing containers under CentOS 7.1. This tool set is being
> deprecated though so we plan to change our containers to run under
> the linuxcontainers.org framework instead. For simplicity I'll refer
> to this as simply LXC instead of libvirt-LXC.
> 
> Under libvirt-LXC, we have our containers configured to use host
> bridging and so are connected to the host network. Each container
> has their own static IP and appear as physical machines on the
> network. They can see each other as well as other systems running on
> the same network.

Can you show the host and container network details and container
xml for your libvirt-lxc setup?  If machines A and B are on the
same LAN, with containers on A, are you saying that B can ping
the containers on A?

-serge

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-28 Thread Peter Steele

On 08/28/2015 02:08 PM, Serge Hallyn wrote:

> Can you show the host and container network details and container
> xml for your libvirt-lxc setup?  If machines A and B are on the
> same LAN, with containers on A, are you saying that B can ping
> the containers on A?


Yes, in our libvirt-LXC setup, containers on machine A can ping 
containers on machine B. They all have static IPs taken from the same 
subnet. This was easy to set up in libvirt-LXC. In fact, I just used 
the default behavior provided by libvirt.


Each server has a br0 bridge interface with a static IP assigned to it. 
This is independent of anything to do with libvirt per se; the bridge is 
set up using a standard CentOS 7 configuration file. For example, one of 
my servers has an ifcfg-br0 file that looks like this:


# cat /etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
NAME=br0
BOOTPROTO=none
ONBOOT=yes
TYPE=Bridge
USERCTL=no
NM_CONTROLLED=no
IPADDR=172.16.110.202
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
DOMAIN=local.localdomain
DEFROUTE=yes
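Not shown above is the uplink side of the bridge. On CentOS the usual pattern is for the physical (or bond) interface's ifcfg file to name the bridge with BRIDGE=, while the IP address lives on br0 itself. A sketch, assuming the bond0 uplink described later in the thread (values are illustrative):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch; assumes a bond0
# uplink, with illustrative bonding options)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
BRIDGE=br0      # enslave bond0 to br0; the IP stays on br0, not here
BONDING_OPTS="mode=active-backup miimon=100"
```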

The containers themselves are created using a command similar to this:

virt-install --connect=lxc:///  \
--os-variant=rhel7 \
--network bridge=br0,mac=RANDOM \
--name=test1 \
--vcpus=2 \
--ram=4096 \
--container \
--nographics \
--noreboot \
--noautoconsole \
--wait=60  \
--filesystem /lxc/test1/rootfs/,/

The xml that this generates for the containers is pretty basic:

<interface type='bridge'>
  <mac address='...'/>
  <source bridge='br0'/>
</interface>


The container ends up with an eth0 interface with the specified mac 
address, bridged through br0. The br0 interface itself is not visible in 
the container, only lo and eth0.


I did not have to configure anything specifically on the server beyond 
the ifcfg-br0 file. I relied on the default behavior and configuration 
provided by libvirt-LXC. There *is* a network-related configuration for 
libvirt, but it's only used if a container uses NAT instead of bridging:


# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>43852829-3a0e-4b27-a365-72e48037020f</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

I don't think the info in this xml plays any role in containers 
configured with bridged networking.


The command I use to create my LXC containers looks like this:

# lxc-create -t /bin/true -n test1 --dir=/lxc/test1/rootfs

I populate the rootfs manually using the same template that I use with 
libvirt-LXC, and subsequently customize the container with its own 
ifcfg-eth0 file, /etc/hosts, etc.


I'm clearly missing a configuration step that's needed to set up LXC 
containers with bridged networking like I have with libvirt-LXC...


Peter




Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Neil Greenwood
Hi Peter, 

On 28 August 2015 23:11:51 BST, Peter Steele  wrote:
>Yes, in our libvirt-LXC setup, containers on machine A can ping
>containers on machine B. They all have static IPs taken from the same
>subnet. This was easy to set up in libvirt-LXC. In fact, I just used
>the default behavior provided by libvirt.
>
>[detailed ifcfg-br0, virt-install, and libvirt xml config snipped]
>
>I'm clearly missing a configuration step that's needed to set up LXC
>containers with bridged networking like I have with libvirt-LXC...

Do you have an ifcfg-br0 in your LXC configuration? If the VMs can see 
each other, I think most of the settings are correct, apart from the 
bridge not being connected to the host's eth0. I'm not that familiar 
with CentOS networking though, so I don't know which bit you need to 
change. 


Neil

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Peter Steele

On 08/29/2015 01:09 AM, Neil Greenwood wrote:

> Do you have an ifcfg-br0 in your LXC configuration? If the VMs can see
> each other, I think most of the settings are correct, apart from the
> bridge not being connected to the host's eth0. I'm not that familiar
> with CentOS networking though, so I don't know which bit you need to
> change.


Yes, I have an ifcfg-br0 file in my LXC configuration. It's identical to 
what I use with my libvirt-LXC setup. In my particular case, I have the 
bridge connected to a bonded interface, but it works the same as if I 
were to connect it to a host's eth0. This lets me bond, say, four NICs 
(eth0-eth3) into one bond0 interface. The bridge is connected to the 
bond0 interface, but from its perspective it acts the same as an eth0 
interface. I of course need to maintain this configuration in moving 
from libvirt-LXC to LXC.


Ultimately I will want to remove the entire set of libvirt packages 
from my setup and install only the LXC rpms. I know that LXC was 
developed by Canonical, but my impression was that it would work under 
CentOS/Fedora as well. Unfortunately I'm not that familiar with Ubuntu 
networking, and a lot of the examples appear to be specific to Ubuntu.


For example, I see references to the file /etc/network/interfaces. Is 
this an LXC thing or is this a standard file in Ubuntu networking?


Mark Constable asked a related question stemming from my original post 
and commented on the file /etc/default/lxc-net. I assume this file is 
*not* specific to Ubuntu. Do I need to create this file in my CentOS setup?


It might be useful to configure a CentOS system without libvirt 
installed and do some LXC tests without libvirt-LXC getting in the way. 
I was hoping the move to LXC wouldn't be too painful, but it looks like 
I'm going to have to work for it a bit.



Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Mark Constable

On 29/08/15 23:54, Peter Steele wrote:

> For example, I see references to the file /etc/network/interfaces. Is
> this an LXC thing or is this a standard file in Ubuntu networking?

It's a standard pre-systemd debian/ubuntu network config file.

> Mark Constable asked a related question stemming from my original post
> and commented on the file /etc/default/lxc-net. I assume this file is
> *not* specific to Ubuntu.


Aside from some ubuntu-specific apparmor etc. files, these are what the 
ubuntu lxc package installs (confusingly, the lxd-client package 
installs the "lxc" command)...

/etc/bash_completion.d/lxc
/etc/default/lxc
/etc/dnsmasq.d-available/lxc
/etc/init/lxc.conf
/etc/init/lxc-instance.conf
/etc/init/lxc-net.conf
/etc/lxc/default.conf
/lib/systemd/system/lxc-net.service
/lib/systemd/system/lxc.service

~ cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Eek, lxc-net does not seem to be part of a package, so I'm not sure how 
I got that file!

~ dpkg -S /etc/default/lxc-net
dpkg-query: no path found matching pattern /etc/default/lxc-net

However, this config file refers to it, so maybe I copied it from some 
howto/tutorial...

~ egrep -v "^(#|$)" /etc/default/lxc
LXC_AUTO="true"
USE_LXC_BRIDGE="false"  # overridden in lxc-net
[ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
LXC_SHUTDOWN_TIMEOUT=120

FWIW I only use the lxc command for unpriv containers via the lxd 
daemon as of the last 4 or 5 months and, like you I think, have no 
interest in the default NAT'd 10.0.3.* lxcbr0 network. My main test 
honeypot container on my laptop is at https://goldcoast.org. It and 
ma...@goldcoast.org seem to work most of the time.


Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Peter Steele

On 08/29/2015 07:29 AM, Mark Constable wrote:

> On 29/08/15 23:54, Peter Steele wrote:
>> For example, I see references to the file /etc/network/interfaces. Is
>> this an LXC thing or is this a standard file in Ubuntu networking?
>
> It's a standard pre-systemd debian/ubuntu network config file.


That's what I was beginning to suspect since creating this in my CentOS 
environment seemed to have no effect on LXC at all. Knowing this will 
help me filter out examples that talk about creating these files.


Do you suppose it's possible that Canonical LXC isn't entirely 
compatible with CentOS?



Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Fajar A. Nugraha
On Sat, Aug 29, 2015 at 10:40 PM, Peter Steele  wrote:

> On 08/29/2015 07:29 AM, Mark Constable wrote:
>> On 29/08/15 23:54, Peter Steele wrote:
>>> For example, I see references to the file /etc/network/interfaces.
>>> Is this an LXC thing or is this a standard file in Ubuntu networking?
>>
>> It's a standard pre-systemd debian/ubuntu network config file.
>
> That's what I was beginning to suspect since creating this in my CentOS
> environment seemed to have no effect on LXC at all. Knowing this will
> help me filter out examples that talk about creating these files.
>
> Do you suppose it's possible that Canonical LXC isn't entirely
> compatible with CentOS?

Actually there's no such thing as "canonical lxc".

While lxc's main developers are currently from canonical, the lxc project
itself isn't really tied to a specific distro. For example, since lxc-1.1.0
the bundled init script should function similarly on all distros, with
lxcbr0 (including dnsmasq) running by default.

The main advantages of ubuntu compared to other distros w.r.t. lxc that 
I can see are:
- better apparmor integration, so (among others) it should be relatively
safer to run privileged containers under an ubuntu host
- better container/guest systemd support, where an ubuntu vivid/wily
guest should be able to run as a privileged container out-of-the-box
(and wily should be able to run as an unprivileged container)

If you only care about "having privileged containers running", then a
centos host should work fine.

Back to your original question, you need to have some basic understanding
of your distro's networking setup. For example, debian/ubuntu uses
/etc/network/interfaces (one file for all network interfaces) while centos
uses /etc/sysconfig/network-scripts/ifcfg-* (one file for each network
interface). To achieve what you want, basically you need to create a bridge
(e.g. br0) on top of your main network interface (e.g. eth0) that would be
used by containers. The instructions are specific to your distro (e.g.
centos and ubuntu are different), but not specific to lxc (i.e. the same
bridge setup can be used by kvm/xen).

One bridge setup example (from google):
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_network-bridge.html
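For a quick, non-persistent experiment, the same bridge can also be built with iproute2 alone. A sketch using the addresses from this thread (assumes an eth0 uplink; on CentOS the ifcfg files in the guide above are the persistent way, and running this over the interface you are logged in through will drop your session):

```shell
# Non-persistent bridge setup sketch; changes are lost on reboot.
ip link add name br0 type bridge
ip link set dev br0 up
ip link set dev eth0 master br0      # move the uplink into the bridge
ip addr flush dev eth0               # the IP must live on br0, not eth0
ip addr add 172.16.110.202/16 dev br0
ip route add default via 172.16.0.1 dev br0
```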

From the snippets you posted, you created
"/etc/sysconfig/network-scripts/ifcfg-eth0", but you didn't mention
where. If it's on the host, then you got it wrong, since you seem to be
using "bond0" on the host. If it's in the container (which is correct),
then the easiest way to check where the problems lie is with tcpdump:
- on the container: "ping -n 172.16.0.1"
- on the host: "tcpdump -n -i bond0 172.16.0.1" and "tcpdump -n -i
veth5BJDXU 172.16.0.1" (substitute the veth name with whatever you have)

If all goes well, you should see both icmp reply and request on both
interfaces (bond0 and veth5BJDXU). If you have forwarding problems, you
will see packets on veth interface, but not on bond0.
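Another check along the same lines, as a sketch (veth name taken from earlier in the thread): confirm which bridge the container's veth is actually enslaved to; "master <name>" in the output is the bridge.

```shell
# Which bridge is this veth attached to? Look for "master <bridge>".
ip -o link show veth5BJDXU
# equivalently, via sysfs:
basename "$(readlink /sys/class/net/veth5BJDXU/master)"
```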

-- 
Fajar

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-29 Thread Fajar A. Nugraha
On Sun, Aug 30, 2015 at 5:10 AM, Fajar A. Nugraha  wrote:

> - on the host: "tcpdump -n -i bond0 172.16.0.1" and "tcpdump -n -i
> veth5BJDXU 172.16.0.1" (substitute the veth name with whatever you have)
>
>
It should be "tcpdump -n -i bond0 host 172.16.0.1" and "tcpdump -n -i
veth5BJDXU host 172.16.0.1"

Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-30 Thread Peter Steele

On 08/29/2015 03:26 PM, Fajar A. Nugraha wrote:

> It should be "tcpdump -n -i bond0 host 172.16.0.1" and "tcpdump -n -i
> veth5BJDXU host 172.16.0.1"

Okay, I ran this test, plus a few others. This specific test generated 
no icmp traffic on either bond0 or the veth interface. After starting 
these tcpdump commands, I connected to the container and ran a ping to 
172.16.0.1. I got a "host unreachable" error, so I'm not surprised 
nothing showed up in the tcpdump output. I did the identical test with 
a libvirt container and got the expected icmp request and reply records:


10:44:05.379736 IP 172.16.0.1 > 172.16.110.204: ICMP echo reply, id 2656, seq 3, length 64
10:44:06.390229 IP 172.16.110.204 > 172.16.0.1: ICMP echo request, id 2656, seq 4, length 64
10:44:06.390689 IP 172.16.0.1 > 172.16.110.204: ICMP echo reply, id 2656, seq 4, length 64
10:44:07.400236 IP 172.16.110.204 > 172.16.0.1: ICMP echo request, id 2656, seq 5, length 64


It's pretty clear the LXC containers are not talking to the bridge. Once 
started, I can't even ping a container's IP address from the host, and 
likewise the container cannot ping its host. LXC containers can only 
ping each other, behaving exactly like I'd expect NAT to behave. The 
config I am using must not be correct. I'm using this config:


lxc.utsname = test1
lxc.network.type = veth
lxc.network.name = eth0
lxc.network.link = br0
lxc.network.flags = up

You'd think this would tell the container to link to the br0 bridge, but 
this isn't doing what I intend. The brctl command shows what's really 
going on:


# brctl show
bridge name     bridge id           STP enabled     interfaces
br0             8000.52540007b444   no              bond0
                                                    vnet0
                                                    vnet1
virbr0          8000.525400d0df7b   yes             veth5BJDXU
                                                    vethU3VLKX
                                                    virbr0-nic

The two vnet entries associated with the br0 bridge interface are the 
ones that get created when I start my libvirt-LXC containers. The two 
veth entries associated with virbr0 are created when I start my LXC 
containers. The virbr0 bridge is created by libvirt to support 
containers (and VMs) that are configured to use NAT addressing. We've 
always used host bridging and so have never used this virbr0 interface. 
For whatever reason, the LXC containers want to link to virbr0 despite 
the fact that br0 is specified in their config.


Clearly there is user error here on my part and I am not correctly 
specifying how to configure LXC containers to use host bridging under 
CentOS. I'll have to do some more digging.


Peter


Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-30 Thread Peter Steele

On 08/30/2015 11:10 AM, Peter Steele wrote:

> Clearly there is user error here on my part and I am not correctly
> specifying how to configure LXC containers to use host bridging under
> CentOS. I'll have to do some more digging.

I figured it out. I've been using commands similar to

lxc-create -t /bin/true -n test1 --dir=/lxc/test1/rootfs

I mistakenly assumed that by specifying the directory for the rootfs to 
be /lxc/test1/rootfs the config file for the container would be 
/lxc/test1/config. Stupid mistake. LXC still puts the config file in the 
default location /var/lib/lxc/test1/config, and this file was where 
virbr0 was being specified. My own config file was correct such as it 
was, except that it was being ignored. I changed the config file 
/var/lib/lxc/test1/config to use br0 instead of virbr0 and that solved 
my problem. My container can now see the full subnet.
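For concreteness, the check and fix can be sketched like this (assuming the default lxcpath /var/lib/lxc):

```shell
# Confirm which path lxc actually reads configs from, then which bridge
# the generated config names, and point the container at br0.
lxc-config lxc.lxcpath                      # prints the lxcpath in use
grep '^lxc.network.link' /var/lib/lxc/test1/config
sed -i 's/^lxc.network.link.*/lxc.network.link = br0/' /var/lib/lxc/test1/config
```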


A stupid beginner's mistake. What a waste of time... :-(

Thanks for all the feedback.

Peter




Re: [lxc-users] Configuring LXC containers to use a host bridge under CentOS 7

2015-08-31 Thread Serge Hallyn
Quoting Peter Steele (pwste...@gmail.com):
> On 08/30/2015 11:10 AM, Peter Steele wrote:
> >Clearly there is user error here on my part and I am not correctly
> >specifying how to configure LXC containers to use host bridging
> >under CentOS. I'll have to do some more digging.
> >
> 
> I figured it out. I've been using commands similar to
> 
> lxc-create -t /bin/true -n test1 --dir=/lxc/test1/rootfs
> 
> I mistakenly assumed that by specifying the directory for the rootfs
> to be /lxc/test1/rootfs the config file for the container would be
> /lxc/test1/config. Stupid mistake. LXC still puts the config file in

D'oh.  FWIW, [-P | --lxcpath] is the option for that.  (--dir 
specifically means specify the container rootfs.)
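A sketch of that, assuming /lxc as the alternate lxcpath:

```shell
# Override the lxcpath so both the rootfs and the generated config
# live under /lxc instead of /var/lib/lxc.
lxc-create -P /lxc -t /bin/true -n test1 --dir=/lxc/test1/rootfs
# config now lands at /lxc/test1/config; -P must accompany every lxc-* call:
lxc-start -P /lxc -n test1
```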

> the default location /var/lib/lxc/test1/config, and this file was
> where virbr0 was being specified. My own config file was correct
> such as it was, except that it was being ignored. I changed the
> config file /var/lib/lxc/test1/config to use br0 instead of virbr0
> and that solved my problem. My container can now see the full
> subnet.