Re: [lxc-users] LXD static IP in container

2020-02-11 Thread Joshua Schaeffer
Not sure this will help, but I've provided my configuration for LXD below. I use 
Ubuntu, so you'd have to translate the network configuration portions over to 
RedHat/CentOS. My containers configure their own interfaces 
(static, DHCP, or whatever); LXD simply defines the interface. These are the 
basic steps that I follow:

 1. On the LXD host I set up bridges based on the VLANs that I want a NIC to 
connect to. Those VLAN interfaces use a bond in LACP mode. If you don't use 
VLANs or bonds in your setup, just create the bridge from a physical 
Ethernet device.
 2. I then create a profile for each bridge corresponding to a VLAN.
 3. When I create a container I assign those profiles (one or multiple) to 
create the network devices.
 4. Inside the container I configure the network device just like on any other 
system: physical, VM, container, or otherwise.

I do not use LXD-managed network devices. All my network devices are managed by 
the host operating system. Again, if you don't use VLANs or bonds, you 
can jump straight to creating a bridge; a minimal sketch follows.
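
For reference, a bare-bones bridge over a single physical NIC might look 
something like this (an untested sketch; br0 and enp3s0 are placeholder 
names):

# Hypothetical bridge straight over a physical NIC (no bond, no VLAN).
auto br0
iface br0 inet manual
    bridge_ports enp3s0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0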

Here are the details of each step:

Step 1:
Create the network devices that the LXD containers will use.

lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/01-physical-network.device
# This file contains the physical NIC definitions.

############################
# PHYSICAL NETWORK DEVICES #
############################

# Primary services interface.
auto enp3s0
iface enp3s0 inet manual
    bond-master bond-services

# Secondary services interface.
auto enp4s0
iface enp4s0 inet manual
    bond-master bond-services

lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/02-bonded.device
# This file is used to create network bonds.

##################
# BONDED DEVICES #
##################

# Services bond device.
auto bond-services
iface bond-services inet manual
    # Mode 4 = 802.3ad dynamic link aggregation (LACP).
    bond-mode 4
    # Check link state every 100 ms.
    bond-miimon 100
    # Fast LACP rate (send LACPDUs every second).
    bond-lacp-rate 1
    bond-slaves enp3s0 enp4s0
    # Wait 400 ms before disabling a downed slave, 800 ms before
    # (re-)enabling one.
    bond-downdelay 400
    bond-updelay 800

lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/03-vlan-raw.device
# This file creates raw vlan devices.

####################
# RAW VLAN DEVICES #
####################

# Tagged traffic on bond-services for VLAN 28
auto vlan0028
iface vlan0028 inet manual
    vlan-raw-device bond-services

# Tagged traffic on bond-services for VLAN 36
auto vlan0036
iface vlan0036 inet manual
    vlan-raw-device bond-services

# Tagged traffic on bond-services for VLAN 40
auto vlan0040
iface vlan0040 inet manual
    vlan-raw-device bond-services
...

lxcuser@blllxc02:~$ cat /etc/network/interfaces.d/04-bridge.device
# This file creates network bridges.

##################
# BRIDGE DEVICES #
##################

# Bridged interface for VLAN 28.
auto vbridge-28
iface vbridge-28 inet manual
    bridge_ports vlan0028
    # No STP and zero forwarding delay/wait so the bridge comes up
    # immediately; the same settings repeat for each bridge below.
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 36.
auto vbridge-36
iface vbridge-36 inet manual
    bridge_ports vlan0036
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

# Bridged interface for VLAN 40.
auto vbridge-40
iface vbridge-40 inet manual
    bridge_ports vlan0040
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

Step 2:
Create profiles for the network devices. This is technically not required, but 
it helps to set up new containers much more quickly.

lxcuser@blllxc02:~$ lxc profile list
+----------------------+---------+
|         NAME         | USED BY |
+----------------------+---------+
| 1500_vlan_dns_dhcp   | 5       |
+----------------------+---------+
| 28_vlan_virt_mgmt    | 15      |
+----------------------+---------+
| 40_vlan_ext_core_svc | 0       |
+----------------------+---------+
| 44_vlan_ext_svc      | 4       |
+----------------------+---------+
| 48_vlan_ext_cloud    | 0       |
+----------------------+---------+
| 80_vlan_int_core_svc | 2       |
+----------------------+---------+
| 84_vlan_int_svc      | 4       |
+----------------------+---------+
| 88_vlan_int_cloud    | 0       |
+----------------------+---------+
| 92_vlan_storage      | 0       |
+----------------------+---------+
| default              | 15      |
+----------------------+---------+

lxcuser@blllxc02:~$ lxc profile show 28_vlan_virt_mgmt
config: {}
description: ""
devices:
  mgmt_net:
    name: veth-mgmt
    nictype: bridged
    parent: vbridge-28
    type: nic
name: 28_vlan_virt_mgmt
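
For completeness, a profile like the one above can be built with something 
along these lines (a sketch; the device and profile names match the output 
above):

lxcuser@blllxc02:~$ lxc profile create 28_vlan_virt_mgmt
lxcuser@blllxc02:~$ lxc profile device add 28_vlan_virt_mgmt mgmt_net nic nictype=bridged parent=vbridge-28 name=veth-mgmt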

Step 3:
Create the container with the correct profile(s) to add the network device(s) 
to the container.

lxcuser@blllxc02:~$ lxc init -p default -p 28_vlan_virt_mgmt -p 44_vlan_ext_svc ubuntu:18.04 bllmail02

Step 4:
Connect to the container and setup the interface the same way you setup any 
other system. The example below is set to manual but just change to however you 
want to setup your device.

lxcuser@blllxc02:~$ lxc exec bllmail02 -- cat /etc/network/interfaces.d/51-container-network.device
auto veth-mgmt
iface veth-mgmt inet manual
...

auto veth-ext-svc
iface veth-ext-svc inet manual
...
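
For a static address the same stanza just switches to inet static, e.g. 
(the addresses below are placeholders):

auto veth-mgmt
iface veth-mgmt inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1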
   
lxcuser@blllxc02:~$ lxc exec bllmail02 -- bash

Re: [lxc-users] LXD static IP in container

2020-02-11 Thread Michael Eager

On 2/11/20 11:00 AM, Mike Wright wrote:

On 2/11/20 10:01 AM, Michael Eager wrote:

There's still a lot of confusion.  :-/


Yes, here too.  I'm experimenting with the nic types, but a lot of the 
problems I'm running into have to do with me misunderstanding the LXD 
command syntax.  The docs are rather sparse and seem to be geared toward 
people who already understand this stuff, i.e. the Cliff Notes vs. The Book.


I keep having the feeling I'm being told something, I just don't
know what.  :-(


If nictype=bridged is set in the profile, then a container gets two IP
addresses.  One from DHCP when the container is launched, the second is
a static IP when the container configures the NIC.


The DHCP address is created by lxd based on the profile.  The static 
address is being created by the container itself, so you have two 
separate events taking place.  Use the profile OR the container 
networking scripts, not both (unless you know exactly what you are 
trying to accomplish).
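
One way to see which side is defining the NIC is to dump the container's 
merged configuration and check where the nic device comes from:

lxc config show --expanded <container>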


I removed the eth0 device from the profile and added it to the container
config.  I still get two IP addresses.

If I remove eth0 from both profile and container, it doesn't exist,
naturally, and the container has no IP address.


If nictype=routed, only the static IP is set.  eth0 is present in the
container, but there is no network connectivity.


My speculation is that something needs to set the route.  The 
simplest route would be between the host and container, and could allow 
disparate networks to connect, e.g. 10.X to 192.Y.  Whether that lives on 
the host, the container, or both I've yet to figure out.
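
For what it's worth, my reading of the LXD docs is that with routed the 
host side uses the link-local address 169.254.0.1 as the container's 
gateway, so a static stanza inside the container might look like this 
(an unverified sketch; the address is a placeholder):

auto eth0
iface eth0 inet static
    address 192.0.2.10/32
    post-up ip route add 169.254.0.1 dev eth0
    post-up ip route add default via 169.254.0.1 dev eth0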



If nictype=macvlan, "lxc list" shows that the container has an IP
address from DHCP, but "nmcli connection show" does not display eth0
under DEVICE.  "ip addr" does show eth0, but "ifup eth0" says no device
exists.  (I'm really confused about this; dmesg shows "eth0 renamed from
mac...")


This one makes sense to me.  The container's utilities (nmcli & ilk) get 
their knowledge of the network from config files.  "ip" gets its 
information from inspection and/or specification.  Neither knows about 
the other.



If nictype=ipvlan, an IP address is obtained using DHCP, but no eth0
device appears in the container (i.e., nmcli shows no device, ifup
fails.)  There is network connectivity.

See the comment about macvlan.  The way I see this is macvlan is L2 and 
ipvlan is L3.  Use whichever matches how you deal with network life, IPs 
or MACs.


To have the container handle NIC configuration, rather than LXD, the
container needs to see a device.  Neither ipvlan nor macvlan does this.

If I set nictype:ipvlan in the container config, even if I set
ipv4.address, the IP is from DHCP, not the address I specified.  There
was a comment somewhere that ipvlan doesn't support DHCP, but that may
be for LXC, not LXD.
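
For reference, the device config I'm describing is along these lines 
(the container name and address are placeholders):

lxc config device add <container> eth0 nic nictype=ipvlan parent=enp3s0 ipv4.address=192.0.2.10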

Go to the link to the docs and look for "bridged, macvlan or ipvlan for 
connection to physical network".  That section explains the differences.


I did that, which is why I tried all the combinations above.  The docs
say you can set this or that option, but there's little description of
what happens, or at least, not in the detail needed.  "Sets up new
network device" is pretty general.

https://lxd.readthedocs.io/en/stable-3.0/networks/ mentions ipv4.dhcp,
but that apparently is only for LXD managed network device
configuration, not in a container configuration.

Now, for those who know more than I do (almost everybody?), PLEASE feel free 
to contribute to this thread and share some knowledge, and PLEASE correct 
any errors.


Yes, please.

BTW: I just came across 
https://discuss.linuxcontainers.org/t/using-static-ips-with-lxd/1291/5 
which suggests that I should create an LXD-managed bridge, rather than use 
the existing bridge which LXC is using.
-- Mike Eager


Re: [lxc-users] LXD static IP in container

2020-02-11 Thread Michael Eager

On 2/8/20 1:32 PM, Mike Wright wrote:

On 2/6/20 8:29 AM, Michael Eager wrote:

Thanks.  I had tried this, but it didn't appear to work.  I just tried
it again and got it to work.

I assume that I can move the eth0 definition back to the profile,
without the ipv4.address specification.

https://lxd.readthedocs.io/en/latest/instances/#type-nic

Do searches on dhcp and static.

When dealing with device type=nic, address assignment depends on the nictype:

if nictype=bridged, ipv4.address is assigned via DHCP (a static lease)
if nictype=routed,  ipv4.address is assigned as a static address

Maybe that will clear up some of the confusion.
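
As a concrete sketch (the container, parent, and address values are 
placeholders):

# bridged: the address becomes a static DHCP lease on the parent bridge
lxc config device add c1 eth0 nic nictype=bridged parent=lxdbr0 ipv4.address=10.0.3.10

# routed: the address is applied statically, no DHCP involved
lxc config device add c1 eth0 nic nictype=routed parent=enp3s0 ipv4.address=192.0.2.10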


I'm trying to configure LXD containers, not LXC.  LXC containers are
working correctly.

There's still a lot of confusion.  :-/

If nictype=bridged is set in the profile, then a container gets two IP
addresses.  One from DHCP when the container is launched, the second is
a static IP when the container configures the NIC.

If nictype=routed, only the static IP is set.  eth0 is present in the
container, but there is no network connectivity.

If nictype=macvlan, "lxc list" shows that the container has an IP
address from DHCP, but "nmcli connection show" does not display eth0
under DEVICE.  "ip addr" does show eth0, but "ifup eth0" says no device
exists.  (I'm really confused about this; dmesg shows "eth0 renamed from
mac...")

If nictype=ipvlan, an IP address is obtained using DHCP, but no eth0
device appears in the container (i.e., nmcli shows no device, ifup
fails.)  There is network connectivity.

[There's some deja vu here.  I had a similar problem using LXC about a
year ago, where the container was getting both a DHCP and a static IP.  I
don't recall how I fixed that problem.  I don't see anything in
lxc.conf or in the container configuration.]


