Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-04 Thread Luis Michael Ibarra

My comments between lines.

> On Aug 3, 2017, at 10:22, Ron Kelley  wrote:
> 
> We have implemented something similar to this using VXLAN (outside the scope 
> of LXC).
> 
> Our setup: 6x servers colocated in the data center running LXD 2.15 - each 
> server with 2x NICs: nic(a) for management and nic(b) for VXLAN traffic.
> 
> * nic(a) is strictly used for all server management tasks (lxd commands)
> * nic(b) is used for all VXLAN network segments
> 
> 
> On each server, we provision ethernet interface eth1 with a private IP 
> Address (i.e.: 172.20.0.x/24) and run the following script at boot to 
> provision the VXLAN interfaces (via multicast):
> ---
> #!/bin/bash
> 
> # Script to configure VxLAN networks
> ACTION="$1"
> 
> case $ACTION in
>   up)
>     ip -4 route add 239.0.0.1 dev eth1
>     for i in {1101..1130}; do
>       ip link add vxlan.${i} type vxlan group 239.0.0.1 dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up
>     done
>     ;;
>   down)
>     ip -4 route del 239.0.0.1 dev eth1
>     for i in {1101..1130}; do
>       ip link set vxlan.${i} down && ip link del vxlan.${i}
>     done
>     ;;
>   *)
>     echo "Usage: ${0} up|down"; exit 1
>     ;;
> esac
> ---
> 
> To get the containers talking, we simply assign a container to a respective 
> VXLAN interface via the “lxc network attach” command like this:  
> /usr/bin/lxc network attach vxlan.${VXLANID} ${HOSTNAME} eth0 eth0.
> 
> We have single-armed (i.e.: eth0) containers that live exclusively behind a 
> VXLAN interface, and we have dual-armed containers (eth0 and eth1) that act as 
> firewall/NAT for a VXLAN segment.
> 
> It took a while to get it all working, but it works great.  We can move 
> containers anywhere in our infrastructure without issue. 
> 
> Hope this helps!
> 
> 
> 
> -Ron
> 
> 
> 

I second VXLAN. Check the RFC; it is pretty straightforward [1]. In summary, you need 
a key database to map your remote networks; etcd is one way to implement this, or 
you can use multicast as Ron explained.

[1] https://tools.ietf.org/pdf/rfc7348.pdf
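
Since multicast was ruled out in the original question, here is a minimal sketch of
the unicast alternative: point-to-point VXLAN with static forwarding entries. The
addresses and the VNI below are placeholders, and the per-host peer list is exactly
what a key database such as etcd would keep in sync:
---
# On host A (underlay address 172.20.0.1):
ip link add vxlan1101 type vxlan id 1101 dev eth1 dstport 4789 local 172.20.0.1
bridge fdb append 00:00:00:00:00:00 dev vxlan1101 dst 172.20.0.2
ip link set vxlan1101 up

# On host B (172.20.0.2), mirror the configuration:
ip link add vxlan1101 type vxlan id 1101 dev eth1 dstport 4789 local 172.20.0.2
bridge fdb append 00:00:00:00:00:00 dev vxlan1101 dst 172.20.0.1
ip link set vxlan1101 up

# With more hosts, each VTEP needs one "bridge fdb append ... dst <peer>" entry
# per remote host; the all-zero MAC makes it a flood (BUM traffic) destination.
---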


> 
>> On Aug 3, 2017, at 8:05 AM, Tomasz Chmielewski  wrote:
>> 
>> I think fan is single server only and / or won't cross different networks.
>> 
>> You may also take a look at https://www.tinc-vpn.org/
>> 
>> Tomasz
>> https://lxadm.com
>> 
>>> On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
>>>  wrote: 
>>> 
>>> Hi Amblard,
>>> 
>>> I have never used it, but this may be worth taking a look to solve your
>>> problem:
>>> 
>>> https://wiki.ubuntu.com/FanNetworking
>>> 
>>> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
>>> wrote:
>>> 
>>> Hello,
>>> 
>>> I am deploying more than 10 bare metal servers to serve as hosts for
>>> containers managed through LXD.
>>> As the number of containers grows, managing networking between containers
>>> running on different hosts becomes difficult and needs to be streamlined.
>>> 
>>> The goal is to set up a 192.168.0.0/24 network over which containers
>>> could communicate regardless of their host. The solutions I looked at
>>> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
>>> bridge.driver: openvswitch configuration for LXD.
>>> Note: the bare metal servers are hosted on different physical networks, and
>>> the use of multicast was ruled out.
>>> 
>>> An illustration of the goal architecture is similar to the image visible on
>>> https://books.google.fr/books?id=vVMoDwAAQBAJ=PA168=
>>> 6aJRw15HSf=PA197#v=onepage=false
>>> Note: this extract is from a book about LXC, not LXD.
>>> 
>>> The point that is not clear is:
>>> - whether each container needs as many veth interfaces as there are
>>> bare metal hosts, in which case [de]commissioning a bare metal host would
>>> require a configuration update of every existing container (and would
>>> basically rule out this scenario)
>>> - or whether it is possible to "hide" this mesh network at the host
>>> level and have a single veth inside each container to access the
>>> private network and communicate with all the other containers,
>>> regardless of their physical location and regardless of the number of
>>> physical peers
>>> 
>>> Has anyone built such a setup?
>>> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
>>> automate part of the setup?
>>> Online documentation is scarce on the topic so any help would be
>>> appreciated.
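
To the OVS + GRE question: LXD's managed networks (2.3 and later, see [1] below)
can create the bridge and the GRE tunnels themselves, so each container keeps a
single eth0 attached to the local bridge and only the hosts know about the mesh.
A minimal two-host sketch, assuming LXD 2.3+ with openvswitch installed on both
hosts; the addresses, the bridge name and the tunnel names are placeholders:
---
# On host1 (underlay address 203.0.113.11): managed OVS bridge with a GRE
# tunnel towards host2, carrying the 192.168.0.0/24 overlay subnet.
lxc network create mesh0 \
    bridge.driver=openvswitch \
    ipv4.address=192.168.0.1/24 ipv4.nat=false ipv6.address=none \
    tunnel.host2.protocol=gre \
    tunnel.host2.local=203.0.113.11 \
    tunnel.host2.remote=203.0.113.12

# On host2 (203.0.113.12): same bridge name, tunnel pointing back to host1.
# Only one host should hand out addresses, so no ipv4.address here.
lxc network create mesh0 \
    bridge.driver=openvswitch \
    ipv4.address=none ipv6.address=none \
    tunnel.host1.protocol=gre \
    tunnel.host1.local=203.0.113.12 \
    tunnel.host1.remote=203.0.113.11

# Each container gets a single veth attached to its host's mesh0 bridge:
lxc network attach mesh0 mycontainer eth0 eth0

# Adding or removing a host only touches the hosts' tunnel.* keys, e.g.
# "lxc network set mesh0 tunnel.host3.remote ...", never the containers.
---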
>>> 
>>> Regards,
>>> Amaury
>>> 
>>> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
>>> [2] https://stackoverflow.com/questions/39094971/want-to-use
>>> -the-vlan-feature-of-openvswitch-with-lxd-lxc
>>> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
>>> tworking-on-ubuntu-16-04-lts/
>>> 
>>> 
>>> ___
>>> lxc-users mailing list
>>> lxc-users@lists.linuxcontainers.org
>>> http://lists.linuxcontainers.org/listinfo/lxc-users
>> 

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Ron Kelley
We have implemented something similar to this using VXLAN (outside the scope of 
LXC).

Our setup: 6x servers colocated in the data center running LXD 2.15 - each 
server with 2x NICs: nic(a) for management and nic(b) for VXLAN traffic.

* nic(a) is strictly used for all server management tasks (lxd commands)
* nic(b) is used for all VXLAN network segments


On each server, we provision ethernet interface eth1 with a private IP Address 
(i.e.: 172.20.0.x/24) and run the following script at boot to provision the 
VXLAN interfaces (via multicast):
---
#!/bin/bash

# Script to configure VxLAN networks
ACTION="$1"

case $ACTION in
  up)
    ip -4 route add 239.0.0.1 dev eth1
    for i in {1101..1130}; do
      ip link add vxlan.${i} type vxlan group 239.0.0.1 dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up
    done
    ;;
  down)
    ip -4 route del 239.0.0.1 dev eth1
    for i in {1101..1130}; do
      ip link set vxlan.${i} down && ip link del vxlan.${i}
    done
    ;;
  *)
    echo "Usage: ${0} up|down"; exit 1
    ;;
esac
---

To get the containers talking, we simply assign a container to a respective 
VXLAN interface via the “lxc network attach” command like this:  
/usr/bin/lxc network attach vxlan.${VXLANID} ${HOSTNAME} eth0 eth0.
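
For illustration, a hedged usage example; the script path, the VXLAN ID and the
container name below are placeholders:
---
# Bring the 30 VXLAN interfaces up at boot (e.g. from rc.local or a systemd unit):
/usr/local/sbin/vxlan-net.sh up

# Attach an existing container (here "web01") to segment 1101; its eth0 then
# lives entirely inside that VXLAN:
lxc network attach vxlan.1101 web01 eth0 eth0

# Tear all the interfaces down again:
/usr/local/sbin/vxlan-net.sh down
---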

We have single-armed (i.e.: eth0) containers that live exclusively behind a 
VXLAN interface, and we have dual-armed containers (eth0 and eth1) that act as 
firewall/NAT for a VXLAN segment.

It took a while to get it all working, but it works great.  We can move 
containers anywhere in our infrastructure without issue. 

Hope this helps!



-Ron




> On Aug 3, 2017, at 8:05 AM, Tomasz Chmielewski  wrote:
> 
> I think fan is single server only and / or won't cross different networks.
> 
> You may also take a look at https://www.tinc-vpn.org/
> 
> Tomasz
> https://lxadm.com
> 
> On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
>  wrote: 
> 
>> Hi Amblard,
>> 
>> I have never used it, but this may be worth taking a look to solve your
>> problem:
>> 
>> https://wiki.ubuntu.com/FanNetworking
>> 
>> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
>> wrote:
>> 
>> Hello,
>> 
>> I am deploying more than 10 bare metal servers to serve as hosts for
>> containers managed through LXD.
>> As the number of containers grows, managing networking between containers
>> running on different hosts becomes difficult and needs to be streamlined.
>> 
>> The goal is to set up a 192.168.0.0/24 network over which containers
>> could communicate regardless of their host. The solutions I looked at
>> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
>> bridge.driver: openvswitch configuration for LXD.
>> Note: the bare metal servers are hosted on different physical networks, and
>> the use of multicast was ruled out.
>> 
>> An illustration of the goal architecture is similar to the image visible on
>> https://books.google.fr/books?id=vVMoDwAAQBAJ=PA168=
>> 6aJRw15HSf=PA197#v=onepage=false
>> Note: this extract is from a book about LXC, not LXD.
>> 
>> The point that is not clear is:
>> - whether each container needs as many veth interfaces as there are
>> bare metal hosts, in which case [de]commissioning a bare metal host would
>> require a configuration update of every existing container (and would
>> basically rule out this scenario)
>> - or whether it is possible to "hide" this mesh network at the host
>> level and have a single veth inside each container to access the
>> private network and communicate with all the other containers,
>> regardless of their physical location and regardless of the number of
>> physical peers
>> 
>> Has anyone built such a setup?
>> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
>> automate part of the setup?
>> Online documentation is scarce on the topic so any help would be
>> appreciated.
>> 
>> Regards,
>> Amaury
>> 
>> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
>> [2] https://stackoverflow.com/questions/39094971/want-to-use
>> -the-vlan-feature-of-openvswitch-with-lxd-lxc
>> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
>> tworking-on-ubuntu-16-04-lts/
>> 
>> 
>> ___
>> lxc-users mailing list
>> lxc-users@lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Tomasz Chmielewski
I think fan is single server only and / or won't cross different networks.

You may also take a look at https://www.tinc-vpn.org/

Tomasz
https://lxadm.com

On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
 wrote: 
 
> Hi Amblard,
> 
> I have never used it, but this may be worth taking a look to solve your
> problem:
> 
> https://wiki.ubuntu.com/FanNetworking
> 
> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
> wrote:
> 
> Hello,
> 
> I am deploying more than 10 bare metal servers to serve as hosts for
> containers managed through LXD.
> As the number of containers grows, managing networking between containers
> running on different hosts becomes difficult and needs to be streamlined.
> 
> The goal is to set up a 192.168.0.0/24 network over which containers
> could communicate regardless of their host. The solutions I looked at
> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
> bridge.driver: openvswitch configuration for LXD.
> Note: the bare metal servers are hosted on different physical networks, and
> the use of multicast was ruled out.
> 
> An illustration of the goal architecture is similar to the image visible on
> https://books.google.fr/books?id=vVMoDwAAQBAJ=PA168=
> 6aJRw15HSf=PA197#v=onepage=false
> Note: this extract is from a book about LXC, not LXD.
> 
> The point that is not clear is:
> - whether each container needs as many veth interfaces as there are
> bare metal hosts, in which case [de]commissioning a bare metal host would
> require a configuration update of every existing container (and would
> basically rule out this scenario)
> - or whether it is possible to "hide" this mesh network at the host
> level and have a single veth inside each container to access the
> private network and communicate with all the other containers,
> regardless of their physical location and regardless of the number of
> physical peers
> 
> Has anyone built such a setup?
> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
> automate part of the setup?
> Online documentation is scarce on the topic so any help would be
> appreciated.
> 
> Regards,
> Amaury
> 
> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
> [2] https://stackoverflow.com/questions/39094971/want-to-use
> -the-vlan-feature-of-openvswitch-with-lxd-lxc
> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
> tworking-on-ubuntu-16-04-lts/
> 
> 
> ___
> lxc-users mailing list
> lxc-users@lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users