Re: [lxc-users] BIND isn't recognizing container CPU limits

2017-08-03 Thread Serge E. Hallyn
BIND is probably looking at /sys/bus/cpu/drivers/processor/?
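
A quick way to check is to compare the CPU counts from the different sources a
process might consult inside the container, e.g. (paths are assumptions and
depend on distro and cgroup layout):

---
#!/bin/sh
# Compare the CPU counts a program might see inside the container
echo "lxcfs /proc/cpuinfo:   $(grep -c '^processor' /proc/cpuinfo)"
echo "nproc (affinity mask): $(nproc)"
echo "sysfs online CPUs:     $(cat /sys/devices/system/cpu/online)"   # not namespaced
# cgroup v1 cpuset, if mounted at the usual path (assumption)
[ -f /sys/fs/cgroup/cpuset/cpuset.cpus ] && \
  echo "cpuset.cpus:           $(cat /sys/fs/cgroup/cpuset/cpuset.cpus)"
---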

Quoting Joshua Schaeffer (jschaeffer0...@gmail.com):
> I saw this in my log file when I start BIND9 and was a little concerned, 
> since I limit the container to 2 CPUs and this is an unprivileged container:
> 
> Aug  2 16:04:39 blldns01 named[320]: found 32 CPUs, using 32 worker 
> threads
> Aug  2 16:04:39 blldns01 named[320]: using 16 UDP listeners per interface
> Aug  2 16:04:39 blldns01 named[320]: using up to 4096 sockets
> 
> From the container:
> 
> root@blldns01:~# cat /proc/cpuinfo | grep -c processor
> 2
> 
> From the host:
> 
> lxduser@blllxd01:~$ lxc config get blldns01 limits.cpu
> 2
> 
> Why would BIND be able to see all the cores of the host? I can certainly 
> limit BIND to using fewer threads, but it shouldn't be able to see that many 
> cores in the first place. I'm using LXD 2.15.
> 
> Thanks,
> Joshua Schaeffer


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] BIND isn't recognizing container CPU limits

2017-08-03 Thread Joshua Schaeffer
I saw this in my log file when I start BIND9 and was a little concerned, since 
I limit the container to 2 CPUs and this is an unprivileged container:

Aug  2 16:04:39 blldns01 named[320]: found 32 CPUs, using 32 worker threads
Aug  2 16:04:39 blldns01 named[320]: using 16 UDP listeners per interface
Aug  2 16:04:39 blldns01 named[320]: using up to 4096 sockets

From the container:

root@blldns01:~# cat /proc/cpuinfo | grep -c processor
2

From the host:

lxduser@blllxd01:~$ lxc config get blldns01 limits.cpu
2

Why would BIND be able to see all the cores of the host? I can certainly limit 
BIND to using fewer threads, but it shouldn't be able to see that many cores in 
the first place. I'm using LXD 2.15.
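
(For the record, capping the thread count explicitly would look something like
this; named's -n flag sets the number of worker threads, and the Debian/Ubuntu
defaults file shown is an assumption:)

---
# /etc/default/bind9  (path and the stock "-u bind" option are assumptions)
OPTIONS="-u bind -n 2"
---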

Thanks,
Joshua Schaeffer
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Ron Kelley
We have implemented something similar to this using VXLAN (outside the scope of 
LXC).

Our setup: 6x servers colocated in the data center running LXD 2.15 - each 
server with 2x NICs: nic(a) for management and nic(b) for VXLAN traffic.

* nic(a) is strictly used for all server management tasks (lxd commands)
* nic(b) is used for all VXLAN network segments


On each server, we provision ethernet interface eth1 with a private IP address 
(e.g. 172.20.0.x/24) and run the following script at boot to provision the 
VXLAN interfaces (via multicast):
---
#!/bin/bash

# Script to configure VXLAN networks (multicast group 239.0.0.1 over eth1)
ACTION="$1"

case $ACTION in
  up)
    # Route the multicast group out eth1, then create and bring up VXLAN IDs 1101-1130
    ip -4 route add 239.0.0.1 dev eth1
    for i in {1101..1130}; do
      ip link add vxlan.${i} type vxlan group 239.0.0.1 dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up
    done
    ;;
  down)
    # Tear down the VXLAN interfaces and remove the multicast route
    ip -4 route del 239.0.0.1 dev eth1
    for i in {1101..1130}; do
      ip link set vxlan.${i} down && ip link del vxlan.${i}
    done
    ;;
  *)
    echo "Usage: ${0} up|down"; exit 1
    ;;
esac
---

To get the containers talking, we simply attach each container to its respective 
VXLAN interface via the “lxc network attach” command, like this:  
/usr/bin/lxc network attach vxlan.${VXLANID} ${HOSTNAME} eth0 eth0.
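
For example, with a hypothetical container "web01" on VXLAN segment 1105:

---
# Attach the container to segment 1105, then check the NIC from inside it
lxc network attach vxlan.1105 web01 eth0 eth0
lxc exec web01 -- ip addr show eth0
---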

We have single-armed (i.e. eth0-only) containers that live exclusively behind a 
VXLAN interface, and we have dual-armed containers (eth0 and eth1) that act as 
firewall/NAT gateways for a VXLAN segment.
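
Internally those NAT containers are nothing exotic; a minimal sketch, with the
interface roles assumed (eth0 upstream, eth1 facing the VXLAN segment):

---
# Run inside the dual-armed container (assumed roles: eth0 = upstream, eth1 = VXLAN side)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
---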

It took a while to get it all working, but it works great.  We can move 
containers anywhere in our infrastructure without issue. 

Hope this helps!



-Ron




> On Aug 3, 2017, at 8:05 AM, Tomasz Chmielewski  wrote:
> 
> I think Fan is single-server only and/or won't cross different networks.
> 
> You may also take a look at https://www.tinc-vpn.org/
> 
> Tomasz
> https://lxadm.com
> 
> On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
>  wrote: 
> 
>> Hi Amblard,
>> 
>> I have never used it, but this may be worth a look to solve your
>> problem:
>> 
>> https://wiki.ubuntu.com/FanNetworking
>> 
>> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
>> wrote:
>> 
>> Hello,
>> 
>> I am deploying 10+ bare metal servers to serve as hosts for containers
>> managed through LXD.
>> As the number of containers grows, managing inter-container communication
>> across different hosts becomes difficult and needs to be streamlined.
>> 
>> The goal is to set up a 192.168.0.0/24 network over which containers
>> could communicate regardless of their host. The solutions I looked at
>> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
>> bridge.driver: openvswitch configuration for LXD.
>> Note: the bare metal servers are hosted on different physical networks, and
>> use of multicast was ruled out.
>> 
>> An illustration of the goal architecture is similar to the image visible on
>> https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=
>> 6aJRw15HSf&pg=PA197#v=onepage&q&f=false
>> Note: this extract is from a book about LXC, not LXD.
>> 
>> The point that is not clear is:
>> - whether each container needs to have as many veths as there are
>> bare metal hosts, in which case [de]commissioning a bare metal host would
>> require configuration updates to all existing containers (and
>> basically rule out this scenario)
>> - or whether it is possible to "hide" this mesh network at the host
>> level and have a single veth inside each container to access the
>> private network and communicate with all the other containers,
>> regardless of their physical location and regardless of the number of
>> physical peers
>> 
>> Has anyone built such a setup?
>> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
>> automate part of the setup?
>> Online documentation is scarce on the topic so any help would be
>> appreciated.
>> 
>> Regards,
>> Amaury
>> 
>> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
>> [2] https://stackoverflow.com/questions/39094971/want-to-use
>> -the-vlan-feature-of-openvswitch-with-lxd-lxc
>> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
>> tworking-on-ubuntu-16-04-lts/
>> 
>> 

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Fajar A. Nugraha
On Thu, Aug 3, 2017 at 9:05 PM, Fajar A. Nugraha  wrote:
> On Thu, Aug 3, 2017 at 11:46 AM, Amaury Amblard-Ladurantie
>  wrote:
>> Hello,
>>
>> I am deploying 10+ bare metal servers to serve as hosts for containers
>> managed through LXD.
>> As the number of containers grows, managing inter-container communication
>> across different hosts becomes difficult and needs to be streamlined.
>
>
>> Has anyone built such a setup?
>> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
>> automate part of the setup?
>> Online documentation is scarce on the topic so any help would be
>> appreciated.
>
>
> Short version: 
> https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
> Look for 'tunnel'.


Whoops, looks like you already read that :)

I wonder though, why did you ask 'Does the OVS+GRE setup need to be built
prior to LXD init, or can LXD automate part of the setup?', when the doc
specifically mentions tunnel (including GRE) configuration in LXD.
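
For reference, the GRE tunnel form from that post looks roughly like this
(bridge name, tunnel name and peer addresses below are placeholders):

---
# On host A; repeat on host B with local/remote swapped
lxc network create mybr0 \
  tunnel.peerb.protocol=gre \
  tunnel.peerb.local=10.0.0.1 \
  tunnel.peerb.remote=10.0.0.2
---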

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Fajar A. Nugraha
On Thu, Aug 3, 2017 at 11:46 AM, Amaury Amblard-Ladurantie
 wrote:
> Hello,
>
> I am deploying 10+ bare metal servers to serve as hosts for containers
> managed through LXD.
> As the number of containers grows, managing inter-container communication
> across different hosts becomes difficult and needs to be streamlined.


> Has anyone built such a setup?
> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
> automate part of the setup?
> Online documentation is scarce on the topic so any help would be
> appreciated.


Short version: https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
Look for 'tunnel'.

-- 
Fajar
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Tomasz Chmielewski
I think Fan is single-server only and/or won't cross different networks.

You may also take a look at https://www.tinc-vpn.org/

Tomasz
https://lxadm.com

On Thursday, August 03, 2017 20:51 JST, Félix Archambault 
 wrote: 
 
> Hi Amblard,
> 
> I have never used it, but this may be worth a look to solve your
> problem:
> 
> https://wiki.ubuntu.com/FanNetworking
> 
> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
> wrote:
> 
> Hello,
> 
> I am deploying 10+ bare metal servers to serve as hosts for containers
> managed through LXD.
> As the number of containers grows, managing inter-container communication
> across different hosts becomes difficult and needs to be streamlined.
> 
> The goal is to set up a 192.168.0.0/24 network over which containers
> could communicate regardless of their host. The solutions I looked at
> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
> bridge.driver: openvswitch configuration for LXD.
> Note: the bare metal servers are hosted on different physical networks, and
> use of multicast was ruled out.
> 
> An illustration of the goal architecture is similar to the image visible on
> https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=
> 6aJRw15HSf&pg=PA197#v=onepage&q&f=false
> Note: this extract is from a book about LXC, not LXD.
> 
> The point that is not clear is:
> - whether each container needs to have as many veths as there are
> bare metal hosts, in which case [de]commissioning a bare metal host would
> require configuration updates to all existing containers (and
> basically rule out this scenario)
> - or whether it is possible to "hide" this mesh network at the host
> level and have a single veth inside each container to access the
> private network and communicate with all the other containers,
> regardless of their physical location and regardless of the number of
> physical peers
> 
> Has anyone built such a setup?
> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
> automate part of the setup?
> Online documentation is scarce on the topic so any help would be
> appreciated.
> 
> Regards,
> Amaury
> 
> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
> [2] https://stackoverflow.com/questions/39094971/want-to-use
> -the-vlan-feature-of-openvswitch-with-lxd-lxc
> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
> tworking-on-ubuntu-16-04-lts/
> 
> 
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts

2017-08-03 Thread Félix Archambault
Hi Amblard,

I have never used it, but this may be worth a look to solve your
problem:

https://wiki.ubuntu.com/FanNetworking

On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" 
wrote:

Hello,

I am deploying 10+ bare metal servers to serve as hosts for containers
managed through LXD.
As the number of containers grows, managing inter-container communication
across different hosts becomes difficult and needs to be streamlined.

The goal is to set up a 192.168.0.0/24 network over which containers
could communicate regardless of their host. The solutions I looked at
[1] [2] [3] recommend the use of OVS and/or GRE on the hosts and the
bridge.driver: openvswitch configuration for LXD.
Note: the bare metal servers are hosted on different physical networks, and
use of multicast was ruled out.

An illustration of the goal architecture is similar to the image visible on
https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=
6aJRw15HSf&pg=PA197#v=onepage&q&f=false
Note: this extract is from a book about LXC, not LXD.

The point that is not clear is:
- whether each container needs to have as many veths as there are
bare metal hosts, in which case [de]commissioning a bare metal host would
require configuration updates to all existing containers (and
basically rule out this scenario)
- or whether it is possible to "hide" this mesh network at the host
level and have a single veth inside each container to access the
private network and communicate with all the other containers,
regardless of their physical location and regardless of the number of
physical peers

Has anyone built such a setup?
Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
automate part of the setup?
Online documentation is scarce on the topic so any help would be
appreciated.

Regards,
Amaury

[1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
[2] https://stackoverflow.com/questions/39094971/want-to-use
-the-vlan-feature-of-openvswitch-with-lxd-lxc
[3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne
tworking-on-ubuntu-16-04-lts/


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users