[lxc-users] Upgrading host and containers: in which order?

2018-03-08 Thread phep

Hi,

Pretty much everything is in the subject line: we have a host running 
Debian Jessie and LXC 1.0 with a handful of containers on the same Debian 
version, all of which we need to upgrade to Debian Stretch with LXC 2.0. By 
the way, both host and containers use systemd as the init system, if that matters.


I'm wondering which migration route I should take: migrate the host first, 
or the containers first?


Actually, I have already had to upgrade one container, and that did not cause 
any major problem, except for this systemd message in its logs:

  systemd-udevd.service: Cannot add dependency job, ignoring: Unit systemd-udevd.service is masked.


Thanks in advance for any advice,

phep
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] Is the order of the options in the configuration file important?

2017-03-09 Thread phep

Hi,

Well, everything is in the subject of this message, actually... ;-).

Are there any options that need to be set before others? I did not find 
anything in the lxc.container.conf manpage, but I'd like to be dead sure, 
since we plan to modularise our container configuration files, making heavy 
use of lxc.include.
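For the record, the kind of layout we have in mind looks like this (file and 
host names are hypothetical, just to illustrate the modularisation):

  # /var/lib/lxc/web01/config
  lxc.include = /etc/lxc/common.conf
  lxc.include = /etc/lxc/network.conf
  lxc.utsname = web01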


Thanks in advance,

phep

Re: [lxc-users] Routable host IPs for containers

2014-09-17 Thread phep

Hi,

On 14/09/2014 at 20:33, Michael H. Warfield wrote:

Another option is to configure a subnet on your laptop, set up your down
dhcp for it, and configure the wlan0 interface for "proxy arp".  That
should work but is also (cough) non-trivial.



You might find some useful elements in this old thread:

https://lists.linuxcontainers.org/pipermail/lxc-users/2014-April/006614.html

Regards,

Patrice

Re: [lxc-users] Is it possible to create a wireless bridge with proxy_arp

2014-04-19 Thread phep

Hi,

On 19/04/2014 at 19:50, Michael H. Warfield wrote:

No.  Well, maybe.  And maybe looks pretty grim.  How much of a masochist
are you?  I looked into this off and on over several years and just


As I mentioned, I do this with KVM guests with little effort. I have no 
inclination for suffering, indeed!



should work as well.  You could manually set up a tap tunnel at each
end, even without OpenVPN, and manually tunnel it.  If you set up tap
devices between the host and access point, you're then tunneling
everything under WiFi client connection and the AP only sees the client
MAC address but the tap devices and tunnel deal with the other devices.


While I'm not a networking expert, this is exactly how I understand what 
I'm doing.



I really need to read that referenced article to comment further on


Sorry, I noticed the site was down only after sending my message. Actually, 
I wrote a short note for myself about that blog entry some time ago, keeping 
the original URL. Here is my summary (the title is misleading, since no 
bridge is actually involved), in case it helps:



Bridging with a wireless link with proxy_arp


This is an ultra-short version of
http://blog.ericwhite.ca/articles/2011/04/creating-a-wireless-bridge/

This installation requires setting a static IP for both host and guest.
We'll assume that:

- the host has 192.168.0.153
- the guest has 192.168.0.203

Keep the host's `/etc/network/interfaces` in a basic state::

  auto wlan0
  iface wlan0 inet static
      wpa-driver wext
      wpa-ssid SOMESSID
      wpa-psk blahblahblah
      address 192.168.0.153
      netmask 255.255.255.0
      broadcast 192.168.0.255
      gateway 192.168.0.1

Then add a tap interface::

  # ip tuntap add dev tap0 mode tap

Enable proxy_arp on both devices::

  # echo 1 > /proc/sys/net/ipv4/conf/wlan0/proxy_arp
  # echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp

Add the host IP address to the tap interface::

  # ip addr add 192.168.0.153 dev tap0

Finish configuring the tap interface::

  # ip link set tap0 up
  # ip link set tap0 promisc on

Then add a route from the host to the guest::

  # ip route add 192.168.0.203 dev tap0

All that remains now is to start the guest.


what they were doing but, regardless, that's not an LXC issue.  That's
an outer host issue to be set up.


Yes, it is also an LXC issue. What my summary does not describe is that the 
KVM guest is started with something like this (yes, I avoid libvirt and 
other superfluous layers):


  # kvm -net nic,model=virtio -net tap,script=no,downscript=no,ifname=tap0 \
    blah blah blah


This is how the guest interface is associated with the tap interface, and 
this is precisely the step I'm missing with LXC! ;-).
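(One avenue I have not tried: LXC can move an existing network device into 
the container's namespace with lxc.network.type = phys, so pointing it at 
the pre-created tap might do the trick. An untested sketch of the 
container-side config:

  lxc.network.type = phys
  lxc.network.link = tap0
  lxc.network.flags = up

Again, purely speculative on my side.)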



nicey nicey with bridges in general.  That means you're going to have to
manually deal with wpa_supplicant and iwconfig yourself before building
the bridge and adding the interface to it.  That's all before you can
even come close to LXC.


I don't play with NetworkManager; I use Debian's ifupdown. Moreover, I have 
a set of personal ad-hoc scripts that let me set up my network configuration 
(host and guests) with two or three commands depending on where I am. 
Setting up in a new place is generally nothing more than copying and 
adapting a set of configuration files.



Where it comes to WiFi, you're better off going with a NAT'ed
connection.


To be honest, 99% of the time I'd be fine with a NAT'ed setup (for what I 
need to do with my KVM or LXC guests), but well, you know how it goes...
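(For completeness: the NAT'ed setup is what a veth interface attached to a 
NAT bridge gives. A minimal container-side sketch, assuming a bridge named 
lxcbr0 already exists on the host:

  lxc.network.type = veth
  lxc.network.link = lxcbr0
  lxc.network.flags = up

On Debian, unlike Ubuntu, such a bridge has to be created by hand.)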


Regards,

Patrice

[lxc-users] Is it possible to create a wireless bridge with proxy_arp

2014-04-19 Thread phep

Hi,

While it is easy to put the containers on the same network as the host using 
veth interfaces and a bridge on the host, this does not work when the host 
is connected to the network through a wireless device.


Using KVM guests, one may overcome the difficulty by creating a tap device 
for the guest, enabling proxy_arp on both the tap and wireless devices, then 
adding the host IP address to the tap interface and a route from the host to 
the guest (see e.g. 
http://blog.ericwhite.ca/articles/2011/04/creating-a-wireless-bridge/).


I've googled around for some time tonight to find out whether this can be 
set up with LXC containers, but it does not seem so. Am I wrong?


If not, are there any plans for something like this?

This would be very handy on a laptop.

Cheers,

Patrice

Re: [lxc-users] Problems with ns_cgroup; container fails to start

2014-04-09 Thread phep

Thank you for your quick answer, Serge.

On 09/04/2014 at 17:57, Serge Hallyn wrote:

Quoting phep (phep-li...@teletopie.net):

I cannot start LXC containers on my Debian testing (jessie) laptop
anymore. This is how everything started:

   $ lxc-start -n test-lxc -f /var/lib/lxc/test-lxc/config
   lxc-start: no ns_cgroup option specified
   lxc-start: failed to spawn 'test-lxc'
   lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup/cpuset//lxc/test-lxc'


Try a newer lxc.  You don't actually need ns_cgroup, but in the version
you have it is objecting because it finds neither the ns cgroup *nor* the
cgroup.clone_children file.  The latter *should* exist (i.e.
/sys/fs/cgroup/cpuset/cgroup.clone_children), so it's probably a bug
in that particular lxc version.


Actually, my laptop is running 0.9.0-alpha3, and when running lxc-checkconfig 
I get a red line:

Cgroup namespace: required

While on a Debian stable server at work (0.8.0) I get a green one:
Cgroup clone_children flag: enabled

I don't know why those differences show up.

1.0.0 should transition to Debian testing in a week or so, and I will wait 
until then (although I did not see any relevant bug reports or Debian 
changelog entries).


Thanks again,

Patrice


[lxc-users] Problems with ns_cgroup; container fails to start

2014-04-09 Thread phep

Hi,

I cannot start LXC containers on my Debian testing (jessie) laptop anymore. 
This is how everything started:


  $ lxc-start -n test-lxc -f /var/lib/lxc/test-lxc/config
  lxc-start: no ns_cgroup option specified
  lxc-start: failed to spawn 'test-lxc'
  lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup/cpuset//lxc/test-lxc'


These are the mounted cgroups, according to mount:

  $ mount | grep cgroup
  cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
  cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
  cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
  cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
  cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
  cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
  cgroup_root on /sys/fs/cgroup type tmpfs (rw,relatime)
  cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
  cgroup_memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
  cgroup_devices on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)


and this is /proc/cgroups content:

  $ cat /proc/cgroups
  #subsys_name  hierarchy  num_cgroups  enabled
  cpuset  2   1   1
  cpu 3   1   1
  cpuacct 4   1   1
  memory  5   1   1
  devices 6   1   1
  freezer 7   1   1
  blkio   0   1   1
  perf_event  0   1   1
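For what it's worth, the hierarchy column above is telling: blkio and 
perf_event report hierarchy 0, i.e. they are enabled in the kernel but not 
attached to any mounted hierarchy. A quick one-liner to list only the 
controllers that are actually mounted:

```shell
# Print controllers attached to a mounted hierarchy (hierarchy id != 0).
# The first line of /proc/cgroups is the column header, hence NR > 1.
awk 'NR > 1 && $2 != 0 { print $1 }' /proc/cgroups
```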

and, to be complete:
  $ tree /sys/fs/cgroup/
  /sys/fs/cgroup/
  |-- devices
  |   |-- cgroup.clone_children
  |   |-- cgroup.event_control
  |   |-- cgroup.procs
  |   |-- cgroup.sane_behavior
  |   |-- devices.allow
  |   |-- devices.deny
  |   |-- devices.list
  |   |-- notify_on_release
  |   |-- release_agent
  |   `-- tasks
  `-- memory
      |-- cgroup.clone_children
      |-- cgroup.event_control
      |-- cgroup.procs
      |-- cgroup.sane_behavior
      |-- memory.failcnt
      |-- memory.force_empty
      |-- memory.limit_in_bytes
      |-- memory.max_usage_in_bytes
      |-- memory.move_charge_at_immigrate
      |-- memory.oom_control
      |-- memory.pressure_level
      |-- memory.soft_limit_in_bytes
      |-- memory.stat
      |-- memory.swappiness
      |-- memory.usage_in_bytes
      |-- memory.use_hierarchy
      |-- notify_on_release
      |-- release_agent
      `-- tasks

As for the kernel cmdline:

  $ cat /proc/cmdline
  BOOT_IMAGE=/boot/vmlinuz-3.13-1-686-pae root=UUID=984f719f-8c9c-4686-8218-ee9657c96204 ro cgroup_enable=memory quiet


My /etc/fstab does not contain cgroup entries any more, since they tended to 
conflict with libvirtd (I use KVM virtual machines at times). The cgroups, 
AFAICT, are created by the libvirt-bin and cgroupfs-mount packages' init.d 
scripts.


Trying to create the ns cgroup manually (after reading the libvirt-bin 
init.d script) also fails:


  $ mkdir /sys/fs/cgroup/ns
  $ mount -t cgroup -o rw,nosuid,nodev,noexec,relatime,ns "cgroup_ns" "/sys/fs/cgroup/ns"

  mount: special device cgroup_ns does not exist

I did some googling, to no avail.

I wonder if this could come from some conflict between cgroup handling in 
lxc vs. libvirt & co...


Would anybody have a clue?

Thanks in advance,

Patrice


Re: [lxc-users] Cannot start container when lxc.cgroup.memory.* in config

2014-03-07 Thread phep

On 07/03/2014 at 11:55, Jäkel, Guido wrote:

Your lxc-checkconfig says:

>Cgroup memory controller: enabled

But in the output of the mount command, the "memory" keyword is missing. You 
have to look for the reason.


Thanks Guido.

Actually, the Debian maintainer's documentation is quite misleading, as it 
states that "If you use a Debian wheezy kernel or newer, all of the features 
are enabled, including the resource controller."


The fact is you won't have this cgroup unless you add 
"cgroup_enable=memory" to the kernel command line, as stated in more recent 
versions of the Debian LXC package.
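For the record, on a stock wheezy the flag goes into /etc/default/grub, 
followed by update-grub and a reboot; something like this (adapt to your 
existing options):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory"

  # then, as root:
  # update-grub && reboot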


This did the trick for the lxc.cgroup.memory.limit_in_bytes control. 
Unfortunately, the wheezy Debian kernel does not enable CONFIG_MEMCG_SWAP 
(it does not even show up in the kernel config; maybe this was not yet 
implemented in 3.2 kernels), so memsw doesn't work :-(.


Anyway, my problem is mostly solved! Thanks.

phep

[lxc-users] Cannot start container when lxc.cgroup.memory.* in config

2014-03-07 Thread phep

Hi,

My host machine is an (admittedly) ageing AMD Athlon(TM) XP 2200+ on an 
Asustek A7V8X-X motherboard, with 1.5 GB of RAM and a 3 GB swap partition. 
The box is running a freshly updated Debian wheezy 7.4.


The host's fstab has the following line:

  cgroup  /sys/fs/cgroup  cgroup  defaults  0  0

The mount command displays this for cgroup :

  cgroup on /sys/fs/cgroup type cgroup (rw,relatime,perf_event,blkio,net_cls,freezer,devices,cpuacct,cpu,cpuset,clone_children)


lxc-checkconfig displays:
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-3.2.0-4-686-pae
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled


I lxc-created a wheezy Debian container with no problem using the "debian" 
template. I first ran the container a couple of times with the default basic 
configuration, augmented only to get a bridged veth interface with a static 
address. Everything went fine then.


My problem is that if I have either or both of the following two lines in 
the configuration file:


lxc.cgroup.memory.limit_in_bytes       = 500M
lxc.cgroup.memory.memsw.limit_in_bytes = 3G

I cannot start the container anymore. This is what I get :

# lxc-start -n uc23 -f /var/lib/lxc/uc23/config
lxc-start: cgroup is not mounted
lxc-start: failed to setup the cgroups for 'uc23'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'uc23'
lxc-start: Device or resource busy - failed to remove cgroup '/sys/fs/cgroup//lxc/uc23'


After that, /sys/fs/cgroup/lxc/uc23 is actually left in place until I 
restart the container (with the default configuration).


As soon as I comment out both lines, the container happily starts anew... :-/

Could it be that the hardware is too old?

I noticed that when I list the contents of /sys/fs/cgroup, I cannot find any 
"memory" pseudo-directory.


I did some googling, to no avail, alas, and would welcome any pointer...

TIA

phep