[lxc-users] LXC 1.0.3 has been released!

2014-04-08 Thread Stéphane Graber
Hello everyone,

The third LXC 1.0 bugfix release is now out!

The full announcement and changelog may be found at:
https://linuxcontainers.org/news/

And as usual our tarballs may be downloaded from:
https://linuxcontainers.org/downloads/


As a reminder, LXC upstream is planning on maintaining the LXC 1.0
branch with frequent bugfix and security updates until April 2019; this
is the third of many such releases.


Stéphane Graber
On behalf of the LXC development team



Re: [lxc-users] Nested containers (cont'd)

2014-04-08 Thread Nels Nelson
Greetings, Serge,

> Please show us your minimal template

Certainly.  It is based on the lxc-sshd template, substituting a dummy
daemon for sshd.

https://gist.github.com/nelsnelson/10189332

I am only using a template at all because of this bug:
https://github.com/lxc/lxc/issues/179

-Nels

Re: [lxc-users] Nested containers (cont'd)

2014-04-08 Thread Serge Hallyn
Quoting Nels Nelson (nels.n.nel...@gmail.com):
> So I attempted to follow Stephane's instructions here:
> https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/
> 
> In order to create some nested containers.
> 
> It is important to note that Stephane's blog does not indicate actually
> starting a nested container, but only creating it.
> 
> Also, I can create containers with and without using the apparmor
> configuration option that he specified as being necessary.
> 
> https://gist.github.com/nelsnelson/10174120
> 
> However, as you can see in the gist, I still get this error when trying to
> start the inner container with and without the apparmor option set:
> 
> lxc-start: cgroupfs failed to detect cgroup metadata
> lxc-start: failed initializing cgroup support
> lxc-start: failed to spawn 'inner'
> 
> Again, just to be clear:
> 
> $ uname -sr
> Linux 3.12.15
> $ cat /etc/os-release
> PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
> $ aptitude search apparmor
> i A libapparmor1   - changehat AppArmor library
> $ lxc-info --version
> 1.0.2
> 
> And my lxc has been compiled with apparmor enabled.

Please show us your minimal template.  I couldn't find it under

https://github.com/nelsnelson/lxc/tree/master/templates


[lxc-users] Nested containers (cont'd)

2014-04-08 Thread Nels Nelson
So I attempted to follow Stephane's instructions here in order to create
some nested containers:
https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/

It is worth noting that Stephane's blog post only shows creating a nested
container, not actually starting one.

Also, I can create containers with and without using the apparmor
configuration option that he specified as being necessary.
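
For reference, the option in question, as I read his post, is this line in
the *outer* container's config (copied from my reading of the blog post, so
treat it as a sketch and correct me if I have it wrong):

    lxc.aa_profile = lxc-container-default-with-nesting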

https://gist.github.com/nelsnelson/10174120

However, as you can see in the gist, I still get this error when trying to
start the inner container with and without the apparmor option set:

lxc-start: cgroupfs failed to detect cgroup metadata
lxc-start: failed initializing cgroup support
lxc-start: failed to spawn 'inner'
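
If it helps narrow things down, my understanding of that first error is that
lxc-start cannot find any mounted cgroup hierarchy from inside the outer
container.  A quick check (just a sketch, run as root in the outer
container):

    # list the cgroup hierarchies visible in here; if the first command
    # prints nothing, that would explain the "cgroup metadata" error
    grep cgroup /proc/self/mounts
    cat /proc/self/cgroup

and I believe adding lxc.mount.auto = cgroup to the outer container's
config is the usual way to get those hierarchies mounted inside it.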

Again, just to be clear:

$ uname -sr
Linux 3.12.15
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
$ aptitude search apparmor
i A libapparmor1   - changehat AppArmor library
$ lxc-info --version
1.0.2

And my lxc has been compiled with apparmor enabled.

Thanks,
-Nels

Re: [lxc-users] Screen corruption when starting Ubuntu container

2014-04-08 Thread Serge Hallyn
Quoting Torste Aikio (zok...@gmail.com):
> Hello,
> 
> I'm trying out LXC containers on my desktop computer running Arch Linux.
> When I start an Ubuntu container my screen flashes black and some
> blinking artifacts appear in the top-left corner which persist even after I

I think udevadm trigger --action=add in the container is causing the
host devices to be reset.  If you can't confine /sys and /proc access
through apparmor or selinux, then I'd edit /etc/init/udevtrigger.conf.
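
For example (untested sketch, assuming a stock Upstart-based Ubuntu guest):

    # inside the container: keep the udevtrigger job from replaying
    # device add events at container boot
    echo manual > /etc/init/udevtrigger.override

or just comment out the udevadm trigger call in udevtrigger.conf itself.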



Re: [lxc-users] lxc_monitor exiting, but not cleaning monitor-fifo?

2014-04-08 Thread Dwight Engen
On Fri, 04 Apr 2014 22:22:05 +0200
Florian Klink  wrote:

> Am 02.04.2014 16:42, schrieb Dwight Engen:
> > On Tue, 01 Apr 2014 22:15:25 +0200
> > Florian Klink  wrote:
> > 
> >> Am 01.04.2014 01:49, schrieb Dwight Engen:
> >>> On Mon, 31 Mar 2014 23:18:13 +0200
> >>> Florian Klink  wrote:
> >>>
>  Am 31.03.2014 21:13, schrieb Dwight Engen:
> > On Mon, 31 Mar 2014 20:34:15 +0200
> > Florian Klink  wrote:
> >
> >> Am 31.03.2014 20:10, schrieb Dwight Engen:
> >>> On Sat, 29 Mar 2014 23:39:33 +0100
> >>> Florian Klink  wrote:
> >>>
>  Hi,
> 
>  when running multiple lxc actions in a row using the command
>  line tools, I sometimes observe the following state:
> 
> 
>  - lxc-monitord is not running anymore
>  - /run/lxc/var/lib/lxc/monitor-fifo still exists, but is
>  "refusing connection"
> 
>  In the logs, I then see the following:
> 
> 
>  lxc-start 1395671045.703 ERROR lxc_monitor - connect : backing off 10
>  lxc-start 1395671045.713 ERROR lxc_monitor - connect : backing off 50
>  lxc-start 1395671045.763 ERROR lxc_monitor - connect : backing off 100
>  lxc-start 1395671045.864 ERROR lxc_monitor - connect : Connection refused
> 
> 
>  ... and the command fails.
> >>>  
> >>> The only time I've seen this happen is if lxc-monitord is hard
> >>> killed so it doesn't have a chance to clean up and remove the
> >>> socket.
> >>
> >> Here, it's happening quite frequently. However, the script
> >> never kills lxc-monitord on its own; it just tries to detect
> >> and fix this state by removing the socket file...
> >
> > Right, removing the socket file makes it so another lxc-monitord
> > will start, but the question is why is the first one exiting
> > without cleaning up? Can you reliably reproduce it at will? If
> > so then maybe you could attach an strace to lxc-monitord and
> > see why it is exiting.
> 
>  I was so far not successful in reproducing the bug while having
>  an strace running. :-( But I'll continue to try!
> >>
> >> Success :-) I managed to get an strace while trying to reproduce
> >> the bug. I gzipped and attached it to this mail.
> >>
> >> It's the output of strace -f -s 200 /usr/lib/lxc/lxc-monitord
> >> /var/lib/lxc /run/lxc/var/lib/lxc/monitor-fifo &> strace_output.txt
> >>
> >> I fired a bunch of lxc-starts and lxc-stops in a row, then stopped my
> >> script and waited for lxc-monitord (and strace too) to stop.
> >>
> >> Then I started my script again and had the "leftover monitor-fifo
> >> state".
> > 
> > Unfortunately, I don't think that strace shows the problem. It
> > looks to me like a normal exit with a successful
> > unlink("/run/lxc//var/lib/lxc/monitor-fifo") = 0 right near the end.
> > 
> > You can't really run monitord by hand like that since it is
> > expecting a pipe fd as argv[2]. That's why I was suggesting
> > attaching to it. So something like:
> > 
> > lxc-start 
> > lxc-monitor -n '.*'
> > 
> > in another terminal:
> > ps aux |grep monitord  -> find the pid of lxc-monitord
> > strace -v -t -o straceout.txt -p 
> > 
> > and then do whatever you do to make things fail :)
> 
> I was not able to get an strace of the bug. I think it is only
> triggered by a lot of lxc-monitord start/stop traffic ;-)
> 
> > 
> >
> >>>
> 
>  A possible workaround would be checking for non-running
>  lxc-monitord process but existing monitor-fifo file then
>  removing the fifo if it exists before running the next lxc
>  command, but that's ugly ;-)
> >>>
> >>> Is there a good non-racy way to do this? I guess monitord
> >>> could write its pid in $LXCPATH and we could kill(pid, 0) it. 
> 
>  I also think that lxc should be able to recover from this problem
>  automatically.
> >>>
> >>> I agree, though I would like to understand the root cause. Can you
> >>> try out the attached patch? I think it will cure your issues.
> >>>
> >>
> >> Thanks for the patch! Just tell me if you need more information for
> >> the strace above. If not, I'll happily apply the patch :-)
> > 
> > You can try the patch to see if it solves your issue, though I'd
> > still like to understand why it's happening in the first place. I
> > may rework the patch based on Serge's suggestion, but it'd be nice
> > to know if the one I sent does fix what you are seeing. It worked
> > for all the hard-kill cases I tried.
> 
> Both patches, the pidfile version and the reworked version, fixed my
> problem. So I'm very happy with it :-)
> 
> 
> Will this patch also go to the stable-1.0 branch?
> I'd really like to see this fixed in the 1.0.3 release ;-)

Looks like Stéphane did pull it onto stable so you should be good.
Thanks for trying to debug/strace it. I still don't know why this is
happening.
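
For anyone who wants the interim workaround discussed above in script form,
something along these lines (rough sketch; checking by process name is
still racy, which is why a pidfile in $LXCPATH would be nicer):

    fifo=/run/lxc/var/lib/lxc/monitor-fifo
    if [ -e "$fifo" ] && ! pgrep -x lxc-monitord >/dev/null; then
        # monitord is gone but its fifo was left behind; remove the stale
        # fifo so the next lxc command can spawn a fresh monitord
        rm -f "$fifo"
    fi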

[lxc-users] Screen corruption when starting Ubuntu container

2014-04-08 Thread Torste Aikio
Hello,

I'm trying out LXC containers on my desktop computer running Arch Linux.
When I start an Ubuntu container my screen flashes black and some blinking
artifacts appear in the top-left corner, which persist even after I stop
the container (screenshot: http://i.imgur.com/XgZE6uz.png ). I think
this might have something to do with the Ubuntu system trying to take over
VTs or something along those lines, but I'm not knowledgeable enough to
debug this further. Could someone help me prevent this from happening? Here
is the console output of lxc-start, with some bluetooth-related noise
filtered out:

# lxc-start --name=ubu_lpc
<4>init: hostname main process (4) terminated with status 1
<4>init: hwclock main process (7) terminated with status 77
<4>init: ureadahead main process (8) terminated with status 5
<4>init: udev-fallback-graphics main process (48) terminated with status 1
<30>udevd[153]: starting version 175
<4>init: failsafe main process (136) killed by TERM signal
TERM environment variable not set.
<4>init: plymouth-upstart-bridge main process (231) terminated with status 1
* Asking all remaining processes to terminate...
<4>init: upstart-udev-bridge main process ended, respawning
<4>init: plymouth main process (12) killed by TERM signal
...done.
<4>init: udev main process ended, respawning
<4>init: dbus main process ended, respawning
<4>init: rsyslog main process ended, respawning
<4>init: Disconnected from system bus
<30>udevd[318]: starting version 175
TERM environment variable not set.
<4>init: plymouth-upstart-bridge main process (380) terminated with status 1
* Killing all remaining processes...
<4>init: udev main process (318) killed by KILL signal
<4>init: udev main process ended, respawning
<4>init: upstart-udev-bridge main process (322) killed by KILL signal
<4>init: upstart-udev-bridge main process ended, respawning
...fail!
<4>init: rsyslog main process (334) killed by KILL signal
<4>init: rsyslog main process ended, respawning
<4>init: dbus main process (377) killed by KILL signal
<4>init: dbus main process ended, respawning
* Will now switch to single-user mode
<4>init: Disconnected from system bus
<30>udevd[506]: starting version 175
TERM environment variable not set.
<4>init: plymouth-stop pre-start process (514) terminated with status 1
<4>init: plymouth-upstart-bridge main process (522) terminated with status 1
groups: cannot find name for group ID 19
root@ubu_lpc:~#

I'm booting into single-user mode (DEFAULT_RUNLEVEL=1 in
/etc/init/rc-sysinit.conf) to minimize moving parts; the same thing happens
with the default runlevel (2/multi-user).

/Torste Aikio