Re: [lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-21 Thread Serge Hallyn
Quoting F Dave (felda...@gmail.com):
> Turns out that cgroup was mounted under /cgroup, so I was able to

Yes, the point of the cgfsng driver was to simplify the cgroup code by
requiring now-standard cgroup mountpoints, rather than searching the
system for cgroup mounts.  So $controller must be mounted under
/sys/fs/cgroup/$controller.

The bug (which should be fixed in git head) was that cgfsng was not
falling back to cgfs.c when it did not find the required mountpoints
where it expected them.
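
For reference, on a host without systemd the layout cgfsng expects can be
set up by hand along these lines (an untested sketch; adjust the controller
list to whatever your kernel actually provides):

  # tmpfs to hold the per-controller mountpoints
  mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup_root /sys/fs/cgroup

  # one mountpoint per controller, named after the controller
  for c in cpuset cpu cpuacct memory devices freezer net_cls blkio; do
      mkdir -p /sys/fs/cgroup/$c
      mount -t cgroup -o $c cgroup /sys/fs/cgroup/$c
  done

  # plus the named systemd hierarchy that cgfsng also looks for
  mkdir -p /sys/fs/cgroup/systemd
  mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd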

> create the folder there and mount it. However the container shows the same
> error:
> 
> [root@devhost fs]# mount -t cgroup -o none,name=systemd cgroup
> /cgroup/systemd/
> 
> [root@devhost fs]# grep cg /proc/mounts
> cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
> cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
> cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
> cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
> cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
> cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
> cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
> cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
> cgroup /cgroup/systemd cgroup rw,relatime,name=systemd 0 0
> 
> [root@devhost fs]# lxc-start -F -n node2
> lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
> mountpoint found
> lxc-start: start.c: lxc_spawn: 1079 failed initializing cgroup support
> lxc-start: start.c: __lxc_start: 1329 failed to spawn 'node2'
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
> setting the --logfile and --logpriority options.
> 
> 
> On Thu, Apr 21, 2016 at 11:38 AM, F Dave  wrote:
> 
> > I cloned the master branch and rebuilt it. Did 'make uninstall' for the
> > previous lxc-2.0.0 and installed new version. However it still throws the
> > same error. Also tried to create the systemd folder:
> >
> > [root@devhost lxc]# mkdir -p /sys/fs/cgroup/systemd
> > mkdir: cannot create directory `/sys/fs/cgroup/systemd': No such file or
> > directory
> >
> > [root@devhost lxc]# ll /sys/fs/cgroup/
> > total 0
> >
> > [root@devhost lxc]# ll /sys/fs/
> > total 0
> > drwxr-xr-x. 2 root root 0 Apr 21 07:07 btrfs
> > drwxr-xr-x. 2 root root 0 Apr 20 09:07 cgroup
> > drwxr-xr-x. 4 root root 0 Apr 21 07:07 ext4
> > drwxr-xr-x. 2 root root 0 Apr 21 07:07 selinux
> >
> >
> > On Thu, Apr 21, 2016 at 9:14 AM, Serge Hallyn 
> > wrote:
> >
> >> > cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint
> >> found
> >> > lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
> >> > mountpoint found
> >>
> >> This is known.  You can either run from git head to get the patch which
> >> fixes this, or just make sure to have a name=systemd cgroup controller
> >> mounted on the host.
> >
> >
> >


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-20 Thread Fajar A. Nugraha
Have you ever started lxc on that host successfully before? Or is this a
first attempt?

IIRC on my centos 6.6:
- I needed a newer kernel. Went with kernel-ml
(http://elrepo.org/linux/kernel/el6/x86_64/RPMS/), which had 4.4 back
then.
The new kernel had /sys/fs/cgroup, so I could mount a tmpfs there (via
fstab) and mount the cgroups on top of it with cgconfig. I can't remember
whether lxc works when cgroups are mounted on /cgroup, but it works
correctly with /sys/fs/cgroup.
- For each cgroup, I needed cgroup.clone_children=1 in
/etc/cgconfig.conf (rough sketch below). Otherwise I couldn't start the
container (I forget the exact error message; something about "resource" or
"quota", I think).
- lxc-net (the one that creates lxcbr0) would fail to start with the
original centos kernel. It works fine with 4.4 though.
- lxcfs needs to be installed as well.
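
Roughly, the setup looked something like this (reconstructed, so treat it
as a sketch rather than a verbatim config):

  # /etc/fstab -- a tmpfs to hold the controller mountpoints
  tmpfs  /sys/fs/cgroup  tmpfs  defaults  0 0

  # /etc/cgconfig.conf (libcgroup) -- mount each controller there
  mount {
      cpuset  = /sys/fs/cgroup/cpuset;
      cpu     = /sys/fs/cgroup/cpu;
      cpuacct = /sys/fs/cgroup/cpuacct;
      memory  = /sys/fs/cgroup/memory;
      devices = /sys/fs/cgroup/devices;
      freezer = /sys/fs/cgroup/freezer;
      net_cls = /sys/fs/cgroup/net_cls;
      blkio   = /sys/fs/cgroup/blkio;
  }

  # then, e.g. from rc.local, turn on clone_children for each hierarchy
  for d in /sys/fs/cgroup/*; do echo 1 > "$d/cgroup.clone_children"; done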

That was with 2.0.0-rc5, haven't had time to upgrade yet.

-- 
Fajar

On Thu, Apr 21, 2016 at 7:54 AM, F Dave  wrote:
> Turns out that cgroup was mounted under /cgroup, so I was able to
> create the folder there and mount it. However the container shows the same
> error:
>
> [root@devhost fs]# mount -t cgroup -o none,name=systemd cgroup
> /cgroup/systemd/
>
> [root@devhost fs]# grep cg /proc/mounts
> cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
> cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
> cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
> cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
> cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
> cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
> cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
> cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
> cgroup /cgroup/systemd cgroup rw,relatime,name=systemd 0 0
>
> [root@devhost fs]# lxc-start -F -n node2
> lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
> mountpoint found
> lxc-start: start.c: lxc_spawn: 1079 failed initializing cgroup support
> lxc-start: start.c: __lxc_start: 1329 failed to spawn 'node2'
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
> setting the --logfile and --logpriority options.
>
>
> On Thu, Apr 21, 2016 at 11:38 AM, F Dave  wrote:
>>
>> I cloned the master branch and rebuilt it. Did 'make uninstall' for the
>> previous lxc-2.0.0 and installed new version. However it still throws the
>> same error. Also tried to create the systemd folder:
>>
>> [root@devhost lxc]# mkdir -p /sys/fs/cgroup/systemd
>> mkdir: cannot create directory `/sys/fs/cgroup/systemd': No such file or
>> directory
>>
>> [root@devhost lxc]# ll /sys/fs/cgroup/
>> total 0
>>
>> [root@devhost lxc]# ll /sys/fs/
>> total 0
>> drwxr-xr-x. 2 root root 0 Apr 21 07:07 btrfs
>> drwxr-xr-x. 2 root root 0 Apr 20 09:07 cgroup
>> drwxr-xr-x. 4 root root 0 Apr 21 07:07 ext4
>> drwxr-xr-x. 2 root root 0 Apr 21 07:07 selinux
>>
>>
>> On Thu, Apr 21, 2016 at 9:14 AM, Serge Hallyn 
>> wrote:
>>>
>>> > cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint
>>> > found
>>> > lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
>>> > mountpoint found
>>>
>>> This is known.  You can either run from git head to get the patch which
>>> fixes this, or just make sure to have a name=systemd cgroup controller
>>> mounted on the host.
>>
>>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-20 Thread F Dave
Turns out that cgroup was mounted under /cgroup, so I was able to
create the folder there and mount it. However, the container shows the same
error:

[root@devhost fs]# mount -t cgroup -o none,name=systemd cgroup
/cgroup/systemd/

[root@devhost fs]# grep cg /proc/mounts
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /cgroup/systemd cgroup rw,relatime,name=systemd 0 0

[root@devhost fs]# lxc-start -F -n node2
lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
mountpoint found
lxc-start: start.c: lxc_spawn: 1079 failed initializing cgroup support
lxc-start: start.c: __lxc_start: 1329 failed to spawn 'node2'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
setting the --logfile and --logpriority options.


On Thu, Apr 21, 2016 at 11:38 AM, F Dave  wrote:

> I cloned the master branch and rebuilt it. Did 'make uninstall' for the
> previous lxc-2.0.0 and installed new version. However it still throws the
> same error. Also tried to create the systemd folder:
>
> [root@devhost lxc]# mkdir -p /sys/fs/cgroup/systemd
> mkdir: cannot create directory `/sys/fs/cgroup/systemd': No such file or
> directory
>
> [root@devhost lxc]# ll /sys/fs/cgroup/
> total 0
>
> [root@devhost lxc]# ll /sys/fs/
> total 0
> drwxr-xr-x. 2 root root 0 Apr 21 07:07 btrfs
> drwxr-xr-x. 2 root root 0 Apr 20 09:07 cgroup
> drwxr-xr-x. 4 root root 0 Apr 21 07:07 ext4
> drwxr-xr-x. 2 root root 0 Apr 21 07:07 selinux
>
>
> On Thu, Apr 21, 2016 at 9:14 AM, Serge Hallyn 
> wrote:
>
>> > cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint
>> found
>> > lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
>> > mountpoint found
>>
>> This is known.  You can either run from git head to get the patch which
>> fixes this, or just make sure to have a name=systemd cgroup controller
>> mounted on the host.
>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-20 Thread F Dave
I cloned the master branch and rebuilt it. Did 'make uninstall' for the
previous lxc-2.0.0 and installed the new version. However, it still throws the
same error. I also tried to create the systemd folder:

[root@devhost lxc]# mkdir -p /sys/fs/cgroup/systemd
mkdir: cannot create directory `/sys/fs/cgroup/systemd': No such file or
directory

[root@devhost lxc]# ll /sys/fs/cgroup/
total 0

[root@devhost lxc]# ll /sys/fs/
total 0
drwxr-xr-x. 2 root root 0 Apr 21 07:07 btrfs
drwxr-xr-x. 2 root root 0 Apr 20 09:07 cgroup
drwxr-xr-x. 4 root root 0 Apr 21 07:07 ext4
drwxr-xr-x. 2 root root 0 Apr 21 07:07 selinux


On Thu, Apr 21, 2016 at 9:14 AM, Serge Hallyn 
wrote:

> > cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint
> found
> > lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
> > mountpoint found
>
> This is known.  You can either run from git head to get the patch which
> fixes this, or just make sure to have a name=systemd cgroup controller
> mounted on the host.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-20 Thread Serge Hallyn
> cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint found
> lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
> mountpoint found

This is known.  You can either run from git head to get the patch which
fixes this, or just make sure to have a name=systemd cgroup controller
mounted on the host.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-start fails with "no systemd controller mountpoint found"

2016-04-20 Thread F Dave
Hello,

I have a host system running Oracle Linux 6.7, and I installed lxc 2.0.0 from
the source tarball.

I created a node using:

# lxc-create -n node2 -t download

Distribution: Oracle Linux 6, amd64

I cannot start the node:

[root@devhost ~]# lxc-start -n node2 -l debug -F -o /dev/stdout
  lxc-start 20160420092731.509 INFO lxc_start_ui -
lxc_start.c:main:264 - using rcfile /usr/local/var/lib/lxc/node2/config
  lxc-start 20160420092731.510 WARN lxc_confile -
confile.c:config_pivotdir:1877 - lxc.pivotdir is ignored.  It will soon
become an error.
  lxc-start 20160420092731.512 DEBUG lxc_start -
start.c:setup_signal_fd:289 - sigchild handler set
  lxc-start 20160420092731.514 DEBUG lxc_console -
console.c:lxc_console_peer_default:437 - opening /dev/tty for console peer
  lxc-start 20160420092731.514 DEBUG lxc_console -
console.c:lxc_console_peer_default:443 - using '/dev/tty' as console
  lxc-start 20160420092731.514 DEBUG lxc_console -
console.c:lxc_console_sigwinch_init:142 - 27949 got SIGWINCH fd 9
  lxc-start 20160420092731.514 DEBUG lxc_console -
console.c:lxc_console_winsz:72 - set winsz dstfd:6 cols:238 rows:62
  lxc-start 20160420092731.514 INFO lxc_start -
start.c:lxc_init:488 - 'node2' is initialized
  lxc-start 20160420092731.528 DEBUG lxc_start -
start.c:__lxc_start:1302 - Not dropping cap_sys_boot or watching utmp
  lxc-start 20160420092731.540 DEBUG lxc_conf -
conf.c:instantiate_veth:2613 - instantiated veth 'vethB7WWF4/vethBH76EC',
index is '29'
  lxc-start 20160420092731.540 INFO lxc_cgroup -
cgroup.c:cgroup_init:68 - cgroup driver cgroupfs-ng initing for node2
  lxc-start 20160420092731.549 ERROR lxc_cgfsng -
cgfsng.c:all_controllers_found:431 - no systemd controller mountpoint found
lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller
mountpoint found
  lxc-start 20160420092731.551 ERROR lxc_start -
start.c:lxc_spawn:1079 - failed initializing cgroup support
lxc-start: start.c: lxc_spawn: 1079 failed initializing cgroup support
  lxc-start 20160420092731.576 ERROR lxc_start -
start.c:__lxc_start:1329 - failed to spawn 'node2'
lxc-start: start.c: __lxc_start: 1329 failed to spawn 'node2'
  lxc-start 20160420092731.576 ERROR lxc_start_ui -
lxc_start.c:main:344 - The container failed to start.
lxc-start: lxc_start.c: main: 344 The container failed to start.
  lxc-start 20160420092731.576 ERROR lxc_start_ui -
lxc_start.c:main:348 - Additional information can be obtained by setting
the --logfile and --logpriority options.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by
setting the --logfile and --logpriority options.


I have cgroup mounted:

[root@devhost ~]# cat /proc/mounts  | grep cgroup
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0


[root@devhost ~]# uname -a
Linux devhost 3.8.13-68.3.4.el6uek.x86_64 #2 SMP Tue Jul 14 15:03:36 PDT
2015 x86_64 x86_64 x86_64 GNU/Linux


I read that this may be a bug that has already been fixed:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1556447

Could someone please assist?

Thanks
Davis
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails

2015-05-20 Thread SIVA SUBRAMANIAN.P
No, I'm trying to add it.

On Wed, May 20, 2015 at 4:27 PM, Fajar A. Nugraha  wrote:

> Does your busybox include "init" applet? Does it print "init" (among
> lots of other things) when you run "busybox" without arguments?
>
> --
> Fajar
>
> On Thu, May 21, 2015 at 1:23 AM, SIVA SUBRAMANIAN.P 
> wrote:
> > Thanks Fajar,
> > Now I'm able to configure statically linked busybox to get rid of
> warnings,
> > but init applet error exists.
> >
> > On Mon, May 18, 2015 at 4:44 PM, Fajar A. Nugraha 
> wrote:
> >>
> >> Read the messages
> >>
> >> (1)
> >> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not
> found
> >>
> >> you don't have the necessary programs on your host (i.e. you don't
> >> have "file", usually installed as "/usr/bin/file").
> >>
> >> (2)
> >> > warning : busybox is not statically linked.
> >> > warning : The template script may not correctly
> >>
> >> your busybox is not statically linked. On ubuntu, you'd need the
> >> "busybox-static" package.
> >>
> >> (3)
> >> > # lxc-start --name u1
> >> > init: applet not found
> >>
> >> This might be the result of (2), your busybox does not recognize
> >> "init" as an applet that it handles. Again, on ubuntu, using
> >> "busybox-static" should work.
> >>
> >> Are you using your own self-compiled busybox? If yes, there are
> >> options during compile time to enable static linking as well as
> >> selecting which applets to enable.
> >>
> >> --
> >> Fajar
> >>
> >> On Tue, May 19, 2015 at 3:02 AM, Sivasubramanian Patchaiperumal
> >>  wrote:
> >> > Hi,
> >> > I'm trying to create and run busybox container, but facing below
> error.
> >> > # lxc-create --template=busybox --name=u1
> >> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not
> found
> >> > warning : busybox is not statically linked.
> >> > warning : The template script may not correctly
> >> > warning : setup the container environment.
> >> > chmod: /var/lib/lxc/u1/rootfs/bin/passwd: No such file or directory
> >> > setting root password to "root"
> >> > Failed to change root password
> >> > 'dropbear' ssh utility installed
> >> >
> >> > # lxc-start --name u1
> >> > init: applet not found
> >> > lxc-start: lxc_start.c: main: 342 The container failed to start.
> >> > lxc-start: lxc_start.c: main: 346 Additional information can be
> obtained
> >> > by
> >> > setting the --logfile and --logpriority options.
> >> >
> >> > Regards,
> >> > Sivasubramanian
> >> > Skype: sivasubramanian.patchaiperumal
> >> >
> >> > L&T Technology Services Ltd
> >> >
> >> > www.LntTechservices.com
> >> >
> >> > This Email may contain confidential or privileged information for the
> >> > intended recipient (s). If you are not the intended recipient, please
> do
> >> > not
> >> > use or disseminate the information, notify the sender and delete it
> from
> >> > your system.
> >> >
> >> >
> >
> >
> >
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails

2015-05-20 Thread Fajar A. Nugraha
Does your busybox include the "init" applet? Does it print "init" (among
lots of other things) when you run "busybox" without arguments?

-- 
Fajar

On Thu, May 21, 2015 at 1:23 AM, SIVA SUBRAMANIAN.P  wrote:
> Thanks Fajar,
> Now I'm able to configure statically linked busybox to get rid of warnings,
> but init applet error exists.
>
> On Mon, May 18, 2015 at 4:44 PM, Fajar A. Nugraha  wrote:
>>
>> Read the messages
>>
>> (1)
>> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
>>
>> you don't have the necessary programs on your host (i.e. you don't
>> have "file", usually installed as "/usr/bin/file").
>>
>> (2)
>> > warning : busybox is not statically linked.
>> > warning : The template script may not correctly
>>
>> your busybox is not statically linked. On ubuntu, you'd need the
>> "busybox-static" package.
>>
>> (3)
>> > # lxc-start --name u1
>> > init: applet not found
>>
>> This might be the result of (2), your busybox does not recognize
>> "init" as an applet that it handles. Again, on ubuntu, using
>> "busybox-static" should work.
>>
>> Are you using your own self-compiled busybox? If yes, there are
>> options during compile time to enable static linking as well as
>> selecting which applets to enable.
>>
>> --
>> Fajar
>>
>> On Tue, May 19, 2015 at 3:02 AM, Sivasubramanian Patchaiperumal
>>  wrote:
>> > Hi,
>> > I'm trying to create and run busybox container, but facing below error.
>> > # lxc-create --template=busybox --name=u1
>> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
>> > warning : busybox is not statically linked.
>> > warning : The template script may not correctly
>> > warning : setup the container environment.
>> > chmod: /var/lib/lxc/u1/rootfs/bin/passwd: No such file or directory
>> > setting root password to "root"
>> > Failed to change root password
>> > 'dropbear' ssh utility installed
>> >
>> > # lxc-start --name u1
>> > init: applet not found
>> > lxc-start: lxc_start.c: main: 342 The container failed to start.
>> > lxc-start: lxc_start.c: main: 346 Additional information can be obtained
>> > by
>> > setting the --logfile and --logpriority options.
>> >
>> > Regards,
>> > Sivasubramanian
>> > Skype: sivasubramanian.patchaiperumal
>> >
>> > L&T Technology Services Ltd
>> >
>> > www.LntTechservices.com
>> >
>> > This Email may contain confidential or privileged information for the
>> > intended recipient (s). If you are not the intended recipient, please do
>> > not
>> > use or disseminate the information, notify the sender and delete it from
>> > your system.
>> >
>> >
>
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails

2015-05-20 Thread SIVA SUBRAMANIAN.P
Thanks Fajar,
Now I'm able to configure a statically linked busybox and get rid of the warnings,
but the init applet error still exists.

On Mon, May 18, 2015 at 4:44 PM, Fajar A. Nugraha  wrote:

> Read the messages
>
> (1)
> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
>
> you don't have the necessary programs on your host (i.e. you don't
> have "file", usually installed as "/usr/bin/file").
>
> (2)
> > warning : busybox is not statically linked.
> > warning : The template script may not correctly
>
> your busybox is not statically linked. On ubuntu, you'd need the
> "busybox-static" package.
>
> (3)
> > # lxc-start --name u1
> > init: applet not found
>
> This might be the result of (2), your busybox does not recognize
> "init" as an applet that it handles. Again, on ubuntu, using
> "busybox-static" should work.
>
> Are you using your own self-compiled busybox? If yes, there are
> options during compile time to enable static linking as well as
> selecting which applets to enable.
>
> --
> Fajar
>
> On Tue, May 19, 2015 at 3:02 AM, Sivasubramanian Patchaiperumal
>  wrote:
> > Hi,
> > I'm trying to create and run busybox container, but facing below error.
> > # lxc-create --template=busybox --name=u1
> > /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
> > warning : busybox is not statically linked.
> > warning : The template script may not correctly
> > warning : setup the container environment.
> > chmod: /var/lib/lxc/u1/rootfs/bin/passwd: No such file or directory
> > setting root password to "root"
> > Failed to change root password
> > 'dropbear' ssh utility installed
> >
> > # lxc-start --name u1
> > init: applet not found
> > lxc-start: lxc_start.c: main: 342 The container failed to start.
> > lxc-start: lxc_start.c: main: 346 Additional information can be obtained
> by
> > setting the --logfile and --logpriority options.
> >
> > Regards,
> > Sivasubramanian
> > Skype: sivasubramanian.patchaiperumal
> >
> > L&T Technology Services Ltd
> >
> > www.LntTechservices.com
> >
> > This Email may contain confidential or privileged information for the
> > intended recipient (s). If you are not the intended recipient, please do
> not
> > use or disseminate the information, notify the sender and delete it from
> > your system.
> >
> >
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails

2015-05-18 Thread Fajar A. Nugraha
Read the messages

(1)
> /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found

you don't have the necessary programs on your host (i.e. you don't
have "file", usually installed as "/usr/bin/file").

(2)
> warning : busybox is not statically linked.
> warning : The template script may not correctly

your busybox is not statically linked. On ubuntu, you'd need the
"busybox-static" package.

(3)
> # lxc-start --name u1
> init: applet not found

This might be the result of (2): your busybox does not recognize
"init" as an applet that it handles. Again, on ubuntu, using
"busybox-static" should work.

Are you using your own self-compiled busybox? If yes, there are
compile-time options to enable static linking as well as to select
which applets to enable.
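
As a rough, untested sketch (the config symbol names below are the usual
busybox build options, so double-check them against your busybox version),
you can verify both points like this:

  # should report "statically linked" somewhere in its output
  file "$(command -v busybox)"

  # should print a line containing "init" if the applet is built in
  # (busybox with no arguments lists its applets)
  busybox 2>&1 | grep -w init

and when building busybox yourself, enable at least CONFIG_STATIC=y (build
a static binary) and CONFIG_INIT=y (the init applet) in "make menuconfig"
before rebuilding.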

-- 
Fajar

On Tue, May 19, 2015 at 3:02 AM, Sivasubramanian Patchaiperumal
 wrote:
> Hi,
> I'm trying to create and run busybox container, but facing below error.
> # lxc-create --template=busybox --name=u1
> /usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
> warning : busybox is not statically linked.
> warning : The template script may not correctly
> warning : setup the container environment.
> chmod: /var/lib/lxc/u1/rootfs/bin/passwd: No such file or directory
> setting root password to "root"
> Failed to change root password
> 'dropbear' ssh utility installed
>
> # lxc-start --name u1
> init: applet not found
> lxc-start: lxc_start.c: main: 342 The container failed to start.
> lxc-start: lxc_start.c: main: 346 Additional information can be obtained by
> setting the --logfile and --logpriority options.
>
> Regards,
> Sivasubramanian
> Skype: sivasubramanian.patchaiperumal
>
> L&T Technology Services Ltd
>
> www.LntTechservices.com
>
> This Email may contain confidential or privileged information for the
> intended recipient (s). If you are not the intended recipient, please do not
> use or disseminate the information, notify the sender and delete it from
> your system.
>
>
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-start fails

2015-05-18 Thread Sivasubramanian Patchaiperumal
Hi,
I'm trying to create and run a busybox container, but I'm facing the error below.
# lxc-create --template=busybox --name=u1
/usr/share/lxc/templates/lxc-busybox: line 182: file: command not found
warning : busybox is not statically linked.
warning : The template script may not correctly
warning : setup the container environment.
chmod: /var/lib/lxc/u1/rootfs/bin/passwd: No such file or directory
setting root password to "root"
Failed to change root password
'dropbear' ssh utility installed

# lxc-start --name u1
init: applet not found
lxc-start: lxc_start.c: main: 342 The container failed to start.
lxc-start: lxc_start.c: main: 346 Additional information can be obtained by 
setting the --logfile and --logpriority options.

Regards,
Sivasubramanian
Skype: sivasubramanian.patchaiperumal

L&T Technology Services Ltd

www.LntTechservices.com

This Email may contain confidential or privileged information for the intended 
recipient (s). If you are not the intended recipient, please do not use or 
disseminate the information, notify the sender and delete it from your system.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with invalid pid on arm

2015-04-20 Thread Serge Hallyn
Quoting Praveen Kumar Verma (praveen.ve...@lnttechservices.com):
> 
> Since both host name(dra7xx) were same, I was not able to differentiate 
> between them.
> Now after changing the host name(prav), I can see the container is running.
> 
> But I am seeing that there is continuous switching between the host busybox 
> filesystem & busybox filesystem running in container.
>
> /* host busybox, not running in container */
> root@dra7xx:~#
> /* running in container */
> root@prav:~#
> root@dra7xx:~#
> root@prav:~#
> root@dra7xx:~#
> root@prav:~#
> 
> So due to above, i am not able to enter the commands.

Your container's getty is running on a host device.  You'll want
to either stop getty from running in the container or give it a
separate device.  Perhaps 'lxc.autodev = 1' in your container
config file would help.  Otherwise, you could create a script

lxc-start -n container -d
sleep 5s
ps -ef > outfile
lxc-stop -n container -k

and look at outfile to see which device you need to handle.
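
For illustration only (the key names below are from the lxc 1.x config
format, and the values are just placeholders), the relevant bits of the
container config might look like:

  # /var/lib/lxc/<name>/config
  lxc.autodev = 1       # let lxc populate a private /dev for the container
  lxc.tty = 4           # give the container its own ttys for getty
  lxc.console = none    # don't attach the container console to a host tty

and check that the getty entries in the container's /etc/inittab point at
the container ttys (tty1..tty4) rather than at the host's serial console
device.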
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with invalid pid on arm

2015-04-20 Thread Praveen Kumar Verma

Since both host names (dra7xx) were the same, I was not able to differentiate
between them.
Now, after changing the host name (prav), I can see that the container is running.

But I am seeing continuous switching between the host busybox
filesystem and the busybox filesystem running in the container.

/* host busybox, not running in container */
root@dra7xx:~#
/* running in container */
root@prav:~#
root@dra7xx:~#
root@prav:~#
root@dra7xx:~#
root@prav:~#

So, due to the above, I am not able to enter commands.


From: lxc-users [lxc-users-boun...@lists.linuxcontainers.org] on behalf of 
Serge Hallyn [serge.hal...@ubuntu.com]
Sent: Saturday, April 18, 2015 12:01 AM
To: LXC users mailing-list
Subject: Re: [lxc-users] lxc-start fails with invalid pid on arm

Quoting Praveen Kumar Verma (praveen.ve...@lnttechservices.com):
> Hi Serge,
>
> Sorry I am not cleared; Please see my answers pointwise:
>
> >>i don't understand what you've done, what you wanted to have happen, or 
> >>what actually happened.
>
> I took a busybox based filesystem which doesn't have any lxc-container 
> support.

Ah, I see.

> So I downloaded the lxc-1.1.0 package & After cross compiling the lxc package 
> I installed the binaries in busybox file system.
> Our aim is to support linux container on busybox based filesystem. So that we 
> can boot android or busybox over linux container.
>
> >> When you say "I am login as a root in buysbox file system", what do you 
> >> mean exactly
> I use "root" as a login to busybox file system. Till now I have not created 
> any container. I logged in as a root into busybox file system.
>
> >>what do you mean exactly, is that logged in over lxc-console with the 
> >>container running?
> No, I havenot created any container till now. After login into busybox 
> filesystem, Now I create a container named "bb" using lxc-create. Now after 
> creating the container, I can see the rootfs present in /var/lib/lxc/bb 
> directory, that means container successfully created.
>
> >> Why did you create the 'root' cgroups by hand?
> We didn't have cgmanger to handle the cgroup so I created a root entry in 
> cgroup.
>
> >>  or what actually happened?
> After doing all above, when I started a container using "lxc-start" it fails 
> with invalid pid. As follows:
>
>lxc-start 1414573214.442 DEBUGlxc_conf - conf.c:setup_caps:2139 - 
> capabilities have been setup
>lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 
> 'bb' is setup.
>lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
> exec'ing '/sbin/init'
>lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 
> - '/sbin/init' started with pid '1896'
>lxc-start 1414573214.482 WARN lxc_start - 
> start.c:signal_handler:307 - invalid pid for SIGCHLD

This here looks good - the container started.  (The invalid pid msg is 
innocuous)

Are you after this able to 'lxc-attach -n containername' ?
L&T Technology Services Ltd

www.LntTechservices.com<http://www.lnttechservices.com/>

This Email may contain confidential or privileged information for the intended 
recipient (s). If you are not the intended recipient, please do not use or 
disseminate the information, notify the sender and delete it from your system.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with invalid pid on arm

2015-04-17 Thread Serge Hallyn
Quoting Praveen Kumar Verma (praveen.ve...@lnttechservices.com):
> Hi Serge,
> 
> Sorry I am not cleared; Please see my answers pointwise:
> 
> >>i don't understand what you've done, what you wanted to have happen, or 
> >>what actually happened.
> 
> I took a busybox based filesystem which doesn't have any lxc-container 
> support.

Ah, I see.

> So I downloaded the lxc-1.1.0 package & After cross compiling the lxc package 
> I installed the binaries in busybox file system.
> Our aim is to support linux container on busybox based filesystem. So that we 
> can boot android or busybox over linux container.
> 
> >> When you say "I am login as a root in buysbox file system", what do you 
> >> mean exactly
> I use "root" as a login to busybox file system. Till now I have not created 
> any container. I logged in as a root into busybox file system.
> 
> >>what do you mean exactly, is that logged in over lxc-console with the 
> >>container running?
> No, I havenot created any container till now. After login into busybox 
> filesystem, Now I create a container named "bb" using lxc-create. Now after 
> creating the container, I can see the rootfs present in /var/lib/lxc/bb 
> directory, that means container successfully created.
> 
> >> Why did you create the 'root' cgroups by hand?
> We didn't have cgmanger to handle the cgroup so I created a root entry in 
> cgroup.
> 
> >>  or what actually happened?
> After doing all above, when I started a container using "lxc-start" it fails 
> with invalid pid. As follows:
> 
>lxc-start 1414573214.442 DEBUGlxc_conf - conf.c:setup_caps:2139 - 
> capabilities have been setup
>lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 
> 'bb' is setup.
>lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
> exec'ing '/sbin/init'
>lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 
> - '/sbin/init' started with pid '1896'
>lxc-start 1414573214.482 WARN lxc_start - 
> start.c:signal_handler:307 - invalid pid for SIGCHLD

This here looks good - the container started.  (The invalid pid msg is 
innocuous)

Are you after this able to 'lxc-attach -n containername' ?
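
(For example, something along the lines of

  lxc-attach -n bb -- /bin/sh

with the container name from your log, should give you a shell inside the
container if it really is running.)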
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with invalid pid on arm

2015-04-06 Thread Praveen Kumar Verma
Hi Serge,

Sorry, I was not clear; please see my answers point by point:

>>i don't understand what you've done, what you wanted to have happen, or what 
>>actually happened.

I took a busybox-based filesystem which doesn't have any lxc container support.
So I downloaded the lxc-1.1.0 package and, after cross-compiling it, I
installed the binaries into the busybox file system.
Our aim is to support linux containers on a busybox-based filesystem, so that we
can boot android or busybox inside a linux container.

>> When you say "I am login as a root in buysbox file system", what do you mean 
>> exactly
I use "root" as a login to busybox file system. Till now I have not created any 
container. I logged in as a root into busybox file system.

>>what do you mean exactly, is that logged in over lxc-console with the 
>>container running?
No, I had not created any container at that point. After logging into the busybox
filesystem, I create a container named "bb" using lxc-create. After
creating the container, I can see the rootfs present in the /var/lib/lxc/bb
directory, which means the container was created successfully.

>> Why did you create the 'root' cgroups by hand?
We don't have cgmanager to handle the cgroups, so I created a root entry in
each cgroup hierarchy by hand.

>>  or what actually happened?
After doing all of the above, when I start the container using "lxc-start" it fails
with "invalid pid", as follows:

   lxc-start 1414573214.442 DEBUG lxc_conf - conf.c:setup_caps:2139 -
capabilities have been setup
   lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 
'bb' is setup.
   lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
exec'ing '/sbin/init'
   lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 - 
'/sbin/init' started with pid '1896'
   lxc-start 1414573214.482 WARN lxc_start - start.c:signal_handler:307 
- invalid pid for SIGCHLD


Please let me know if I am unclear on any point.

Regards,
Praveen



From: lxc-users [lxc-users-boun...@lists.linuxcontainers.org] on behalf of 
Serge Hallyn [serge.hal...@ubuntu.com]
Sent: Saturday, April 04, 2015 7:44 AM
To: LXC users mailing-list
Subject: Re: [lxc-users] lxc-start fails with invalid pid on arm

Quoting Praveen Kumar Verma (praveen.ve...@lnttechservices.com):
> Greetings,
>
> I am trying containers on busybox based file-system with kernel 3.12.
>
> I have successfully cross-compiled the lxc-1.1.0 package, & able to run 
> lxc-execute without any problem.
>
> Now I want to run linux with busybox template in container.
>
> I created a container named "bb" under /var/lib/lxc/bb. I can see config, 
> fstab & rootfs file present there.
>
> But when I tried to start the container, I am getting the following error: 
> (The full log is attached in a file)
>
>   lxc-start 1414573214.442 DEBUGlxc_conf - conf.c:setup_caps:2139 - 
> capabilities have been setup
>   lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 
> 'bb' is setup.
>   lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
> exec'ing '/sbin/init'
>   lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 - 
> '/sbin/init' started with pid '1896'
>   lxc-start 1414573214.482 WARN lxc_start - 
> start.c:signal_handler:307 - invalid pid for SIGCHLD
>
>
> I am login as a root in buysbox file system. and i have manually added the 
> root into cgroups using following:
>
>for d in /sys/fs/cgroup/*; do
> mkdir $d/root
> chown -R root: $d/root
> echo $$ > $d/root/tasks
>done
>
>   root@evm:/sys/fs/cgroup/cpuset# cat /proc/self/cgroup
>
> 7:freezer:/root
> 6:devices:/root
> 5:memory:/root
> 4:cpuacct:/root
> 3:cpu:/root
> 2:cpuset:/root
>
> Please help me in the issue.

I'm sorry, i don't understand what you've done, what you wanted to have
happen, or what actually happened.

Exactly how did you create the container?  When you say "I am login as a root
in buysbox file system", what do you mean exactly, is that logged in over
lxc-console with the container running?  Why did you create the 'root' cgroups
by hand?
L&T Technology Services Ltd

www.LntTechservices.com<http://www.lnttechservices.com/>

This Email may contain confidential or privileged information for the intended 
recipient (s). If you are not the intended recipient, please do not use or 
disseminate the information, notify the sender and delete it from your system.
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails with invalid pid on arm

2015-04-03 Thread Serge Hallyn
Quoting Praveen Kumar Verma (praveen.ve...@lnttechservices.com):
> Greetings,
> 
> I am trying containers on busybox based file-system with kernel 3.12.
> 
> I have successfully cross-compiled the lxc-1.1.0 package, & able to run 
> lxc-execute without any problem.
> 
> Now I want to run linux with busybox template in container.
> 
> I created a container named "bb" under /var/lib/lxc/bb. I can see config, 
> fstab & rootfs file present there.
> 
> But when I tried to start the container, I am getting the following error: 
> (The full log is attached in a file)
> 
>   lxc-start 1414573214.442 DEBUGlxc_conf - conf.c:setup_caps:2139 - 
> capabilities have been setup
>   lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 
> 'bb' is setup.
>   lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
> exec'ing '/sbin/init'
>   lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 - 
> '/sbin/init' started with pid '1896'
>   lxc-start 1414573214.482 WARN lxc_start - 
> start.c:signal_handler:307 - invalid pid for SIGCHLD
> 
> 
> I am login as a root in buysbox file system. and i have manually added the 
> root into cgroups using following:
> 
>for d in /sys/fs/cgroup/*; do
> mkdir $d/root
> chown -R root: $d/root
> echo $$ > $d/root/tasks
>done
> 
>   root@evm:/sys/fs/cgroup/cpuset# cat /proc/self/cgroup
> 
> 7:freezer:/root
> 6:devices:/root
> 5:memory:/root
> 4:cpuacct:/root
> 3:cpu:/root
> 2:cpuset:/root
> 
> Please help me in the issue.

I'm sorry, i don't understand what you've done, what you wanted to have
happen, or what actually happened.

Exactly how did you create the container?  When you say "I am login as a root
in buysbox file system", what do you mean exactly, is that logged in over
lxc-console with the container running?  Why did you create the 'root' cgroups
by hand?
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

[lxc-users] lxc-start fails with invalid pid

2015-03-30 Thread Praveen Kumar Verma
Greetings,

I am trying containers on busybox based file-system with kernel 3.12.

I have successfully cross-compiled the lxc-1.1.0 package, & able to run 
lxc-execute without any problem.

Now I want to run linux with busybox template in container.

I created a container under /var/lib/lxc/busybox_container. I can see config, 
fstab & rootfs file present there.

But when I tried to start the container, I am getting the following error: (The 
full log is attached in a file)

  lxc-start 1414573214.442 DEBUG lxc_conf - conf.c:setup_caps:2139 -
capabilities have been setup
  lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 'bb' 
is setup.
  lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
exec'ing '/sbin/init'
  lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 - 
'/sbin/init' started with pid '1896'
  lxc-start 1414573214.482 WARN lxc_start - start.c:signal_handler:307 
- invalid pid for SIGCHLD




I am logged in as root in the busybox file system, and I have manually added a
root group to the cgroups using the following:

   for d in /sys/fs/cgroup/*; do
       mkdir $d/root
       chown -R root: $d/root
       echo $$ > $d/root/tasks
   done

  root@evm:/sys/fs/cgroup/cpuset# cat /proc/self/cgroup

7:freezer:/root
6:devices:/root
5:memory:/root
4:cpuacct:/root
3:cpu:/root
2:cpuset:/root

Please help me in the issue.

Thanks & Regards,
Praveen
L&T Technology Services Ltd

www.LntTechservices.com

This Email may contain confidential or privileged information for the intended 
recipient (s). If you are not the intended recipient, please do not use or 
disseminate the information, notify the sender and delete it from your system.
  lxc-start 1414572847.554 WARN lxc_log - log.c:lxc_log_init:316 - 
lxc_log_init called with log already initialized
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpuset 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpu unknown 
to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpuacct 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup memory 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup devices 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup freezer 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.558 INFO lxc_start - 
start.c:lxc_check_inherited:221 - closed inherited fd 4
  lxc-start 1414572847.599 DEBUGlxc_start - start.c:setup_signal_fd:259 
- sigchild handler set
  lxc-start 1414572847.600 INFO lxc_start - 
start.c:lxc_check_inherited:221 - closed inherited fd 4
  lxc-start 1414572847.607 DEBUGlxc_console - 
console.c:lxc_console_peer_default:536 - no console peer
  lxc-start 1414572847.638 INFO lxc_start - start.c:lxc_init:451 - 'bb' 
is initialized
  lxc-start 1414572847.671 INFO lxc_monitor - 
monitor.c:lxc_monitor_sock_name:177 - using monitor sock name 
lxc/ad055575fe28ddd5//var/lib/lxc
  lxc-start 1414572847.671 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 10
  lxc-start 1414572847.681 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 50
  lxc-start 1414572847.751 DEBUGlxc_start - start.c:__lxc_start:1130 - 
Not dropping cap_sys_boot or watching utmp
  lxc-start 1414572847.751 INFO lxc_cgroup - cgroup.c:cgroup_init:65 - 
cgroup driver cgroupfs initing for bb
  lxc-start 1414572847.752 ERRORlxc_cgfs - 
cgfs.c:handle_cgroup_settings:2077 - Device or resource busy - failed to set 
memory.use_hierarchy to 1; continuing
  lxc-start 1414572847.770 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 100
  lxc-start 1414572847.846 DEBUGlxc_conf - conf.c:setup_rootfs:1267 - 
mounted '/var/lib/lxc/bb/rootfs' on '/usr/local/lib/lxc/rootfs'
  lxc-start 1414572847.846 INFO lxc_conf - conf.c:setup_utsname:902 - 
'bb' hostname has been setup
  lxc-start 1414572847.846 INFO lxc_conf - conf.c:mount_autodev:1131 - 
Mounting /dev under /usr/local/lib/lxc/rootfs
  lxc-start 1414572847.847 INFO lxc_conf - conf.c:mount_autodev:1152 - 
Mounted tmpfs onto /usr/local/lib/lxc/rootfs/dev
  lxc-start 1414572847.847 INFO lxc_conf - conf.c:mount_autodev:1170 - 
Mounted /dev under /usr/local/lib/lxc/rootfs
  lxc-start 1414572847.871 ERRORlxc_mo

[lxc-users] lxc-start fails with invalid pid on arm

2015-03-27 Thread Praveen Kumar Verma
Greetings,

I am trying containers on busybox based file-system with kernel 3.12.

I have successfully cross-compiled the lxc-1.1.0 package, & able to run 
lxc-execute without any problem.

Now I want to run linux with busybox template in container.

I created a container named "bb" under /var/lib/lxc/bb. I can see config, fstab 
& rootfs file present there.

But when I tried to start the container, I am getting the following error: (The 
full log is attached in a file)

  lxc-start 1414573214.442 DEBUG lxc_conf - conf.c:setup_caps:2139 -
capabilities have been setup
  lxc-start 1414573214.442 NOTICE   lxc_conf - conf.c:lxc_setup:3921 - 'bb' 
is setup.
  lxc-start 1414573214.442 NOTICE   lxc_start - start.c:start:1232 - 
exec'ing '/sbin/init'
  lxc-start 1414573214.482 NOTICE   lxc_start - start.c:post_start:1243 - 
'/sbin/init' started with pid '1896'
  lxc-start 1414573214.482 WARN lxc_start - start.c:signal_handler:307 
- invalid pid for SIGCHLD


I am logged in as root in the busybox file system, and I have manually added a
root group to the cgroups using the following:

   for d in /sys/fs/cgroup/*; do
       mkdir $d/root
       chown -R root: $d/root
       echo $$ > $d/root/tasks
   done

  root@evm:/sys/fs/cgroup/cpuset# cat /proc/self/cgroup

7:freezer:/root
6:devices:/root
5:memory:/root
4:cpuacct:/root
3:cpu:/root
2:cpuset:/root

Please help me in the issue.

Thanks & Regards,
Praveen
L&T Technology Services Ltd

www.LntTechservices.com

This Email may contain confidential or privileged information for the intended 
recipient (s). If you are not the intended recipient, please do not use or 
disseminate the information, notify the sender and delete it from your system.
  lxc-start 1414572847.554 WARN lxc_log - log.c:lxc_log_init:316 - 
lxc_log_init called with log already initialized
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpuset 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpu unknown 
to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup cpuacct 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup memory 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup devices 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.556 WARN lxc_cgfs - 
cgfs.c:lxc_cgroup_get_container_info:1100 - Not attaching to cgroup freezer 
unknown to /var/lib/lxc bb
  lxc-start 1414572847.558 INFO lxc_start - 
start.c:lxc_check_inherited:221 - closed inherited fd 4
  lxc-start 1414572847.599 DEBUGlxc_start - start.c:setup_signal_fd:259 
- sigchild handler set
  lxc-start 1414572847.600 INFO lxc_start - 
start.c:lxc_check_inherited:221 - closed inherited fd 4
  lxc-start 1414572847.607 DEBUGlxc_console - 
console.c:lxc_console_peer_default:536 - no console peer
  lxc-start 1414572847.638 INFO lxc_start - start.c:lxc_init:451 - 'bb' 
is initialized
  lxc-start 1414572847.671 INFO lxc_monitor - 
monitor.c:lxc_monitor_sock_name:177 - using monitor sock name 
lxc/ad055575fe28ddd5//var/lib/lxc
  lxc-start 1414572847.671 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 10
  lxc-start 1414572847.681 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 50
  lxc-start 1414572847.751 DEBUGlxc_start - start.c:__lxc_start:1130 - 
Not dropping cap_sys_boot or watching utmp
  lxc-start 1414572847.751 INFO lxc_cgroup - cgroup.c:cgroup_init:65 - 
cgroup driver cgroupfs initing for bb
  lxc-start 1414572847.752 ERRORlxc_cgfs - 
cgfs.c:handle_cgroup_settings:2077 - Device or resource busy - failed to set 
memory.use_hierarchy to 1; continuing
  lxc-start 1414572847.770 ERRORlxc_monitor - 
monitor.c:lxc_monitor_open:208 - connect : backing off 100
  lxc-start 1414572847.846 DEBUGlxc_conf - conf.c:setup_rootfs:1267 - 
mounted '/var/lib/lxc/bb/rootfs' on '/usr/local/lib/lxc/rootfs'
  lxc-start 1414572847.846 INFO lxc_conf - conf.c:setup_utsname:902 - 
'bb' hostname has been setup
  lxc-start 1414572847.846 INFO lxc_conf - conf.c:mount_autodev:1131 - 
Mounting /dev under /usr/local/lib/lxc/rootfs
  lxc-start 1414572847.847 INFO lxc_conf - conf.c:mount_autodev:1152 - 
Mounted tmpfs onto /usr/local/lib/lxc/rootfs/dev
  lxc-start 1414572847.847 INFO lxc_conf - conf.c:mount_autodev:1170 - 
Mounted /dev under /usr/local/lib/lxc/rootfs
  lxc-start 1414572847.871 ERRORlxc_monitor 

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-07 Thread Tom Weber
Am Donnerstag, den 07.08.2014, 17:23 -0400 schrieb Dwight Engen:
> On Tue, 05 Aug 2014 13:53:58 +0200
> Tom Weber  wrote:

> > Oh, and a little log message wether lxc-start detected apparmor or not
> > and activates it would be _very_ helpfull :)
> 
> lsm_init() INFO()s which lsm backend was detected, and
> apparmor_process_label_set() INFO()s which profile its setting so you
> should see those in the log if your --logpriority is set accordingly.

yes, but only if it activates apparmor (which would only have been the
case if that mount patch were in the kernel). It silently ignored my
apparmor settings completely - how should I know what should have been
in the log if I only see these messages when everything works? :)

The problem with Serge's patch, which turns the failed detection of a
mount patched kernel into a WARN(), is that these WARN()s don't appear
anywhere.

Regards,
  Tom


___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-07 Thread Dwight Engen
On Tue, 05 Aug 2014 13:53:58 +0200
Tom Weber  wrote:

> Hello,
> 
> my setup: 
> debian7 
> lxc-1.0.4 from debian testing
> vanilla kernel.org kernel 3.14.14
> 
> i'm new to lxc and apparmor, so this took me a couple of hours to
> figure:
> lxc-start won't assign an apparmor-profile to a container since it's
> test for apparmor will always fail on my setup:
> in src/lxc/lsm/apparmor:
> the apparmor_enabled() tests for AA_MOUNT_RESTR
> (/sys/kernel/security/apparmor/features/mount/mask) first, which will
> never exist without that apparmor mount patch in the kernel. 
> 
> commenting out that test gives me apparmor functionality (except for
> that mount feature of course).
> 
> Is that intentional or just an ancient relict? 
> I'd prefer to have apparmor profile support without mount restrictions
> over no apparmor profile support at all. apparmor gives me warnings
> like: 
> 
> Warning from /etc/apparmor.d/lxc-containers
> (/etc/apparmor.d/lxc-containers line 8): profile
> lxc-container-default mount rules not enforced
> 
> when starting up, which is what I expect and something I can deal with
> as admin. I think lxc-start should activate the requested profile
> anyway.
> 
> Oh, and a little log message wether lxc-start detected apparmor or not
> and activates it would be _very_ helpfull :)

lsm_init() INFO()s which lsm backend was detected, and
apparmor_process_label_set() INFO()s which profile its setting so you
should see those in the log if your --logpriority is set accordingly.
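
(Something like

  lxc-start -n <container> -o /tmp/lxc.log -l INFO

should be enough to capture them; the log path here is just an example.)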

> related question: dropping sys_admin cap for the container should
> render all the mount protections from apparmor unnecessary, right?
> 
> Regards,
>   Tom
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-07 Thread Tom Weber
Am Mittwoch, den 06.08.2014, 17:47 + schrieb Serge Hallyn:
> Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > Am Mittwoch, den 06.08.2014, 14:51 + schrieb Serge Hallyn:
> > > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > > > Am Dienstag, den 05.08.2014, 23:34 + schrieb Serge Hallyn:
> > > > > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > > > >  
> > > > > > The patch works in the regard that the container starts and the 
> > > > > > apparmor
> > > > > > profile is set. 
> > > > > > But I can't find the Warning message anywhere (tried lxc-start -n 
> > > > > > webv1
> > > > > > -d -l DEBUG) - but maybe thats a more general problem. Oh, and 
> > > > > > there is
> > > > > > a typo: Apparmor ount
> > > > > > 
> > > > > > My opinion as an admin is that this check isn't needed in lxc 
> > > > > > itself.
> > > > > > Apparmor spits a warning during aa lxc-profile loading - sane admins
> > > > > > wouldn't ignore this.
> > > > > 
> > > > > We're not just talking about "sane admins" though.  We're talking 
> > > > > about
> > > > > everyday users using containers.  And they're not building their own
> > > > > misconfigured kernels.  It happens, certainly while using the 
> > > > > development
> > > > > release, that you get a kernel for which the apparmor set wasn't ready
> > > > > yet and mount restrictions weren't ready.
> > > > > 
> > > > > Maybe the patch should be modified to only allow the container to
> > > > > proceed if cap_sys_admin is being dropped.
> > > > 
> > > > So if I _want_ an insecure container with cap_sys_admin (for whatever
> > > > reason like testing or development - and yes sometimes I might want
> > > 
> > > Well in the end it's open source and you can comment out two lines and
> > > build your own :)
> > > 
> > > We could also add a set of security flags to specify what the container
> > > considers required.  So it might include 'selinux must be enabled', and
> > > 'mount restrictions in apparmor must be enabled.'
> > 
> > none of these would be useful if you don't also check for a correctly
> > configured apparmor profile.
> > So you tell me that I can always recompile if I don't want your security
> > checks, but anyone can render the security you check for completely
> > useless by editing an apparmor profile - and none of your checks would
> > even notice.
> > 
> > > > this!) you'd force me to install an apparmor mount supported kernel
> > > > where i'd comment out the mount rules in the apparmor profile? Just to
> > > > make that thing start?
> > > 
> > > There are many many more people who might adversely affect their
> > > system by not having that.  The *defaults* should be safe, and the
> > > burden should be on the one who wants to run insecurely to enable
> > > that.  I admit the burden shouldn't necessarily be "rebuild the
> > > package".  I'm liking the idea of security flags.  I think they
> > > merit some more discussion first, to make sure we get an API that'll
> > > continue to be useful.  Hopefully Dwight (cc'd) will have some input.
> > > But worst case, we could just make it an explicit
> > 
> > I agree that the default setting should be safe. Three days ago, your default
> > on a Debian or vanilla-kernel system was to _silently_ ignore
> > apparmor _completely_ - not even the slightest protection through
> 
> Yeah that shouldn't be the case, they should fail to start.

They should do what I told them - use the apparmor profile I specified,
not rely on a silly security check and ignore it completely.

> > apparmor was possible _at all_! I wonder how many people run lxc and
> > think that apparmor is active while it isn't at all due to this.
> > 
> > But it should not be a burden to me to do things the way I want, even
> 
> But it should!  It just should be a minimal burden.

No. Software should help me do things the right way, not get in my
way if I prefer to do things the author didn't think of.

> > if I decide to have an insecure setup. I'd buy/use Macs if I wanted the
> > developer to limit me to the things he thinks I should be allowed to do.
> 
> I don't want to limit you.  I want to provide you a configurable option
> to do what you want.  We're just still discussing what that should look
> like.

And I'm trying to explain to you why you can't and shouldn't check other
components this way.

> But you come back to my earlier point nicely - on a Mac you wouldn't
> have the option to recompile when they've failed to provide you the
> option.  I fully agree you shouldn't have to recompile.  I'm just
> saying that until we work this out, you have that option.
> 
> > > lxc.apparmor = [safe|unsafe|off]
> > 
> > that sounds nice, but the problem is that you can't say anything about
> > apparmor security for a container if you don't check the apparmor
> > profile for correctness. And that's outside the scope of lxc.
> 
> Here I think you are missing the point.  I don't want to guarantee that
> your container is safe.  I want to guarantee that when you use the
> defaults, you are getting a certain level of safety.

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-06 Thread Serge Hallyn
Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> On Wednesday, 06.08.2014 at 14:51, Serge Hallyn wrote:
> > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > > On Tuesday, 05.08.2014 at 23:34, Serge Hallyn wrote:
> > > > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > > >  
> > > > > The patch works in that the container starts and the
> > > > > apparmor
> > > > > profile is set.
> > > > > But I can't find the Warning message anywhere (tried lxc-start -n
> > > > > webv1
> > > > > -d -l DEBUG) - but maybe that's a more general problem. Oh, and there 
> > > > > is
> > > > > a typo: Apparmor ount
> > > > > 
> > > > > My opinion as an admin is that this check isn't needed in lxc itself.
> > > > > Apparmor spits a warning during aa lxc-profile loading - sane admins
> > > > > wouldn't ignore this.
> > > > 
> > > > We're not just talking about "sane admins" though.  We're talking about
> > > > everyday users using containers.  And they're not building their own
> > > > misconfigured kernels.  It happens, certainly while using the 
> > > > development
> > > > release, that you get a kernel for which the apparmor set wasn't ready
> > > > yet and mount restrictions weren't ready.
> > > > 
> > > > Maybe the patch should be modified to only allow the container to
> > > > proceed if cap_sys_admin is being dropped.
> > > 
> > > So if I _want_ an insecure container with cap_sys_admin (for whatever
> > > reason like testing or development - and yes sometimes I might want
> > 
> > Well in the end it's open source and you can comment out two lines and
> > build your own :)
> > 
> > We could also add a set of security flags to specify what the container
> > considers required.  So it might include 'selinux must be enabled', and
> > 'mount restrictions in apparmor must be enabled.'
> 
> none of these would be useful if you don't also check for a correctly
> configured apparmor profile.
> So you tell me that I can always recompile if I don't want your security
> checks, but anyone can render the security you check for completely
> useless by editing an apparmor profile - and none of your checks would
> even notice.
> 
> > > this!) you'd force me to install an apparmor mount supported kernel
> > > where i'd comment out the mount rules in the apparmor profile? Just to
> > > make that thing start?
> > 
> > There are many many more people who might adversely affect their
> > system by not having that.  The *defaults* should be safe, and the
> > burden should be on the one who wants to run insecurely to enable
> > that.  I admit the burden shouldn't necessarily be "rebuild the
> > package".  I'm liking the idea of security flags.  I think they
> > merit some more discussion first, to make sure we get an API that'll
> > continue to be useful.  Hopefully Dwight (cc'd) will have some input.
> > But worst case, we could just make it an explicit
> 
> I agree that the default setting should be safe. Three days ago, your default
> on a Debian or vanilla-kernel system was to _silently_ ignore
> apparmor _completely_ - not even the slightest protection through

Yeah that shouldn't be the case, they should fail to start.

> apparmor was possible _at all_! I wonder how many people run lxc and
> think that apparmor is active while it isn't at all due to this.
> 
> But it should not be a burden to me to do things the way I want, even

But it should!  It just should be a minimal burden.

> if I decide to have an insecure setup. I'd buy/use Macs if I wanted the
> developer to limit me to the things he thinks I should be allowed to do.

I don't want to limit you.  I want to provide you a configurable option
to do what you want.  We're just still discussing what that should look
like.

But you come back to my earlier point nicely - on a Mac you wouldn't
have the option to recompile when they've failed to provide you the
option.  I fully agree you shouldn't have to recompile.  I'm just
saying that until we work this out, you have that option.

> > lxc.apparmor = [safe|unsafe|off]
> 
> that sounds nice, but the problem is that you can't say anything about
> apparmor security for a container if you don't check the apparmor
> profile for correctness. And that's outside the scope of lxc.

Here I think you are missing the point.  I don't want to guarantee that
your container is safe.  I want to guarantee that when you use the
defaults, you are getting a certain level of safety.  If you don't use
the defaults, you should know what you're doing of course.

> so people will turn on lxc.apparmor = safe.
> lxc won't start, complaining about security and apparmor.
> so they'll try with lxc.apparmor = unsafe and it will start.
> now people will start messing up apparmor profiles until they figure out
> that they need a kernel with aa-mount support.
> they turn on lxc.apparmor = safe and the aa profiles are still messed
> up.
> but lxc won't complain about security anymore and they'll end up feeling
> safe with a messed-up container.

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-06 Thread Tom Weber
On Wednesday, 06.08.2014 at 14:51, Serge Hallyn wrote:
> Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > On Tuesday, 05.08.2014 at 23:34, Serge Hallyn wrote:
> > > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> > >  
> > > > The patch works in that the container starts and the apparmor
> > > > profile is set.
> > > > But I can't find the Warning message anywhere (tried lxc-start -n webv1
> > > > -d -l DEBUG) - but maybe that's a more general problem. Oh, and there is
> > > > a typo: Apparmor ount
> > > > 
> > > > My opinion as an admin is that this check isn't needed in lxc itself.
> > > > Apparmor spits a warning during aa lxc-profile loading - sane admins
> > > > wouldn't ignore this.
> > > 
> > > We're not just talking about "sane admins" though.  We're talking about
> > > everyday users using containers.  And they're not building their own
> > > misconfigured kernels.  It happens, certainly while using the development
> > > release, that you get a kernel for which the apparmor set wasn't ready
> > > yet and mount restrictions weren't ready.
> > > 
> > > Maybe the patch should be modified to only allow the container to
> > > proceed if cap_sys_admin is being dropped.
> > 
> > So if I _want_ an insecure container with cap_sys_admin (for whatever
> > reason like testing or development - and yes sometimes I might want
> 
> Well in the end it's open source and you can comment out two lines and
> build your own :)
> 
> We could also add a set of security flags to specify what the container
> considers required.  So it might include 'selinux must be enabled', and
> 'mount restrictions in apparmor must be enabled.'

none of these would be useful if you don't also check for a correctly
configured apparmor profile.
So you tell me that I can always recompile if I don't want your security
checks, but anyone can render the security you check for completely
useless by editing an apparmor profile - and none of your checks would
even notice.

> > this!) you'd force me to install an apparmor mount supported kernel
> > where i'd comment out the mount rules in the apparmor profile? Just to
> > make that thing start?
> 
> There are many many more people who might adversely affect their
> system by not having that.  The *defaults* should be safe, and the
> burden should be on the one who wants to run insecurely to enable
> that.  I admit the burden shouldn't necessarily be "rebuild the
> package".  I'm liking the idea of security flags.  I think they
> merit some more discussion first, to make sure we get an API that'll
> continue to be useful.  Hopefully Dwight (cc'd) will have some input.
> But worst case, we could just make it an explicit

I agree that the default setting should be safe. Three days ago, your default
on a Debian or vanilla-kernel system was to _silently_ ignore
apparmor _completely_ - not even the slightest protection through
apparmor was possible _at all_! I wonder how many people run lxc and
think that apparmor is active while it isn't at all due to this.

But it should not be a burden to me to do things the way I want, even
if I decide to have an insecure setup. I'd buy/use Macs if I wanted the
developer to limit me to the things he thinks I should be allowed to do.

> lxc.apparmor = [safe|unsafe|off]

that sounds nice, but the problem is that you can't say anything about
apparmor security for a container if you don't check the apparmor
profile for correctness. And that's outside the scope of lxc.

so people will turn on lxc.apparmor = safe.
lxc won't start, complaining about security and apparmor.
so they'll try with lxc.apparmor = unsafe and it will start.
now people will start messing up apparmor profiles until they figure out
that they need a kernel with aa-mount support.
they turn on lxc.apparmor = safe and the aa profiles are still messed
up.
but lxc won't complain about security anymore and they'll end up feeling
safe with a messed-up container.
you're giving a false sense of security here.

Or think about someone who creates (maybe by mistake) and uses an
all-allow (or partly insecure) apparmor profile.
lxc.apparmor = safe would spit in his face on a Debian system;
lxc.apparmor = safe wouldn't complain at all on an Ubuntu system.
That makes no sense to me.

if you implement a 'safe' flag that's supposed to prevent an unsafe
container from starting, you'd better implement it correctly or not at
all.
It's like a button labeled 'make this container apparmor-safe': people
push it and expect the container to be apparmor-safe even though it
doesn't need to be.

> Maybe I'll send a patch for that later today.
> 
> It's worth considering also whether there is anything we require from
> apparmor for the unprivileged containers.  If there is then we cannot
> allow an unprivileged container to set lxc.apparmor = unsafe|off.  The
> only thing I can think of is if there happens to be an 0-day in the
> handling of some namespaced procfile that results in privilege
> escalation...  I'm cc:ing jjohansen for input from him.

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-06 Thread Serge Hallyn
Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> On Tuesday, 05.08.2014 at 23:34, Serge Hallyn wrote:
> > Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> >  
> > > The patch works in that the container starts and the apparmor
> > > profile is set.
> > > But I can't find the Warning message anywhere (tried lxc-start -n webv1
> > > -d -l DEBUG) - but maybe that's a more general problem. Oh, and there is
> > > a typo: Apparmor ount
> > > 
> > > My opinion as an admin is that this check isn't needed in lxc itself.
> > > Apparmor spits a warning during aa lxc-profile loading - sane admins
> > > wouldn't ignore this.
> > 
> > We're not just talking about "sane admins" though.  We're talking about
> > everyday users using containers.  And they're not building their own
> > misconfigured kernels.  It happens, certainly while using the development
> > release, that you get a kernel for which the apparmor set wasn't ready
> > yet and mount restrictions weren't ready.
> > 
> > Maybe the patch should be modified to only allow the container to
> > proceed if cap_sys_admin is being dropped.
> 
> So if I _want_ an insecure container with cap_sys_admin (for whatever
> reason like testing or development - and yes sometimes I might want

Well in the end it's open source and you can comment out two lines and
build your own :)

We could also add a set of security flags to specify what the container
considers required.  So it might include 'selinux must be enabled', and
'mount restrictions in apparmor must be enabled.'

> this!) you'd force me to install an apparmor mount supported kernel
> where i'd comment out the mount rules in the apparmor profile? Just to
> make that thing start?

There are many many more people who might adversely affect their
system by not having that.  The *defaults* should be safe, and the
burden should be on the one who wants to run insecurely to enable
that.  I admit the burden shouldn't necessarily be "rebuild the
package".  I'm liking the idea of security flags.  I think they
merit some more discussion first, to make sure we get an API that'll
> > continue to be useful.  Hopefully Dwight (cc'd) will have some input.
But worst case, we could just make it an explicit

lxc.apparmor = [safe|unsafe|off]

Maybe I'll send a patch for that later today.

It's worth considering also whether there is anything we require from
apparmor for the unprivileged containers.  If there is then we cannot
allow an unprivileged container to set lxc.apparmor = unsafe|off.  The
only thing I can think of is if there happens to be a 0-day in the
handling of some namespaced procfile that results in privilege
escalation...  I'm cc:ing jjohansen for input from him.
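
For illustration, if such a flag were adopted, a container config might carry
something like the sketch below (hypothetical; lxc.apparmor does not exist in
lxc 1.0 as released):

  # refuse to start unless the kernel can enforce the full profile,
  # including mount restrictions
  lxc.apparmor = safe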

> Just because there's a feature in the kernel (and that's all your stat
> checks for) doesn't mean that the other end of the system that's
> responsible for enforcing/using it really does use it.  This test
> implies security where there is none.

Are you arguing that I shouldn't check whether the feature is
enabled because it might be buggy and not working anyway?

> I don't think a readable /proc/kcore inside a container or access to
> dmesg is very secure either - as in the default config.
> I could mount proc on /proc_insecure and create whatever /dev/ nodes I

Not from inside the container.

> like anywhere I want and lxc wouldn't warn me about this at all.

Yes, you can configure it however you want.  The difference is that
the kernel not having the mount restrictions support means it has
*incomplete* apparmor support.  You've requested an apparmor profile,
and the kernel cannot completely implement it.

Really the solution to all this is to get the mount restrictions into
the upstream kernel.

> But you wouldn't allow me to start a container if the _kernel_ lacks
> aa-mount support and i don't drop cap_sys_admin? Really?

Yes, because the container was designed a certain way to be safe,
and a piece making up that design is missing.

If you simply want to disable apparmor, you can always set

lxc.aa_profile = unconfined.
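
In a container's config (typically /var/lib/lxc/<name>/config; the path may
differ on your install) that is just:

  # run this container without an apparmor profile at all
  lxc.aa_profile = unconfined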

> This test belongs in lxc-checkconfig and should print out a big fat
> warning - right now it's not even mentioned there.

Agreed.  Patches welcome.

-serge

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-06 Thread Tom Weber
On Tuesday, 05.08.2014 at 23:34, Serge Hallyn wrote:
> Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
>  
> > The patch works in the regard that the container starts and the apparmor
> > profile is set. 
> > But I can't find the Warning message anywhere (tried lxc-start -n webv1
> > -d -l DEBUG) - but maybe thats a more general problem. Oh, and there is
> > a typo: Apparmor ount
> > 
> > My opinion as an admin is that this check isn't needed in lxc itself.
> > Apparmor spits a warning during aa lxc-profile loading - sane admins
> > wouldn't ignore this.
> 
> We're not just talking about "sane admins" though.  We're talking about
> everyday users using containers.  And they're not building their own
> misconfigured kernels.  It happens, certainly while using the development
> release, that you get a kernel for which the apparmor set wasn't ready
> yet and mount restrictions weren't ready.
> 
> Maybe the patch should be modified to only allow the container to
> proceed if cap_sys_admin is being dropped.

So if I _want_ an insecure container with cap_sys_admin (for whatever
reason like testing or development - and yes sometimes I might want
this!) you'd force me to install an apparmor mount supported kernel
where i'd comment out the mount rules in the apparmor profile? Just to
make that thing start?

Just because there's a feature in the kernel (and that's all your stat
checks for) doesn't mean that the other end of the system that's
responsible for enforcing/using it really does use it.  This test
implies security where there is none.

I don't think a readable /proc/kcore inside a container or access to
dmesg is very secure either - as in the default config.
I could mount proc on /proc_insecure and create whatever /dev/ nodes I
like anywhere I want and lxc wouldn't warn me about this at all.
But you wouldn't allow me to start a container if the _kernel_ lacks
aa-mount support and i don't drop cap_sys_admin? Really?

This test belongs in lxc-checkconfig and should print out a big fat
warning - right now it's not even mentioned there.

Regards,
  Tom


Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-05 Thread Serge Hallyn
Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> On Tuesday, 05.08.2014 at 16:07, Serge Hallyn wrote:
> 
> > What you say makes sense.  What do you think of the following (untested)
> > patch?
> > 
> > From 05864ae7f8b42724fb15ddea8a6d3d3ea9cf8749 Mon Sep 17 00:00:00 2001
> > From: Serge Hallyn 
> > Date: Tue, 5 Aug 2014 11:01:55 -0500
> > Subject: [PATCH 1/1] apparmor: only warn if mount restrictions lacking
> > 
> > Up to now we've refused to load apparmor profiles if mount
> > restrictions are missing.  With this patch, we'll only warn
> > but continue loading the profile.
> > 
> > Lack of mount restrictions allows malicious container users
> > to work around file restrictions by say remounting /proc.
> > However, as Tom points out containers with no cap_sys_admin
> > are not vulnerable to this.  So it doesn't make sense to not
> > allow them to use apparmor as well.
> > 
> > Reported-by: Tom Weber 
> > Signed-off-by: Serge Hallyn 
> > ---
> >  src/lxc/lsm/apparmor.c | 6 --
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/src/lxc/lsm/apparmor.c b/src/lxc/lsm/apparmor.c
> > index f4c8d26..e730aba 100644
> > --- a/src/lxc/lsm/apparmor.c
> > +++ b/src/lxc/lsm/apparmor.c
> > @@ -48,8 +48,10 @@ static int apparmor_enabled(void)
> > int ret;
> >  
> > ret = stat(AA_MOUNT_RESTR, &statbuf);
> > -   if (ret != 0)
> > -   return 0;
> > +   if (ret != 0) {
> > +   WARN("WARNING: Apparmor ount restrictions missing from kernel");
> > +   WARN("WARNING: mount restrictions will not be enforced");
> > +   }
> > fin = fopen(AA_ENABLED_FILE, "r");
> > if (!fin)
> > return 0;
> 
> The patch works in that the container starts and the apparmor
> profile is set.
> But I can't find the Warning message anywhere (tried lxc-start -n webv1
> -d -l DEBUG) - but maybe that's a more general problem. Oh, and there is
> a typo: Apparmor ount
> 
> My opinion as an admin is that this check isn't needed in lxc itself.
> Apparmor spits a warning during aa lxc-profile loading - sane admins
> wouldn't ignore this.

We're not just talking about "sane admins" though.  We're talking about
everyday users using containers.  And they're not building their own
misconfigured kernels.  It happens, certainly while using the development
release, that you get a kernel for which the apparmor set wasn't ready
yet and mount restrictions weren't ready.

Maybe the patch should be modified to only allow the container to
proceed if cap_sys_admin is being dropped.

Or maybe it's fine as is.  I'm feeling undecided.
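
For reference, dropping that capability is a one-line config change (a sketch,
using the standard lxc.cap.drop key):

  # drop CAP_SYS_ADMIN so the container cannot remount /proc and friends
  lxc.cap.drop = sys_admin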

> If one messes with the aa lxc-profiles and disables the mount
> restrictions there, your check won't help (or report) anything - even on
> a kernel with the mount restriction patch.
> All you can do is provide sane aa profiles in the lxc package - the rest
> is aa-related business, not lxc-related.
> But that's just my opinion.
> 
> Thanks a lot for the quick patch!
>   Tom
> 
> 
> 

Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-05 Thread Tom Weber
On Tuesday, 05.08.2014 at 16:07, Serge Hallyn wrote:

> What you say makes sense.  What do you think of the following (untested)
> patch?
> 
> From 05864ae7f8b42724fb15ddea8a6d3d3ea9cf8749 Mon Sep 17 00:00:00 2001
> From: Serge Hallyn 
> Date: Tue, 5 Aug 2014 11:01:55 -0500
> Subject: [PATCH 1/1] apparmor: only warn if mount restrictions lacking
> 
> Up to now we've refused to load apparmor profiles if mount
> restrictions are missing.  With this patch, we'll only warn
> but continue loading the profile.
> 
> Lack of mount restrictions allows malicious container users
> to work around file restrictions by say remounting /proc.
> However, as Tom points out containers with no cap_sys_admin
> are not vulnerable to this.  So it doesn't make sense to not
> allow them to use apparmor as well.
> 
> Reported-by: Tom Weber 
> Signed-off-by: Serge Hallyn 
> ---
>  src/lxc/lsm/apparmor.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/src/lxc/lsm/apparmor.c b/src/lxc/lsm/apparmor.c
> index f4c8d26..e730aba 100644
> --- a/src/lxc/lsm/apparmor.c
> +++ b/src/lxc/lsm/apparmor.c
> @@ -48,8 +48,10 @@ static int apparmor_enabled(void)
>   int ret;
>  
>   ret = stat(AA_MOUNT_RESTR, &statbuf);
> - if (ret != 0)
> - return 0;
> + if (ret != 0) {
> + WARN("WARNING: Apparmor ount restrictions missing from kernel");
> + WARN("WARNING: mount restrictions will not be enforced");
> + }
>   fin = fopen(AA_ENABLED_FILE, "r");
>   if (!fin)
>   return 0;

The patch works in that the container starts and the apparmor
profile is set.
But I can't find the Warning message anywhere (tried lxc-start -n webv1
-d -l DEBUG) - but maybe that's a more general problem. Oh, and there is
a typo: Apparmor ount

My opinion as an admin is that this check isn't needed in lxc itself.
Apparmor spits a warning during aa lxc-profile loading - sane admins
wouldn't ignore this.
If one messes with the aa lxc-profiles and disables the mount
restrictions there, your check won't help (or report) anything - even on
a kernel with the mount restriction patch.
All you can do is provide sane aa profiles in the lxc package - the rest
is aa-related business, not lxc-related.
But that's just my opinion.

Thanks a lot for the quick patch!
  Tom




Re: [lxc-users] lxc-start fails at apparmor detection

2014-08-05 Thread Serge Hallyn
Quoting Tom Weber (l_lxc-us...@mail2news.4t2.com):
> Hello,
> 
> my setup: 
> debian7 
> lxc-1.0.4 from debian testing
> vanilla kernel.org kernel 3.14.14
> 
> I'm new to lxc and apparmor, so this took me a couple of hours to
> figure out:
> lxc-start won't assign an apparmor profile to a container since its
> test for apparmor will always fail on my setup:
> in src/lxc/lsm/apparmor:
> the apparmor_enabled() tests for AA_MOUNT_RESTR
> (/sys/kernel/security/apparmor/features/mount/mask) first, which will
> never exist without that apparmor mount patch in the kernel. 
> 
> commenting out that test gives me apparmor functionality (except for
> that mount feature of course).
> 
> Is that intentional or just an ancient relic? 
> I'd prefer to have apparmor profile support without mount restrictions
> over no apparmor profile support at all. apparmor gives me warnings
> like: 
> 
> Warning from /etc/apparmor.d/lxc-containers (/etc/apparmor.d/lxc-containers 
> line 8): profile lxc-container-default mount rules not enforced
> 
> when starting up, which is what I expect and something I can deal with
> as admin. I think lxc-start should activate the requested profile
> anyway.
> 
> Oh, and a little log message about whether lxc-start detected apparmor or
> not and activated it would be _very_ helpful :)
> 
> related question: dropping sys_admin cap for the container should render
> all the mount protections from apparmor unnecessary, right?

What you say makes sense.  What do you think of the following (untested)
patch?

From 05864ae7f8b42724fb15ddea8a6d3d3ea9cf8749 Mon Sep 17 00:00:00 2001
From: Serge Hallyn 
Date: Tue, 5 Aug 2014 11:01:55 -0500
Subject: [PATCH 1/1] apparmor: only warn if mount restrictions lacking

Up to now we've refused to load apparmor profiles if mount
restrictions are missing.  With this patch, we'll only warn
but continue loading the profile.

Lack of mount restrictions allows malicious container users
to work around file restrictions by say remounting /proc.
However, as Tom points out containers with no cap_sys_admin
are not vulnerable to this.  So it doesn't make sense to not
allow them to use apparmor as well.

Reported-by: Tom Weber 
Signed-off-by: Serge Hallyn 
---
 src/lxc/lsm/apparmor.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/lxc/lsm/apparmor.c b/src/lxc/lsm/apparmor.c
index f4c8d26..e730aba 100644
--- a/src/lxc/lsm/apparmor.c
+++ b/src/lxc/lsm/apparmor.c
@@ -48,8 +48,10 @@ static int apparmor_enabled(void)
int ret;
 
ret = stat(AA_MOUNT_RESTR, &statbuf);
-   if (ret != 0)
-   return 0;
+   if (ret != 0) {
+   WARN("WARNING: Apparmor ount restrictions missing from kernel");
+   WARN("WARNING: mount restrictions will not be enforced");
+   }
fin = fopen(AA_ENABLED_FILE, "r");
if (!fin)
return 0;
-- 
2.0.1
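
To try a patch posted like this, one straightforward route (a sketch; it
assumes a git checkout of the lxc sources, with the mail saved as patch.mbox)
is:

  git am patch.mbox          # or apply just the diff with: patch -p1
  ./autogen.sh && ./configure && make && sudo make install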


[lxc-users] lxc-start fails at apparmor detection

2014-08-05 Thread Tom Weber
Hello,

my setup: 
debian7 
lxc-1.0.4 from debian testing
vanilla kernel.org kernel 3.14.14

I'm new to lxc and apparmor, so this took me a couple of hours to
figure out:
lxc-start won't assign an apparmor profile to a container since its
test for apparmor will always fail on my setup:
in src/lxc/lsm/apparmor:
the apparmor_enabled() tests for AA_MOUNT_RESTR
(/sys/kernel/security/apparmor/features/mount/mask) first, which will
never exist without that apparmor mount patch in the kernel. 
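
For reference, you can check which case a host falls into by looking for that
feature file directly; it only exists on kernels carrying the apparmor mount
patch (and this assumes securityfs is mounted at /sys/kernel/security):

  ls /sys/kernel/security/apparmor/features/mount/mask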

commenting out that test gives me apparmor functionality (except for
that mount feature of course).

Is that intentional or just an ancient relic? 
I'd prefer to have apparmor profile support without mount restrictions
over no apparmor profile support at all. apparmor gives me warnings
like: 

Warning from /etc/apparmor.d/lxc-containers (/etc/apparmor.d/lxc-containers 
line 8): profile lxc-container-default mount rules not enforced

when starting up, which is what I expect and something I can deal with
as admin. I think lxc-start should activate the requested profile
anyway.

Oh, and a little log message about whether lxc-start detected apparmor or
not and activated it would be _very_ helpful :)

related question: dropping sys_admin cap for the container should render
all the mount protections from apparmor unnecessary, right?

Regards,
  Tom




