Re: [lxc-users] using cgroups

2016-07-28 Thread Rob Edgerton


 On Thursday, 30 June 2016, 10:36, Serge E. Hallyn  wrote:
 

 Quoting Rob Edgerton (redger...@yahoo.com.au):
> hi, I have the same problem (cgroups not working as expected) on a clean
> Xenial build (lxc PPA NOT installed, LXD not installed). In my case I have some
> Ubuntu Trusty containers I really need to use on Xenial, but they won't start
> because I use cgroups. If I change the existing containers to remove the
> "lxc.cgroup" clauses from config they start, but not otherwise. Similarly, I
> created a new Xenial container for testing. It works, until I add
> "lxc.cgroup" clauses, at which point it also fails to start.
> @virt-host:~$ lxc-start -n trusty_unp_ibvpn -F -l debug -o lxc.log
> lxc-start: cgfsng.c: cgfsng_setup_limits: 1662 No such file or directory - 
> Error setting cpuset.cpus to 1-3 for trusty_unp_ibvpn
> lxc-start: start.c: lxc_spawn: 1180 failed to setup the cgroup limits for 
> 'trusty_unp_ibvpn'
> lxc-start: start.c: __lxc_start: 1353 failed to spawn 'trusty_unp_ibvpn'
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
> setting the --logfile and --logpriority  options.
> 
> Logfile Contents=
>   lxc-start 20160628155820.562 INFO lxc_start_ui - 
> lxc_start.c:main:264 - using rcfile 
> /mnt/lxc_images/containers/trusty_unp_ibvpn/config
>   lxc-start 20160628155820.562 WARN lxc_confile - 
> confile.c:config_pivotdir:1879 - lxc.pivotdir is ignored.  It will soon 
> become an error.
>   lxc-start 20160628155820.562 INFO lxc_confile - 
> confile.c:config_idmap:1500 - read uid map: type u nsid 0 hostid 10 range 
> 65536
>   lxc-start 20160628155820.562 INFO lxc_confile - 
> confile.c:config_idmap:1500 - read uid map: type g nsid 0 hostid 10 range 
> 65536
>   lxc-start 20160628155820.564 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
> LSM security driver AppArmor
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
> this to allow umount -f;  not recommended.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
> action 0
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force 
> umounts
> 
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
> action 0
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force 
> umounts
> 
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .[all].
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at 
> action 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at 
> action 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 
> 327681
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
>   lxc-start 20160628155820.564 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 
> 327681
>   lxc-start 20160628155820.565 INFO lxc_seccomp - 
> seccomp.c:parse_config_v2:449 - Adding compat rule for 

Re: [lxc-users] using cgroups

2016-07-01 Thread rob e

On 02/07/16 13:40, Serge E. Hallyn wrote:

On Sat, Jul 02, 2016 at 01:24:44PM +1000, rob e wrote:

On 02/07/16 12:41, Serge E. Hallyn wrote:

Quoting rob e (redger...@yahoo.com.au):

On 02/07/16 12:14, Serge E. Hallyn wrote:

hi Serge,
with JUST those clauses (and no cgroup set clauses) ... it sort of
works. Initial messages are cleared from the console(?) leaving just
the shutdown messages. But it does get to a login prompt

D'oh.  Thanks for your patience.  I see the bug.  I'll post a
PR for a fix.  I'm surprised so few people run into this.  But
as a workaround just add ",devices" to the end of the pam_cgfs
line in /etc/pam.d/common-session.


sorry about this ... didn't work. Tried 2 forms of Pam clause & 2
forms of config

--
PAM line
session optionalpam_cgfs.so -c
freezer,memory,name=systemd,cpuset,devices

Just to make sure, did you log back in after this? What does /proc/self/cgroup
look like?



hmmm ... Now I tried the TAP TUN device (for openvpn & proxy server)
 FAILED .. on CPUSET

Nope, cpu and cpuset are actually two different controllers.  It's failing on
cpu.shares in the cpu controller.

Note, I think you'll be happiest if you just drop the "-c x" from
/etc/pam.d/common-session.  That will tell pam_cgfs to use all controllers.

-serge
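For anyone following along, with the "-c <list>" argument dropped the PAM line reduces to just the module (a sketch; spacing as in the stock Ubuntu file):

```
# /etc/pam.d/common-session
# no "-c <controllers>" argument: pam_cgfs manages all mounted cgroup v1 controllers
session optional        pam_cgfs.so
```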

ok, tried to pass through USB-DVB devices. This worked in Trusty using 
the same config, but not on Xenial. Again, AppArmor is intervening. The 
container starts ok, but doesn't map the /dev/dvb devices in (even though 
I had previously bind mounted /dev/dvb into the container, as was working 
in Trusty)


sudo mount --bind /dev/dvb 
/mnt/lxc_images/containers/trusty-mythserver/rootfs/dev/dvb/
sudo chown -R xxx:xxx 
/mnt/lxc_images/containers/trusty-mythserver/rootfs/dev/dvb/


then look for devices in the container - nothing found :(

$ lxc-start -n trusty-mythserver
$ lxc-attach -n trusty-mythserver

root@trusty-mythserver:~#
root@trusty-mythserver:~# ls /dev/dvb
root@trusty-mythserver:~#
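A bind mount made by hand on the host usually disappears at container start, because LXC populates the container's /dev itself. A config-driven entry survives restarts; a sketch, assuming the standard lxc.mount.entry and devices-cgroup syntax (major 212 is the usual DVB major, matching the clause quoted elsewhere in this thread):

```
# container config: recreate the /dev/dvb bind mount at every start
lxc.mount.entry = /dev/dvb dev/dvb none bind,optional,create=dir 0 0
# and let the devices cgroup pass the DVB character devices through
lxc.cgroup.devices.allow = c 212:* rwm
```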

---
Syslog elements

Jul  2 15:09:17 virt-host libvirtd[32021]: Failed to open file 
'/sys/class/net/veth1XDS50p/operstate': No such file or directory
Jul  2 15:09:17 virt-host libvirtd[32021]: unable to read: 
/sys/class/net/veth1XDS50p/operstate: No such file or directory
Jul  2 15:09:17 virt-host kernel: [114010.904958] audit_printk_skb: 47 
callbacks suppressed
Jul  2 15:09:17 virt-host kernel: [114010.904960] audit: type=1400 
audit(1467436157.402:1273): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28339 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs"
Jul  2 15:09:17 virt-host kernel: [114010.904994] audit: type=1400 
audit(1467436157.402:1274): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28339 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul  2 15:09:17 virt-host kernel: [114011.015576] audit: type=1400 
audit(1467436157.514:1275): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28498 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs"
Jul  2 15:09:17 virt-host kernel: [114011.015604] audit: type=1400 
audit(1467436157.514:1276): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28498 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul  2 15:09:17 virt-host kernel: [114011.053063] audit: type=1400 
audit(1467436157.550:1277): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28552 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs"
Jul  2 15:09:17 virt-host kernel: [114011.053100] audit: type=1400 
audit(1467436157.550:1278): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28552 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul  2 15:09:17 virt-host kernel: [114011.077650] audit: type=1400 
audit(1467436157.574:1279): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28584 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs"
Jul  2 15:09:17 virt-host kernel: [114011.077686] audit: type=1400 
audit(1467436157.574:1280): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" name="/run/rpc_pipefs/" 
pid=28584 comm="mount" fstype="rpc_pipefs" srcname="rpc_pipefs" flags="ro"
Jul  2 15:09:17 virt-host 
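The repeated rpc_pipefs denials above are the container's AppArmor profile rejecting that filesystem type. A hedged workaround, assuming the stock Ubuntu layout where the lxc-container-default-with-mounting profile is defined under /etc/apparmor.d/lxc/, is to allow the mount explicitly:

```
# /etc/apparmor.d/lxc/lxc-default-with-mounting (inside the profile body)
mount fstype=rpc_pipefs,
```

then reload with something like `sudo apparmor_parser -r /etc/apparmor.d/lxc-containers` before restarting the container.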

Re: [lxc-users] using cgroups

2016-07-01 Thread rob e



On 02/07/16 13:40, Serge E. Hallyn wrote:

On Sat, Jul 02, 2016 at 01:24:44PM +1000, rob e wrote:

On 02/07/16 12:41, Serge E. Hallyn wrote:

Quoting rob e (redger...@yahoo.com.au):

On 02/07/16 12:14, Serge E. Hallyn wrote:

hi Serge,
with JUST those clauses (and no cgroup set clauses) ... it sort of
works. Initial messages are cleared from the console(?) leaving just
the shutdown messages. But it does get to a login prompt

D'oh.  Thanks for your patience.  I see the bug.  I'll post a
PR for a fix.  I'm surprised so few people run into this.  But
as a workaround just add ",devices" to the end of the pam_cgfs
line in /etc/pam.d/common-session.


sorry about this ... didn't work. Tried 2 forms of Pam clause & 2
forms of config

--
PAM line
session optionalpam_cgfs.so -c
freezer,memory,name=systemd,cpuset,devices

Just to make sure, did you log back in after this? What does /proc/self/cgroup
look like?



hmmm ... Now I tried the TAP TUN device (for openvpn & proxy server)
 FAILED .. on CPUSET

Nope, cpu and cpuset are actually two different controllers.  It's failing on
cpu.shares in the cpu controller.

Note, I think you'll be happiest if you just drop the "-c x" from
/etc/pam.d/common-session.  That will tell pam_cgfs to use all controllers.

-serge


That was Better !  CPU and Memory constraints now don't cause failure :)

---
Tried VPN ... TAP / TUN   FAILED. Container starts, but unable to create 
device (where this worked on Trusty)


openvpn will not start ... looks like an AppArmor issue. Is this your 
department ?
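For reference, the TUN plumbing that typically has to be present in the container config for OpenVPN (a sketch; the devices clause matches the one used elsewhere in this thread, and the bind mount assumes /dev/net/tun exists on the host):

```
# container config: allow and provide the TUN device
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
```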


messages on host syslog.log

Jul  2 14:21:35 virt-host kernel: [48.961739] IPv6: 
ADDRCONF(NETDEV_CHANGE): vethS3C86K: link becomes ready
Jul  2 14:21:35 virt-host kernel: [48.961777] lxcbr0: port 
3(vethS3C86K) entered forwarding state
Jul  2 14:21:35 virt-host kernel: [48.961785] lxcbr0: port 
3(vethS3C86K) entered forwarding state
Jul  2 14:21:35 virt-host kernel: [49.061396] audit: type=1400 
audit(1467433295.584:1118): apparmor="DENIED" operation="mount" 
info="failed flags match" error=-13 
profile="lxc-container-default-with-mounting" name="/" pid=25762 
comm="cgmanager" flags="rw, rprivate"
Jul  2 14:21:35 virt-host kernel: [49.061437] audit: type=1400 
audit(1467433295.584:1119): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/blkio/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="blkio"
Jul  2 14:21:35 virt-host kernel: [49.061447] audit: type=1400 
audit(1467433295.584:1120): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/cpu/" pid=25762 comm="cgmanager" fstype="cgroup" 
srcname="cpu"
Jul  2 14:21:35 virt-host kernel: [49.061457] audit: type=1400 
audit(1467433295.584:1121): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/cpuacct/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="cpuacct"
Jul  2 14:21:35 virt-host kernel: [49.061466] audit: type=1400 
audit(1467433295.584:1122): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/cpuset/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="cpuset"
Jul  2 14:21:35 virt-host kernel: [49.061475] audit: type=1400 
audit(1467433295.584:1123): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/devices/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="devices"
Jul  2 14:21:35 virt-host kernel: [49.061484] audit: type=1400 
audit(1467433295.584:1124): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/freezer/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="freezer"
Jul  2 14:21:35 virt-host kernel: [49.061492] audit: type=1400 
audit(1467433295.584:1125): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/hugetlb/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="hugetlb"
Jul  2 14:21:35 virt-host kernel: [49.061501] audit: type=1400 
audit(1467433295.584:1126): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 
profile="lxc-container-default-with-mounting" 
name="/run/cgmanager/fs/memory/" pid=25762 comm="cgmanager" 
fstype="cgroup" srcname="memory"
Jul  2 14:21:35 virt-host kernel: [49.061510] audit: type=1400 
audit(1467433295.584:1127): apparmor="DENIED" operation="mount" 
info="failed type match" error=-13 

Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
On Sat, Jul 02, 2016 at 01:24:44PM +1000, rob e wrote:
> On 02/07/16 12:41, Serge E. Hallyn wrote:
> >Quoting rob e (redger...@yahoo.com.au):
> >>On 02/07/16 12:14, Serge E. Hallyn wrote:
> hi Serge,
> with JUST those clauses (and no cgroup set clauses) ... it sort of
> works. Initial messages are cleared from the console(?) leaving just
> the shutdown messages. But it does get to a login prompt
> >>>D'oh.  Thanks for your patience.  I see the bug.  I'll post a
> >>>PR for a fix.  I'm surprised so few people run into this.  But
> >>>as a workaround just add ",devices" to the end of the pam_cgfs
> >>>line in /etc/pam.d/common-session.
> >>>
> >>sorry about this ... didn't work. Tried 2 forms of Pam clause & 2
> >>forms of config
> >>
> >>--
> >>PAM line
> >>session optionalpam_cgfs.so -c
> >>freezer,memory,name=systemd,cpuset,devices
> >Just to make sure, did you log back in after this? What does 
> >/proc/self/cgroup
> >look like?
> >
> >
> 
> hmmm ... Now I tried the TAP TUN device (for openvpn & proxy server)
>  FAILED .. on CPUSET

Nope, cpu and cpuset are actually two different controllers.  It's failing on
cpu.shares in the cpu controller.

Note, I think you'll be happiest if you just drop the "-c x" from
/etc/pam.d/common-session.  That will tell pam_cgfs to use all controllers.

-serge
___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users

Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
Quoting rob e (redger...@yahoo.com.au):
> 
> On 02/07/16 12:14, Serge E. Hallyn wrote:
> >>hi Serge,
> >>with JUST those clauses (and no cgroup set clauses) ... it sort of
> >>works. Initial messages are cleared from the console(?) leaving just
> >>the shutdown messages. But it does get to a login prompt
> >D'oh.  Thanks for your patience.  I see the bug.  I'll post a
> >PR for a fix.  I'm surprised so few people run into this.  But
> >as a workaround just add ",devices" to the end of the pam_cgfs
> >line in /etc/pam.d/common-session.
> >
> sorry about this ... didn't work. Tried 2 forms of Pam clause & 2
> forms of config
> 
> --
> PAM line
> session optionalpam_cgfs.so -c
> freezer,memory,name=systemd,cpuset,devices

Just to make sure, did you log back in after this? What does /proc/self/cgroup
look like?


Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
Quoting Serge E. Hallyn (se...@hallyn.com):
> > hi Serge,
> > with JUST those clauses (and no cgroup set clauses) ... it sort of
> > works. Initial messages are cleared from the console(?) leaving just
> > the shutdown messages. But it does get to a login prompt
> 
> D'oh.  Thanks for your patience.  I see the bug.  I'll post a
> PR for a fix.  I'm surprised so few people run into this.  But
> as a workaround just add ",devices" to the end of the pam_cgfs
> line in /etc/pam.d/common-session.

https://github.com/lxc/lxc/pull/1070

Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
> hi Serge,
> with JUST those clauses (and no cgroup set clauses) ... it sort of
> works. Initial messages are cleared from the console(?) leaving just
> the shutdown messages. But it does get to a login prompt

D'oh.  Thanks for your patience.  I see the bug.  I'll post a
PR for a fix.  I'm surprised so few people run into this.  But
as a workaround just add ",devices" to the end of the pam_cgfs
line in /etc/pam.d/common-session.

Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
Quoting rob e (redger...@yahoo.com.au):
> On 02/07/16 01:02, Serge E. Hallyn wrote:
> >Quoting rob e (redger...@yahoo.com.au):
> >>On 01/07/16 10:58, Serge E. Hallyn wrote:
> >>>Quoting rob e (redger...@yahoo.com.au):
> >>>Let's address them one at a time.  For starters,
> >>>
> >>>if you only leave in the
> >>>   lxc.cgroup.cpuset.cpus = 1-3
> >>>does that now work?  If not, please post the log output to show exactly
> >>>how it fails.
> >>>And if you only have
> >>>   lxc.cgroup.memory.limit_in_bytes = 4G
> >>>how does that fail, exactly?
> >>>
> >>>Also, what is /proc/self/cgroup now when you login?
> >>>
> >>hi Serge,
> >>thanks for the response, data follows
> >>
> >Wait, why is it still showing this error?  You don't
> >have any lxc.cgroup.devices in the above config!
> >
> >Can you please show
> >
> >/usr/share/lxc/config/ubuntu.common.conf
> >/usr/share/lxc/config/ubuntu.userns.conf
> >
> >?
> okey dokes, here they are (plus the direct "include" elements)

Thanks.  Yeah, this is making no sense.  There should be no
lxc.cgroup.devices.*.  Can you add

lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =

to the end of your config and try again?

Re: [lxc-users] using cgroups

2016-07-01 Thread Serge E. Hallyn
Quoting rob e (redger...@yahoo.com.au):
> On 01/07/16 10:58, Serge E. Hallyn wrote:
> >Quoting rob e (redger...@yahoo.com.au):
> >Let's address them one at a time.  For starters,
> >
> >if you only leave in the
> > lxc.cgroup.cpuset.cpus = 1-3
> >does that now work?  If not, please post the log output to show exactly
> >how it fails.
> >And if you only have
> > lxc.cgroup.memory.limit_in_bytes = 4G
> >how does that fail, exactly?
> >
> >Also, what is /proc/self/cgroup now when you login?
> >
> hi Serge,
> thanks for the response, data follows
> 
> --
> From "my" session
> $ cat /proc/self/cgroup
> 11:blkio:/user.slice
> 10:hugetlb:/
> 9:freezer:/user/redger/2
> 8:pids:/user.slice/user-1000.slice
> 7:memory:/user/redger/2
> 6:cpu,cpuacct:/user.slice
> 5:net_cls,net_prio:/
> 4:perf_event:/
> 3:cpuset:/user/redger/2
> 2:devices:/user.slice
> 1:name=systemd:/user.slice/user-1000.slice/session-11.scope
> 
> --
> Lines from PAM (original commented and new line inserted)
> #sessionoptionalpam_cgfs.so -c freezer,memory,name=systemd
> session optionalpam_cgfs.so -c freezer,memory,name=systemd,cpuset
> 
> --
> current config for test system - Note Memory limit (only), no other
> cgroup usage
> # Template used to create this container:
> /usr/share/lxc/templates/lxc-download
> # Parameters passed to the template: -d ubuntu -r trusty -a amd64
> # For additional config options, please look at lxc.container.conf(5)
> 
> # Distribution configuration
> lxc.include = /usr/share/lxc/config/ubuntu.common.conf
> lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
> lxc.arch = x86_64
> 
> # Container specific configuration
> lxc.id_map = u 0 10 65536
> lxc.id_map = g 0 10 65536
> lxc.rootfs = /mnt/lxc_images/containers/xenial_test_01/rootfs
> lxc.rootfs.backend = dir
> lxc.utsname = xenial_test_01
> 
> # Network configuration
> lxc.network.type = veth
> lxc.network.link = lxcbr0
> lxc.network.flags = up
> lxc.network.hwaddr = 00:16:3e:19:3c:15
> 
> ## Set resource limits   - Cause problems in Xenial
> lxc.cgroup.memory.limit_in_bytes = 4G
> 
> --
> And the result of starting (copied and pasted from konsole)
> $ lxc-start -n xenial_test_01 -F -o lxc_test.log -l debug
> lxc-start: cgfsng.c: cgfsng_setup_limits: 1645 No devices cgroup
> setup for xenial_test_01

Wait, why is it still showing this error?  You don't
have any lxc.cgroup.devices in the above config!

Can you please show

/usr/share/lxc/config/ubuntu.common.conf
/usr/share/lxc/config/ubuntu.userns.conf

?

Re: [lxc-users] using cgroups

2016-06-30 Thread rob e

On 01/07/16 10:58, Serge E. Hallyn wrote:

Quoting rob e (redger...@yahoo.com.au):


thanks Serge,
I tried that. Same result. Additionally, even when I comment out the
CPU controls, leaving only Memory limits, it still fails.

To confirm, I have 3 uses for cgroups -
1)  Resource control on CPU, Memory, Disk, Network etc eg.
 lxc.cgroup.cpuset.cpus = 1-3
 lxc.cgroup.memory.limit_in_bytes = 4G

Let's address them one at a time.  For starters,

if you only leave in the
lxc.cgroup.cpuset.cpus = 1-3
does that now work?  If not, please post the log output to show exactly
how it fails.
And if you only have
lxc.cgroup.memory.limit_in_bytes = 4G
how does that fail, exactly?

Also, what is /proc/self/cgroup now when you login?


hi Serge,
thanks for the response, data follows - CPU limits set this time

--
From "my" session
$ cat /proc/self/cgroup
11:blkio:/user.slice
10:hugetlb:/
9:freezer:/user/redger/2
8:pids:/user.slice/user-1000.slice
7:memory:/user/redger/2
6:cpu,cpuacct:/user.slice
5:net_cls,net_prio:/
4:perf_event:/
3:cpuset:/user/redger/2
2:devices:/user.slice
1:name=systemd:/user.slice/user-1000.slice/session-11.scope
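A quick way to read a listing like the one above: the controllers whose cgroup path sits under /user/<name>/<n> are the ones pam_cgfs delegated to the session; anything still at /user.slice (here cpu and devices) was not delegated, which is consistent with the failures. A sketch over the sample above:

```shell
# cgroup listing as posted above (trimmed to the interesting lines)
sample='9:freezer:/user/redger/2
8:pids:/user.slice/user-1000.slice
7:memory:/user/redger/2
6:cpu,cpuacct:/user.slice
3:cpuset:/user/redger/2
2:devices:/user.slice'

# controllers delegated by pam_cgfs (path under /user/...); on a live
# system, feed it "cat /proc/self/cgroup" instead of the sample
owned=$(printf '%s\n' "$sample" | awk -F: '$3 ~ /^\/user\//{print $2}')
echo "$owned"
```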

--
Lines from PAM (original commented and new line inserted)
#sessionoptionalpam_cgfs.so -c freezer,memory,name=systemd
session optionalpam_cgfs.so -c freezer,memory,name=systemd,cpuset

--
current config for test system - Note CPU limit (only), no other cgroup 
usage
# Template used to create this container: 
/usr/share/lxc/templates/lxc-download

# Parameters passed to the template: -d ubuntu -r trusty -a amd64
# For additional config options, please look at lxc.container.conf(5)

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.rootfs = /mnt/lxc_images/containers/xenial_test_01/rootfs
lxc.rootfs.backend = dir
lxc.utsname = xenial_test_01

# Network configuration
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:19:3c:15

## Set resource limits   - Cause problems in Xenial
lxc.cgroup.cpuset.cpus = 1-3


--
And the result of starting (copied and pasted from konsole)
$ lxc-start -n xenial_test_01 -F -o lxc_test_cpu_160701a.log -l debug
lxc-start: cgfsng.c: cgfsng_setup_limits: 1645 No devices cgroup setup 
for xenial_test_01
lxc-start: start.c: lxc_spawn: 1226 failed to setup the devices cgroup 
for 'xenial_test_01'

lxc-start: start.c: __lxc_start: 1353 failed to spawn 'xenial_test_01'
  lxc-start: lxc_start.c: main: 344 
The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained 
by setting the --logfile and --logpriority options.


--
Last few lines of the log file
  lxc-start 20160701033803.685 DEBUGlxc_conf - 
conf.c:setup_caps:2056 - drop capability 'sys_rawio' (17)
  lxc-start 20160701033803.685 DEBUGlxc_conf - 
conf.c:setup_caps:2065 - capabilities have been setup
  lxc-start 20160701033803.685 NOTICE   lxc_conf - 
conf.c:lxc_setup:3839 - 'xenial_test_01' is setup.
  lxc-start 20160701133803.685 ERRORlxc_cgfsng - 
cgfsng.c:cgfsng_setup_limits:1645 - No devices cgroup setup for 
xenial_test_01
  lxc-start 20160701133803.685 ERRORlxc_start - 
start.c:lxc_spawn:1226 - failed to setup the devices cgroup for 
'xenial_test_01'
  lxc-start 20160701133803.685 ERRORlxc_start - 
start.c:__lxc_start:1353 - failed to spawn 'xenial_test_01'
  lxc-start 20160701133803.721 INFO lxc_conf - 
conf.c:run_script_argv:367 - Executing script 
'/usr/share/lxcfs/lxc.reboot.hook' for container 'xenial_test_01', 
config section 'lxc'
  lxc-start 20160701133804.232 ERRORlxc_start_ui - 
lxc_start.c:main:344 - The container failed to start.
  lxc-start 20160701133804.233 ERRORlxc_start_ui - 
lxc_start.c:main:348 - Additional information can be obtained by setting 
the --logfile and --logpriority options.


I can attach the logfile if that helps, though it may delay the email due 
to size




Re: [lxc-users] using cgroups

2016-06-30 Thread rob e

On 01/07/16 10:58, Serge E. Hallyn wrote:

Quoting rob e (redger...@yahoo.com.au):


thanks Serge,
I tried that. Same result. Additionally, even when I comment out the
CPU controls, leaving only Memory limits, it still fails.

To confirm, I have 3 uses for cgroups -
1)  Resource control on CPU, Memory, Disk, Network etc eg.
 lxc.cgroup.cpuset.cpus = 1-3
 lxc.cgroup.memory.limit_in_bytes = 4G

Let's address them one at a time.  For starters,

if you only leave in the
lxc.cgroup.cpuset.cpus = 1-3
does that now work?  If not, please post the log output to show exactly
how it fails.
And if you only have
lxc.cgroup.memory.limit_in_bytes = 4G
how does that fail, exactly?

Also, what is /proc/self/cgroup now when you login?


hi Serge,
thanks for the response, data follows

--
From "my" session
$ cat /proc/self/cgroup
11:blkio:/user.slice
10:hugetlb:/
9:freezer:/user/redger/2
8:pids:/user.slice/user-1000.slice
7:memory:/user/redger/2
6:cpu,cpuacct:/user.slice
5:net_cls,net_prio:/
4:perf_event:/
3:cpuset:/user/redger/2
2:devices:/user.slice
1:name=systemd:/user.slice/user-1000.slice/session-11.scope

--
Lines from PAM (original commented and new line inserted)
#sessionoptionalpam_cgfs.so -c freezer,memory,name=systemd
session optionalpam_cgfs.so -c freezer,memory,name=systemd,cpuset

--
current config for test system - Note Memory limit (only), no other 
cgroup usage
# Template used to create this container: 
/usr/share/lxc/templates/lxc-download

# Parameters passed to the template: -d ubuntu -r trusty -a amd64
# For additional config options, please look at lxc.container.conf(5)

# Distribution configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
lxc.arch = x86_64

# Container specific configuration
lxc.id_map = u 0 10 65536
lxc.id_map = g 0 10 65536
lxc.rootfs = /mnt/lxc_images/containers/xenial_test_01/rootfs
lxc.rootfs.backend = dir
lxc.utsname = xenial_test_01

# Network configuration
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:19:3c:15

## Set resource limits   - Cause problems in Xenial
lxc.cgroup.memory.limit_in_bytes = 4G

--
And the result of starting (copied and pasted from konsole)
$ lxc-start -n xenial_test_01 -F -o lxc_test.log -l debug
lxc-start: cgfsng.c: cgfsng_setup_limits: 1645 No devices cgroup setup 
for xenial_test_01
lxc-start: start.c: lxc_spawn: 1226 failed to setup the devices cgroup 
for 'xenial_test_01'

lxc-start: start.c: __lxc_start: 1353 failed to spawn 'xenial_test_01'
  lxc-start: lxc_start.c: main: 344 
The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained 
by setting the --logfile and --logpriority options.


--
Last few lines of the log file
  lxc-start 20160701032506.416 DEBUGlxc_conf - 
conf.c:setup_caps:2056 - drop capability 'sys_rawio' (17)
  lxc-start 20160701032506.416 DEBUGlxc_conf - 
conf.c:setup_caps:2065 - capabilities have been setup
  lxc-start 20160701032506.416 NOTICE   lxc_conf - 
conf.c:lxc_setup:3839 - 'xenial_test_01' is setup.
  lxc-start 20160701132506.417 ERRORlxc_cgfsng - 
cgfsng.c:cgfsng_setup_limits:1645 - No devices cgroup setup for 
xenial_test_01
  lxc-start 20160701132506.417 ERRORlxc_start - 
start.c:lxc_spawn:1226 - failed to setup the devices cgroup for 
'xenial_test_01'
  lxc-start 20160701132506.417 ERRORlxc_start - 
start.c:__lxc_start:1353 - failed to spawn 'xenial_test_01'
  lxc-start 20160701132506.449 INFO lxc_conf - 
conf.c:run_script_argv:367 - Executing script 
'/usr/share/lxcfs/lxc.reboot.hook' for container 'xenial_test_01', 
config section 'lxc'
  lxc-start 20160701132506.960 ERRORlxc_start_ui - 
lxc_start.c:main:344 - The container failed to start.
  lxc-start 20160701132506.960 ERRORlxc_start_ui - 
lxc_start.c:main:348 - Additional information can be obtained by setting 
the --logfile and --logpriority options.


I can attach the logfile if that helps, though it may delay the email due 
to size





Re: [lxc-users] using cgroups

2016-06-30 Thread Serge E. Hallyn
Quoting rob e (redger...@yahoo.com.au):
> On 30/06/16 11:35, Serge E. Hallyn wrote:
> >On Thu, Jun 30, 2016 at 11:24:25AM +1000, Rob wrote:
> >>On 30/06/2016 10:36 AM, Serge E. Hallyn wrote:
> >>>Quoting Rob Edgerton (redger...@yahoo.com.au):
> >Oh, ok.  I'm sorry, this should have been obvious to me from the start.
> >
> >You need to edit /etc/pam.d/common-session and change the line that's
> >something like
> >
> >session optional pam_cgfs.so -c freezer,memory,name=systemd
> >
> >to add ",cpuset" at the end, i.e.
> >
> >session optional pam_cgfs.so -c freezer,memory,name=systemd,cpuset
> >
> >It has been removed from the default because on systems which do a lot
> >of cpu hotplugging it can be a problem:  with the legacy (non-unified)
> >cpuset hierarchy, when you unplug a cpu that is part of /user, it gets
> >removed, but when you re-plug it it does not get re-added.
> thanks Serge,
> I tried that. Same result. Additionally, even when I comment out the
> CPU controls, leaving only Memory limits, it still fails.
> 
> To confirm, I have 3 uses for cgroups -
> 1)  Resource control on CPU, Memory, Disk, Network etc eg.
> lxc.cgroup.cpuset.cpus = 1-3
> lxc.cgroup.memory.limit_in_bytes = 4G

Let's address them one at a time.  For starters,

if you only leave in the 
lxc.cgroup.cpuset.cpus = 1-3
does that now work?  If not, please post the log output to show exactly
how it fails.
And if you only have
lxc.cgroup.memory.limit_in_bytes = 4G
how does that fail, exactly?

Also, what is /proc/self/cgroup now when you login?
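
[Editor's note] A quick way to answer that question is to look at the path
field of each line in /proc/self/cgroup: a controller whose path is "/" has
not been moved into a per-user cgroup by pam_cgfs at login. A small sketch,
using sample lines from later in this thread as stand-in data (on a live
system you would read /proc/self/cgroup directly):

```shell
# Flag controllers whose cgroup path is "/", i.e. not delegated to the
# login session. The here-string sample stands in for /proc/self/cgroup.
sample='4:devices:/user.slice
3:memory:/user/redger/1
2:cpuset:/'
echo "$sample" | awk -F: '$3 == "/" { print $2, "not delegated" }'
# -> cpuset not delegated
```

In the output Rob posted, cpuset sits at "/", which is consistent with the
ENOENT when lxc-start tries to write cpuset.cpus.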

> 2)  Access to devices, particularly USB tuners
> lxc.cgroup.devices.allow = c 212:* rwm
> 3)  Access to TAP / TUN devices in order to run VPN in a container
> lxc.cgroup.devices.allow = c 10:200 rwm
> 
> All 3 fail in the same way. Any one of them leads to failure
> (including Memory limits)
> 
> Here's the current value from /etc/pam.d/common-session
> session  optional   pam_cfgs.so -c freezer,memory,name=systemd,cpuset
> the memory clause already existed before edits. Memory limit setting
> has failed with default and after the above edit
> 
> Error is  "No devices group set up for .."
> 
> thanks for your help
>   Rob
> 
> PS Some emails appear to have been "lost", apologies if this is a
> logical duplicate
> 

Re: [lxc-users] using cgroups

2016-06-30 Thread rob e

On 30/06/16 11:35, Serge E. Hallyn wrote:

On Thu, Jun 30, 2016 at 11:24:25AM +1000, Rob wrote:

On 30/06/2016 10:36 AM, Serge E. Hallyn wrote:

Quoting Rob Edgerton (redger...@yahoo.com.au):

Oh, ok.  I'm sorry, this should have been obvious to me from the start.

You need to edit /etc/pam.d/common-session and change the line that's
something like

session optional    pam_cgfs.so -c freezer,memory,name=systemd

to add ",cpuset" at the end, i.e.

session optional    pam_cgfs.so -c freezer,memory,name=systemd,cpuset

It has been removed from the default because on systems which do a lot
of cpu hotplugging it can be a problem:  with the legacy (non-unified)
cpuset hierarchy, when you unplug a cpu that is part of /user, it gets
removed, but when you re-plug it it does not get re-added.
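
[Editor's note] The one-line change described above can also be applied
non-interactively; a sketch with sed, shown here against a sample string
rather than the real /etc/pam.d/common-session:

```shell
# Append ",cpuset" to the controller list of the pam_cgfs line.
# (Illustrative; on a real system you would back up the file and use
# sed -i on /etc/pam.d/common-session instead of echoing a sample.)
line='session optional pam_cgfs.so -c freezer,memory,name=systemd'
echo "$line" | sed 's/name=systemd$/name=systemd,cpuset/'
# -> session optional pam_cgfs.so -c freezer,memory,name=systemd,cpuset
```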

thanks Serge,
I tried that. Same result. Additionally, even when I comment out the CPU 
controls, leaving only Memory limits, it still fails.


To confirm, I have 3 uses for cgroups -
1)  Resource control on CPU, Memory, Disk, Network etc eg.
lxc.cgroup.cpuset.cpus = 1-3
lxc.cgroup.memory.limit_in_bytes = 4G
2)  Access to devices, particularly USB tuners
lxc.cgroup.devices.allow = c 212:* rwm
3)  Access to TAP / TUN devices in order to run VPN in a container
lxc.cgroup.devices.allow = c 10:200 rwm
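
[Editor's note] The c 10:200 numbers in the TUN rule can be sanity-checked
against the device node itself: `ls -l /dev/net/tun` shows major 10
(misc) and minor 200 on stock kernels. A tiny sketch converting the hex
form that `stat -c '%t:%T'` prints (a:c8) back to the decimal form the
devices cgroup expects:

```shell
# stat -c '%t:%T' /dev/net/tun reports the major:minor in hex (a:c8);
# the devices.allow syntax wants decimal, so convert via shell arithmetic:
printf 'tun is c %d:%d rwm\n' $((0xa)) $((0xc8))
# -> tun is c 10:200 rwm
```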

All 3 fail in the same way. Any one of them leads to failure (including 
Memory limits)


Here's the current value from /etc/pam.d/common-session
session  optional   pam_cfgs.so -c freezer,memory,name=systemd,cpuset
the memory clause already existed before edits. Memory limit setting has 
failed with default and after the above edit


Error is  "No devices group set up for .."

thanks for your help
  Rob

PS Some emails appear to have been "lost", apologies if this is a 
logical duplicate



Re: [lxc-users] using cgroups

2016-06-29 Thread Serge E. Hallyn
On Thu, Jun 30, 2016 at 02:39:37AM +, Rob Edgerton wrote:
...
> I updated pam.d/common-session:
> # = RE Changed = #
> #session    optional    pam_cgfs.so -c freezer,memory,name=systemd
> session optional    pam_cgfs.so -c freezer,memory,name=systemd,cpuset
> # = RE Changed = #
> then restarted, with similar result. Further, the config contains auth for 
> using USB devices too:
> # USB devices
> lxc.cgroup.devices.allow = c 10:200 rwm
> # CPU & Memory limits
> lxc.cgroup.cpuset.cpus = 1-3
> lxc.cgroup.cpu.shares = 256
> lxc.cgroup.memory.limit_in_bytes = 4G
> lxc.cgroup.blkio.weight = 500

Ok two things here - first, you'll of course need to add every controller
that you want to use to the pam_cgfs.so line in /etc/pam.d/common-session.

Second, in order to set devices cgroup entries you may need to use cgmanager,
as unprivileged users are not allowed to write those.  But then, you
shouldn't need the devices.allow line at all, because your container is
unprivileged and therefore no devices cgroup limits are set.

> Commenting out the first line still results in start failure, as do the other 
> lines. Even just uncommenting the memory.limit lines leads to failure with:
> $ lxc-start -n trusty_unp_ibvpn -F
> lxc-start: cgfsng.c: cgfsng_setup_limits: 1645 No devices cgroup setup for 
> trusty_unp_ibvpn
> lxc-start: start.c: lxc_spawn: 1226 failed to setup the devices cgroup for 
> 'trusty_unp_ibvpn'
> lxc-start: start.c: __lxc_start: 1353 failed to spawn 'trusty_unp_ibvpn'
> lxc-start: lxc_start.c: main: 344 The container failed to start.
> lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
> setting the --logfile and --logpriority options.
> 
> here's a sample log sequence where ONLY "lxc.cgroup.memory.limit_in_bytes = 
> 4G" was uncommented
>  lxc-start 20160630023739.583 INFO lxc_conf - 
> conf.c:lxc_create_tty:3303 - tty's configured
>   lxc-start 20160630023739.583 INFO lxc_conf - conf.c:setup_tty:995 - 
> 4 tty(s) has been setup
>   lxc-start 20160630023739.583 INFO lxc_conf - 
> conf.c:setup_personality:1393 - set personality to '0x0'
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 
> - drop capability 'mac_admin' (33)
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 
> - drop capability 'mac_override' (32)
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 
> - drop capability 'sys_time' (25)
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 
> - drop capability 'sys_module' (16)
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 
> - drop capability 'sys_rawio' (17)
>   lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2065 
> - capabilities have been setup
>   lxc-start 20160630023739.583 NOTICE   lxc_conf - conf.c:lxc_setup:3839 
> - 'trusty_unp_ibvpn' is setup.
>   lxc-start 20160630123739.583 ERROR    lxc_cgfsng - 
> cgfsng.c:cgfsng_setup_limits:1645 - No devices cgroup setup for 
> trusty_unp_ibvpn
>   lxc-start 20160630123739.583 ERROR    lxc_start - 
> start.c:lxc_spawn:1226 - failed to setup the devices cgroup for 
> 'trusty_unp_ibvpn'
>   lxc-start 20160630123739.583 ERROR    lxc_start - 
> start.c:__lxc_start:1353 - failed to spawn 'trusty_unp_ibvpn'
>   lxc-start 20160630123739.633 INFO lxc_conf - 
> conf.c:run_script_argv:367 - Executing script 
> '/usr/share/lxcfs/lxc.reboot.hook' for container 'trusty_unp_ibvpn', config 
> section 'lxc'
>   lxc-start 20160630123740.147 ERROR    lxc_start_ui - 
> lxc_start.c:main:344 - The container failed to start.
>   lxc-start 20160630123740.147 ERROR    lxc_start_ui - 
> lxc_start.c:main:348 - Additional information can be obtained by setting the 
> --logfile and --logpriority options.
> 
>   


Re: [lxc-users] using cgroups

2016-06-29 Thread Rob Edgerton
 

On Thursday, 30 June 2016, 11:36, Serge E. Hallyn  wrote:
 

 On Thu, Jun 30, 2016 at 11:24:25AM +1000, Rob wrote:
> On 30/06/2016 10:36 AM, Serge E. Hallyn wrote:
> >Quoting Rob Edgerton (redger...@yahoo.com.au):
> >>      lxc-start 20160628155820.614 ERROR    lxc_cgfsng - 
> >>cgfsng.c:cgfsng_setup_limits:1662 - No such file or directory - Error 
> >>setting cpuset.cpus to 1-3 for trusty_unp_ibvpn
> >ENOENT - that's unexpected...
> >
> >>      lxc-start 20160628155820.615 ERROR    lxc_start - 
> >>start.c:lxc_spawn:1180 - failed to setup the cgroup limits for 
> >>@virt-host:~$ cgm --version
> >>0.29
> >Can you show 'dpkg -l | grep cgmanager' ?
> >
> >as well as cat /etc/*release
> >
> >Hi, For /proc/self/cgroup and /proc/self/mountinfo, we actually
> >need to see the contents. Can you show 'cat /proc/self/cgroup' and
> >'cat /proc/self/mountinfo'? -serge
> hi Serge,
> here is the follow up info (note that I cut the msg above in order
> to reduce size)
> 
> $ dpkg -l | grep cgmanager
> ii  cgmanager 0.39-2ubuntu5                              amd64
> Central cgroup manager daemon
> ii  libcgmanager0:amd64 0.39-2ubuntu5
> amd64        Central cgroup manager daemon (client library)
> 
> $ cat /etc/*release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"
> NAME="Ubuntu"
> VERSION="16.04 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> UBUNTU_CODENAME=xenial
> 
> 
> $ cat /proc/self/cgroup
> 11:blkio:/user.slice
> 10:hugetlb:/
> 9:freezer:/user/redger/1
> 8:pids:/user.slice/user-1000.slice
> 7:perf_event:/
> 6:cpu,cpuacct:/user.slice
> 5:net_cls,net_prio:/
> 4:devices:/user.slice
> 3:memory:/user/redger/1
> 2:cpuset:/

Oh, ok.  I'm sorry, this should have been obvious to me from the start.

You need to edit /etc/pam.d/common-session and change the line that's
something like

session optional    pam_cgfs.so -c freezer,memory,name=systemd

to add ",cpuset" at the end, i.e.

session optional    pam_cgfs.so -c freezer,memory,name=systemd,cpuset

It has been removed from the default because on systems which do a lot
of cpu hotplugging it can be a problem:  with the legacy (non-unified)
cpuset hierarchy, when you unplug a cpu that is part of /user, it gets
removed, but when you re-plug it it does not get re-added.
hi Serge, thanks for the response.
I updated pam.d/common-session:
# = RE Changed = #
#session    optional    pam_cgfs.so -c freezer,memory,name=systemd
session optional    pam_cgfs.so -c freezer,memory,name=systemd,cpuset
# = RE Changed = #
then restarted, with similar result. Further, the config contains auth for 
using USB devices too:
# USB devices
lxc.cgroup.devices.allow = c 10:200 rwm
# CPU & Memory limits
lxc.cgroup.cpuset.cpus = 1-3
lxc.cgroup.cpu.shares = 256
lxc.cgroup.memory.limit_in_bytes = 4G
lxc.cgroup.blkio.weight = 500

Commenting out the first line still results in start failure, as do the other 
lines. Even just uncommenting the memory.limit lines leads to failure with:
$ lxc-start -n trusty_unp_ibvpn -F
lxc-start: cgfsng.c: cgfsng_setup_limits: 1645 No devices cgroup setup for 
trusty_unp_ibvpn
lxc-start: start.c: lxc_spawn: 1226 failed to setup the devices cgroup for 
'trusty_unp_ibvpn'
lxc-start: start.c: __lxc_start: 1353 failed to spawn 'trusty_unp_ibvpn'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
setting the --logfile and --logpriority options.

here's a sample log sequence where ONLY "lxc.cgroup.memory.limit_in_bytes = 4G" 
was uncommented
 lxc-start 20160630023739.583 INFO lxc_conf - 
conf.c:lxc_create_tty:3303 - tty's configured
  lxc-start 20160630023739.583 INFO lxc_conf - conf.c:setup_tty:995 - 4 
tty(s) has been setup
  lxc-start 20160630023739.583 INFO lxc_conf - 
conf.c:setup_personality:1393 - set personality to '0x0'
  lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 - 
drop capability 'mac_admin' (33)
  lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 - 
drop capability 'mac_override' (32)
  lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 - 
drop capability 'sys_time' (25)
  lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 - 
drop capability 'sys_module' (16)
  lxc-start 20160630023739.583 DEBUG    lxc_conf - conf.c:setup_caps:2056 - 
drop capability 'sys_rawio' (17)
  lxc-start 20160630023739.583 DEBUG    lxc_conf - 

Re: [lxc-users] using cgroups

2016-06-29 Thread Serge E. Hallyn
On Thu, Jun 30, 2016 at 11:24:25AM +1000, Rob wrote:
> On 30/06/2016 10:36 AM, Serge E. Hallyn wrote:
> >Quoting Rob Edgerton (redger...@yahoo.com.au):
> >>   lxc-start 20160628155820.614 ERROR    lxc_cgfsng - 
> >> cgfsng.c:cgfsng_setup_limits:1662 - No such file or directory - Error 
> >> setting cpuset.cpus to 1-3 for trusty_unp_ibvpn
> >ENOENT - that's unexpected...
> >
> >>   lxc-start 20160628155820.615 ERROR    lxc_start - 
> >> start.c:lxc_spawn:1180 - failed to setup the cgroup limits for 
> >> @virt-host:~$ cgm --version
> >>0.29
> >Can you show 'dpkg -l | grep cgmanager' ?
> >
> >as well as cat /etc/*release
> >
> >Hi, For /proc/self/cgroup and /proc/self/mountinfo, we actually
> >need to see the contents. Can you show 'cat /proc/self/cgroup' and
> >'cat /proc/self/mountinfo'? -serge
> hi Serge,
> here is the follow up info (note that I cut the msg above in order
> to reduce size)
> 
> $ dpkg -l | grep cgmanager
> ii  cgmanager 0.39-2ubuntu5  amd64
> Central cgroup manager daemon
> ii  libcgmanager0:amd64 0.39-2ubuntu5
> amd64Central cgroup manager daemon (client library)
> 
> $ cat /etc/*release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=16.04
> DISTRIB_CODENAME=xenial
> DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"
> NAME="Ubuntu"
> VERSION="16.04 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> UBUNTU_CODENAME=xenial
> 
> 
> $ cat /proc/self/cgroup
> 11:blkio:/user.slice
> 10:hugetlb:/
> 9:freezer:/user/redger/1
> 8:pids:/user.slice/user-1000.slice
> 7:perf_event:/
> 6:cpu,cpuacct:/user.slice
> 5:net_cls,net_prio:/
> 4:devices:/user.slice
> 3:memory:/user/redger/1
> 2:cpuset:/

Oh, ok.  I'm sorry, this should have been obvious to me from the start.

You need to edit /etc/pam.d/common-session and change the line that's
something like

session optional    pam_cgfs.so -c freezer,memory,name=systemd

to add ",cpuset" at the end, i.e.

session optional    pam_cgfs.so -c freezer,memory,name=systemd,cpuset

It has been removed from the default because on systems which do a lot
of cpu hotplugging it can be a problem:  with the legacy (non-unified)
cpuset hierarchy, when you unplug a cpu that is part of /user, it gets
removed, but when you re-plug it it does not get re-added.

Re: [lxc-users] using cgroups

2016-06-29 Thread Rob

On 30/06/2016 10:36 AM, Serge E. Hallyn wrote:

Quoting Rob Edgerton (redger...@yahoo.com.au):

   lxc-start 20160628155820.614 ERROR    lxc_cgfsng - 
cgfsng.c:cgfsng_setup_limits:1662 - No such file or directory - Error setting 
cpuset.cpus to 1-3 for trusty_unp_ibvpn

ENOENT - that's unexpected...


   lxc-start 20160628155820.615 ERROR    lxc_start - start.c:lxc_spawn:1180 
- failed to setup the cgroup limits for @virt-host:~$ cgm --version
0.29

Can you show 'dpkg -l | grep cgmanager' ?

as well as cat /etc/*release

Hi, For /proc/self/cgroup and /proc/self/mountinfo, we actually need 
to see the contents. Can you show 'cat /proc/self/cgroup' and 'cat 
/proc/self/mountinfo'? -serge 

hi Serge,
here is the follow up info (note that I cut the msg above in order to 
reduce size)


$ dpkg -l | grep cgmanager
ii  cgmanager 0.39-2ubuntu5  amd64
Central cgroup manager daemon
ii  libcgmanager0:amd64 0.39-2ubuntu5  
amd64Central cgroup manager daemon (client library)


$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial


$ cat /proc/self/cgroup
11:blkio:/user.slice
10:hugetlb:/
9:freezer:/user/redger/1
8:pids:/user.slice/user-1000.slice
7:perf_event:/
6:cpu,cpuacct:/user.slice
5:net_cls,net_prio:/
4:devices:/user.slice
3:memory:/user/redger/1
2:cpuset:/
1:name=systemd:/user.slice/user-1000.slice/session-1.scope

$ cat /proc/self/mountinfo
19 25 0:18 / /sys rw,nosuid,nodev,noexec,relatime shared:7 - sysfs sysfs rw
20 25 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:12 - proc proc rw
21 25 0:6 / /dev rw,nosuid,relatime shared:2 - devtmpfs udev 
rw,size=8026104k,nr_inodes=2006526,mode=755
22 21 0:14 / /dev/pts rw,nosuid,noexec,relatime shared:3 - devpts devpts 
rw,gid=5,mode=620,ptmxmode=000
23 25 0:19 / /run rw,nosuid,noexec,relatime shared:5 - tmpfs tmpfs 
rw,size=1615856k,mode=755
25 0 8:41 / / rw,relatime shared:1 - ext4 /dev/sdc9 
rw,errors=remount-ro,data=ordered
26 19 0:12 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime 
shared:8 - securityfs securityfs rw

27 21 0:21 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw
28 23 0:22 / /run/lock rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs 
tmpfs rw,size=5120k

29 19 0:23 / /sys/fs/cgroup rw shared:9 - tmpfs tmpfs rw,mode=755
30 29 0:24 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime 
shared:10 - cgroup cgroup 
rw,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
31 19 0:25 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:11 - 
pstore pstore rw
32 29 0:26 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime 
shared:13 - cgroup cgroup rw,cpuset,clone_children
33 29 0:27 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime 
shared:14 - cgroup cgroup rw,memory
34 29 0:28 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime 
shared:15 - cgroup cgroup rw,devices
35 29 0:29 / /sys/fs/cgroup/net_cls,net_prio 
rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup 
rw,net_cls,net_prio
36 29 0:30 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime 
shared:17 - cgroup cgroup rw,cpu,cpuacct
37 29 0:31 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime 
shared:18 - cgroup cgroup 
rw,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event
38 29 0:32 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime 
shared:19 - cgroup cgroup 
rw,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids
39 29 0:33 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime 
shared:20 - cgroup cgroup rw,freezer
40 29 0:34 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime 
shared:21 - cgroup cgroup 
rw,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb
41 29 0:35 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime 
shared:22 - cgroup cgroup rw,blkio
42 20 0:36 / /proc/sys/fs/binfmt_misc rw,relatime shared:23 - autofs 
systemd-1 rw,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct

43 19 0:7 / /sys/kernel/debug rw,relatime shared:24 - debugfs debugfs rw
44 21 0:37 / /dev/hugepages rw,relatime shared:25 - hugetlbfs hugetlbfs rw
45 23 0:38 / /run/rpc_pipefs rw,relatime shared:26 - rpc_pipefs sunrpc rw
46 21 0:17 / /dev/mqueue rw,relatime shared:27 - mqueue mqueue rw
47 20 0:39 / /proc/fs/nfsd rw,relatime shared:28 - nfsd nfsd rw
48 19 0:40 / /sys/fs/fuse/connections rw,relatime shared:29 - fusectl 
fusectl rw
49 25 8:34 / /mnt/snd480_boot_01 rw,relatime shared:30 - ext4 /dev/sdc2 

Re: [lxc-users] using cgroups

2016-06-29 Thread Serge E. Hallyn
Quoting Rob Edgerton (redger...@yahoo.com.au):
> hi, I have the same problem (cgroups not working as expected) on a clean 
> Xenial build (lxc PPA NOT installed, LXD not installed). In my case I have 
> some Ubuntu Trusty containers I really need to use on Xenial, but they won't 
> start because I use cgroups. If I change the existing containers to remove 
> the "lxc.cgroup" clauses from config they start,

Please show the exact clauses you are using as well.

-serge

Re: [lxc-users] using cgroups

2016-06-28 Thread Rob Edgerton
hi,
I have the same problem (cgroups not working as expected) on a clean Xenial 
build (lxc PPA NOT installed, LXD not installed). In my case I have some 
Ubuntu Trusty containers I really need to use on Xenial, but they won't start 
because I use cgroups. If I change the existing containers to remove the 
"lxc.cgroup" clauses from config they start, but not otherwise. Similarly, I 
created a new Xenial container for testing. It works, until I add 
"lxc.cgroup" clauses, at which point it also fails to start.

@virt-host:~$ lxc-start -n trusty_unp_ibvpn -F -l debug -o lxc.log
lxc-start: cgfsng.c: cgfsng_setup_limits: 1662 No such file or directory - 
Error setting cpuset.cpus to 1-3 for trusty_unp_ibvpn
lxc-start: start.c: lxc_spawn: 1180 failed to setup the cgroup limits for 
'trusty_unp_ibvpn'
lxc-start: start.c: __lxc_start: 1353 failed to spawn 'trusty_unp_ibvpn'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by 
setting the --logfile and --logpriority options.

===== Logfile Contents =====
  lxc-start 20160628155820.562 INFO lxc_start_ui - lxc_start.c:main:264 
- using rcfile /mnt/lxc_images/containers/trusty_unp_ibvpn/config
  lxc-start 20160628155820.562 WARN lxc_confile - 
confile.c:config_pivotdir:1879 - lxc.pivotdir is ignored.  It will soon become 
an error.
  lxc-start 20160628155820.562 INFO lxc_confile - 
confile.c:config_idmap:1500 - read uid map: type u nsid 0 hostid 10 range 
65536
  lxc-start 20160628155820.562 INFO lxc_confile - 
confile.c:config_idmap:1500 - read uid map: type g nsid 0 hostid 10 range 
65536
  lxc-start 20160628155820.564 INFO lxc_lsm - lsm/lsm.c:lsm_init:48 - 
LSM security driver AppArmor
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .reject_force_umount  # comment 
this to allow umount -f;  not recommended.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for reject_force_umount 
action 0
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for reject_force_umount 
action 0
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:do_resolve_add_rule:216 - Setting seccomp rule to reject force umounts

  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .[all].
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .kexec_load errno 1.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for kexec_load action 327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for kexec_load action 327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .open_by_handle_at errno 1.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for open_by_handle_at action 
327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for open_by_handle_at action 
327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .init_module errno 1.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for init_module action 327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for init_module action 327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .finit_module errno 1.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for finit_module action 
327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for finit_module action 
327681
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:342 - processing: .delete_module errno 1.
  lxc-start 20160628155820.564 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:446 - Adding native rule for delete_module action 
327681
  lxc-start 20160628155820.565 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:449 - Adding compat rule for delete_module action 
327681
  lxc-start 20160628155820.565 INFO lxc_seccomp - 
seccomp.c:parse_config_v2:456 - Merging in the compat seccomp ctx into the main 
one
  lxc-start 20160628155820.565 DEBUG    lxc_start - 
start.c:setup_signal_fd:289 - sigchild handler set
  lxc-start 20160628155820.565 DEBUG    

Re: [lxc-users] using cgroups

2016-06-28 Thread rob e

hi,
I'm experiencing the same problem. I use "lxc.cgroup" to constrain 
resource usage and to provide access to devices


in trying to re-use containers established under Trusty, I find that 
lxc.cgroup clauses prevent the container starting


furthermore, if I create a new "test" container on Xenial, it will start 
and run ok until I start adding lxc.cgroup clauses, at which point it 
will no longer start.


LXC is installed. LXD is NOT installed. CGMANAGER is installed. All 
packages are current from Xenial LTS


Is there anything I can do to help pin this down ? Will a test conducted 
in a KVM based VM be valid / useful ?


Rob

On 27/06/16 11:41, Serge E. Hallyn wrote:

Quoting Mike Wright (nob...@nospam.hostisimo.com):

On 06/26/2016 01:01 PM, Serge E. Hallyn wrote:

Quoting Mike Wright (nob...@nospam.hostisimo.com):

Hi all,

cgmanager and cgmanager-utils are installed.

Environment is ubuntu-xenial, lxc-2.0.1, cgm-0.29

why 0.29?  xenial should have 0.39-2ubuntu5.  I'm on xenial
using 0.41-2~ubuntu16.04.1~ppa1 from the ubuntu-lxc
ppa.

Thanks for the response, Serge.

This is interesting.

sudo apt install -s cgmanager
   cgmanager is already the newest version (0.39-2ubuntu5)

cgm --version
   0.29

Added ppa:ubuntu-lxc/stable, updated and upgraded.

sudo apt install -s cgmanager
   cgmanager is already the newest version (0.41-2~ubuntu16.04.1~ppa1)

cgm --version
   0.29

Oh, huh.  Yeah, that seems to be a cgmanager bug :)


0 ✓ serge@sl ~ $ sudo cgm create all me
[sudo] password for serge:
0 ✓ serge@sl ~ $ sudo cgm chown all me $(id -u) $(id -g)
0 ✓ serge@sl ~ $

Now, I'm not running systemd so it's possible systemd is
doing something unorthodox again.  But really it sounds
like a bug that shouldve been fixed in 0.27-0ubuntu6 -
where cgmanager didn't deal well with comounted controllers.

Still failing at cgm chown...

Ideas on how would I go about determining the problem?

Edit /lib/systemd/system/cgmanager.service and add '--debug' to the
end of the ExecStart line.  Do 'systemctl daemon-reload' followed
by 'systemctl restart cgmanager'.  Then do the above again, and
do 'journalctl -u cgmanager' and list the results here.  Also
show the contents of /proc/self/cgroup and /proc/self/mountinfo.
That should give us what we need.

thanks,
-serge
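
[Editor's note] The unit-file edit Serge describes is mechanical; a sketch
of the ExecStart change, shown against a sample line (the exact ExecStart
below is assumed for illustration; check the real line in
/lib/systemd/system/cgmanager.service before editing):

```shell
# Append --debug to a sample ExecStart line, as suggested above.
# (Illustrative only; edit the real unit file, then run
#  systemctl daemon-reload && systemctl restart cgmanager.)
line='ExecStart=/sbin/cgmanager -m name=systemd'
echo "$line" | sed '/^ExecStart=/ s/$/ --debug/'
# -> ExecStart=/sbin/cgmanager -m name=systemd --debug
```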

Re: [lxc-users] using cgroups

2016-06-26 Thread Mike Wright

On 06/26/2016 06:41 PM, Serge E. Hallyn wrote:

Quoting Mike Wright (nob...@nospam.hostisimo.com):

Ideas on how would I go about determining the problem?


Edit /lib/systemd/system/cgmanager.service and add '--debug' to the
end of the ExecStart line.  Do 'systemctl daemon-reload' followed
by 'systemctl restart cgmanager'.  Then do the above again, and
do 'journalctl -u cgmanager' and list the results here.  Also
show the contents of /proc/self/cgroup and /proc/self/mountinfo.
That should give us what we need.


Following clean boot; no cli cgm commands given.

Attached is journalctl -u cgmanager, /proc/self/{cgroup,mountinfo}


cgm.tar.bz2
Description: application/bzip