Re: [lxc-users] processes escaped from memory cgroup in container, but CPU group is OK

2014-11-23 Thread Michael R. Hines

On 11/22/2014 06:10 AM, Fajar A. Nugraha wrote:
On Fri, Nov 21, 2014 at 2:45 PM, Michael R. Hines
mrhi...@linux.vnet.ibm.com wrote:


Hi All,

I am using LXC 1.0.5, and I have a container running Red Hat 7.0 on a
Power7 processor. My host kernel version is 3.10.42.

The cgroup for this container, located at /cgroup/cpu, works very
well - I can manually echo different share values into it and control
resource usage as expected.
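
For example, something along these lines (the container name "rh7" and
the exact cgroup paths are only placeholders for illustration):

  # halve the default CPU weight (default cpu.shares is 1024) and verify
  echo 512 > /cgroup/cpu/lxc/rh7/cpu.shares
  cat /cgroup/cpu/lxc/rh7/cpu.shares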

But, to my surprise, when I set the memory.limit_in_bytes option of
the container in /cgroup/memory/lxc/../container/memory.limit_in_bytes
to a low number (like 2G in bytes), the container was still
able to consume all the memory in the system.
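
Concretely, what I did was roughly this (again, the container name and
paths are only illustrative):

  # 2 GiB, expressed in bytes
  echo $((2 * 1024 * 1024 * 1024)) > /cgroup/memory/lxc/rh7/memory.limit_in_bytes
  # read back the limit the kernel actually applied
  cat /cgroup/memory/lxc/rh7/memory.limit_in_bytes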

So, digging deeper, I printed the output of cgroup.procs and
found that *only* systemd inside the container
had been properly joined to the group, whereas all the other child
processes of the container were missing.
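
(The check itself was basically the following, with "rh7" standing in
for the real container name:)

  # PIDs the memory cgroup actually contains
  cat /cgroup/memory/lxc/rh7/cgroup.procs
  # the container's init PID as LXC reports it, for comparison
  lxc-info -n rh7 -p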



How did you create the RH7 container?

Thank you for your response: I directly copied it from KVM. Was that
a bad idea?


From my past experience with fedora templates, systemd in the
container tried to create its own cgroup, OUTSIDE of the normal
container cgroup path. I suspect in your case it works (as in, the
container started) because there's nothing limiting the container from
mounting cgroupfs and creating its own cgroup. That wouldn't have
worked on an Ubuntu host with default apparmor settings active.
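
One quick way to check (just a sketch; the PID and container name below
are examples) is to see from the host where a container process actually
landed, and whether the container mounted cgroupfs itself:

  PID=12345                                  # any process running inside the container (example PID)
  cat /proc/$PID/cgroup                      # which cpu/memory cgroup paths was it placed in?
  lxc-attach -n rh7 -- mount | grep cgroup   # did the container mount its own cgroupfs?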




Wow. OK. So I need to set up apparmor correctly on the host side for the RH container?

I ended up using the default apparmor profile (to keep it secure), but
manually creating and bind-mounting the cgroups that systemd needs. See
https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html
and search for the snippet.




Excellent - I will give that a try.

- Michael

Re: [lxc-users] processes escaped from memory cgroup in container, but CPU group is OK

2014-11-21 Thread brian mullan
Forgot to cc the list.


On Fri, Nov 21, 2014 at 11:25 AM, brian mullan bmullan.m...@gmail.com
wrote:

 systemd was one of the topics discussed at last week's Ubuntu Developer's
 Summit:
 Systemd transition - 2014-11-14 18:00..18:55 in Platform 1
 http://summit.ubuntu.com/uos-1411/meeting/22401/systemd-transition/
 The various developers discussed the current status and planning for the
 coming releases with regard to systemd. They also discussed some of the
 blocking factors.

 You might want to check it out.

 brian


 -- Forwarded message --
 From: Michael R. Hines mrhi...@linux.vnet.ibm.com
 To: lxc-users@lists.linuxcontainers.org
 Cc:
 Date: Fri, 21 Nov 2014 15:45:47 +0800
 Subject: [lxc-users] processes escaped from memory cgroup in container,
 but CPU group is OK
 Hi All,

 I am using LXC 1.0.5, and I have a container running Red Hat 7.0 on a Power7
 processor. My host kernel version is 3.10.42.

 The cgroup for this container, located at /cgroup/cpu, works very well - I
 can manually echo different share values into it and control resource usage
 as expected.

 But, to my surprise, when I set the memory.limit_in_bytes option of the
 container in /cgroup/memory/lxc/../container/memory.limit_in_bytes
 to a low number (like 2G in bytes), the container was still able to
 consume all the memory in the system.

 So, digging deeper, I printed the output of cgroup.procs and found that
 *only* systemd inside the container
 had been properly joined to the group, whereas all the other child processes
 of the container were missing.

 As a further test, I repeated the same procedure with an Ubuntu 14 guest
 (which does not appear to use systemd), and the cgroup memory limit worked
 as expected - all the child processes were correctly added to cgroup.procs
 without any problems. When I set memory.limit_in_bytes, the control
 works very well.

 So, what gives? Any ideas?

 - Michael


Re: [lxc-users] processes escaped from memory cgroup in container, but CPU group is OK

2014-11-21 Thread Fajar A. Nugraha
On Fri, Nov 21, 2014 at 2:45 PM, Michael R. Hines 
mrhi...@linux.vnet.ibm.com wrote:

 Hi All,

 I am using LXC 1.0.5, and I have a container running Red Hat 7.0 on a Power7
 processor. My host kernel version is 3.10.42.

 The cgroup for this container, located at /cgroup/cpu, works very well - I
 can manually echo different share values into it and control resource usage
 as expected.

 But, to my surprise, when I set the memory.limit_in_bytes option of the
 container in /cgroup/memory/lxc/../container/memory.limit_in_bytes
 to a low number (like 2G in bytes), the container was still able to
 consume all the memory in the system.

 So, digging deeper, I printed the output of cgroup.procs and found that
 *only* systemd inside the container
 had been properly joined to the group, whereas all the other child processes
 of the container were missing.



How did you create the RH7 container?

From my past experience with fedora templates, systemd in the container
tried to create its own cgroup, OUTSIDE of the normal container cgroup
path. I suspect in your case it works (as in, the container started) because
there's nothing limiting the container from mounting cgroupfs and creating
its own cgroup. That wouldn't have worked on an Ubuntu host with default
apparmor settings active.

I ended up using the default apparmor profile (to keep it secure), but
manually creating and bind-mounting the cgroups that systemd needs. See
https://lists.linuxcontainers.org/pipermail/lxc-users/2014-May/007069.html
and search for the snippet.
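
Very roughly, the idea is something like this (the paths and the container
name are only illustrative here, not the exact snippet from that thread):

  # on the host: mount the named "systemd" hierarchy (no controllers) and
  # create a per-container directory in it
  mkdir -p /sys/fs/cgroup/systemd
  mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
  mkdir -p /sys/fs/cgroup/systemd/lxc/rh7

  # in the container config: bind-mount that directory where systemd expects it
  lxc.mount.entry = /sys/fs/cgroup/systemd/lxc/rh7 sys/fs/cgroup/systemd none bind,create=dir 0 0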

-- 
Fajar