On Thu, Jul 12, 2012 at 3:11 AM, Kir Kolyshkin k...@openvz.org wrote:
Gentlemen,
We are organizing a containers mini-summit during the next Linux Plumbers (San
Diego, August 29-31).
The idea is to gather and discuss everything relevant to namespaces,
cgroups, resource management,
* Andrea Righi ari...@develer.com [2011-02-22 18:12:51]:
Currently the blkio.throttle controller only supports synchronous IO requests.
This means that we always look at the current task to identify the owner of
each IO request.
However, dirty pages in the page cache can be written to disk
* Kirill A. Shutemov kir...@shutemov.name [2011-02-07 11:46:01]:
From: Kirill A. Shutemov kir...@shutemov.name
Signed-off-by: Kirill A. Shutemov kir...@shutemov.name
Acked-by: Paul Menage men...@google.com
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Three Cheers
* Kirill A. Shutemov kir...@shutemov.name [2011-02-07 11:46:02]:
From: Kirill A. Shutemov kir...@shutemov.name
Provides a way of grouping tasks by timer slack value. Introduces per
cgroup max and min timer slack values. When a task attaches to a cgroup,
its timer slack value adjusts (if
* Kirill A. Shutemov kir...@shutemov.name [2011-02-07 12:57:30]:
On Mon, Feb 07, 2011 at 03:36:24PM +0530, Balbir Singh wrote:
* Kirill A. Shutemov kir...@shutemov.name [2011-02-07 11:46:02]:
From: Kirill A. Shutemov kir...@shutemov.name
Provides a way of grouping tasks by timer
On Wed, Jan 5, 2011 at 7:31 PM, Serge Hallyn serge.hal...@canonical.com wrote:
Quoting Daniel Lezcano (daniel.lezc...@free.fr):
On 01/05/2011 10:40 AM, Mike Hommey wrote:
[Copy/pasted from a previous message to lkml, where it was suggested to
try contain...@]
Hi,
I noticed that from
* ccmail111 ccmail...@yahoo.com [2010-12-14 10:22:32]:
--- On Tue, 12/14/10, Balbir Singh bal...@linux.vnet.ibm.com wrote:
From: Balbir Singh bal...@linux.vnet.ibm.com
Subject: Re: cgroup tasks file error
To: ccmail111 ccmail...@yahoo.com
Cc: Jue Hong hon...@gmail.com, contain
On Tue, Dec 14, 2010 at 11:03 PM, ccmail111 ccmail...@yahoo.com wrote:
Isn't ns mounted by default?
I rebooted machine,
based on 2.6.32 kernel.
Then,
[host:~]$ mkdir /dev/cgroup
[host:~]$ mount -t cgroup cpuset -ocpuset,ns /dev/cgroup
[host:~]$ ps aux | grep libvirt
root 575 0.6
)
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Three Cheers,
Balbir
___
Containers mailing list
contain...@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers
minchan@gmail.com
There are so many places that need vzalloc.
Thanks, Jesper.
Yes, please check memcontrol.c as well
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Three Cheers,
Balbir
On Thu, Oct 14, 2010 at 7:11 PM, MALATTAR
mouhannad.alat...@univ-fcomte.fr wrote:
Le 12/10/2010 07:05, KAMEZAWA Hiroyuki a écrit :
On Fri, 08 Oct 2010 10:09:51 +0200
MALATTAR mouhannad.alat...@univ-fcomte.fr wrote:
Le 07/10/2010 16:43, MALATTAR a écrit :
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-08 19:41:31]:
On Fri, 8 Oct 2010 14:12:01 +0900
KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com wrote:
Sure. It walks the same data three times, potentially causing
thrashing in the L1 cache.
Hmm, make this 2 times, at
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-12 12:42:53]:
On Tue, 12 Oct 2010 09:09:15 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-08 19:41:31]:
On Fri, 8 Oct 2010 14:12:01 +0900
KAMEZAWA Hiroyuki
* Greg Thelen gthe...@google.com [2010-10-03 23:57:59]:
If pages are being migrated from a memcg, then updates to that
memcg's page statistics are protected by grabbing a bit spin lock
using lock_page_cgroup(). In an upcoming commit memcg dirty page
accounting will be updating memcg page
, FILE_UNSTABLE_NFS)
+TESTCLEARPCGFLAG(FileUnstableNFS, FILE_UNSTABLE_NFS)
+TESTSETPCGFLAG(FileUnstableNFS, FILE_UNSTABLE_NFS)
+
SETPCGFLAG(Migration, MIGRATION)
CLEARPCGFLAG(Migration, MIGRATION)
TESTPCGFLAG(Migration, MIGRATION)
Looks good to me
Acked-by: Balbir Singh bal
* Greg Thelen gthe...@google.com [2010-10-03 23:57:57]:
Document cgroup dirty memory interfaces and statistics.
Signed-off-by: Andrea Righi ari...@develer.com
Signed-off-by: Greg Thelen gthe...@google.com
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Three Cheers
uniform
From: Balbir Singh bal...@linux.vnet.ibm.com
We today support 'M', 'm', 'k', 'K', 'g' and 'G' suffixes for
general memcg writes. This patch provides the same functionality
for dirty tunables.
---
mm/memcontrol.c | 47 +--
1 files changed, 37
* Balbir Singh bal...@linux.vnet.ibm.com [2010-10-06 19:00:24]:
* Greg Thelen gthe...@google.com [2010-10-03 23:58:03]:
Add cgroupfs interface to memcg dirty page limits:
Direct write-out is controlled with:
- memory.dirty_ratio
- memory.dirty_bytes
Background write-out
I propose restricting page_cgroup.flags to 16 bits. The patch for the
same is below. Comments?
Restrict the bits usage in page_cgroup.flags
From: Balbir Singh bal...@linux.vnet.ibm.com
Restricting the flags helps control their otherwise unbounded growth.
Restricting it to 16 bits gives us
* Greg Thelen gthe...@google.com [2010-10-06 09:21:55]:
Looks good to me. I am currently gathering performance data on the memcg
series. It should be done in an hour or so. I'll then repost V2 of the
memcg dirty limits series. I'll integrate this patch into the series,
unless there's
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-07 08:58:58]:
On Wed, 6 Oct 2010 19:53:14 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
I propose restricting page_cgroup.flags to 16 bits. The patch for the
same is below. Comments?
Restrict the bits usage
* nishim...@mxp.nes.nec.co.jp nishim...@mxp.nes.nec.co.jp [2010-10-07
09:54:58]:
On Wed, 6 Oct 2010 19:53:14 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
I propose restricting page_cgroup.flags to 16 bits. The patch for the
same is below. Comments?
Restrict the bits usage
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-07 12:18:16]:
On Thu, 7 Oct 2010 08:42:04 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-07 08:58:58]:
On Wed, 6 Oct 2010 19:53:14 +0530
Balbir Singh bal
* nishim...@mxp.nes.nec.co.jp nishim...@mxp.nes.nec.co.jp [2010-10-07
12:47:06]:
On Thu, 7 Oct 2010 08:44:59 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* nishim...@mxp.nes.nec.co.jp nishim...@mxp.nes.nec.co.jp [2010-10-07
09:54:58]:
On Wed, 6 Oct 2010 19:53:14 +0530
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-07 13:22:33]:
On Thu, 7 Oct 2010 09:26:08 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-10-07 12:18:16]:
On Thu, 7 Oct 2010 08:42:04 +0530
Balbir Singh bal
* Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:
This patch set provides the ability for each cgroup to have independent dirty
page limits.
Limiting dirty memory is like fixing the max amount of dirty (hard to reclaim)
page cache used by a cgroup. So, in case of multiple cgroup
* Andrew Morton a...@linux-foundation.org [2010-09-27 13:02:56]:
Good point. It is not really necessary. I started development using the
netlink code. Therefore I first added the new command in the netlink
code. I also thought, it would be a good idea to provide all netlink
commands over
* Michael Holzheu holz...@linux.vnet.ibm.com [2010-09-24 11:10:15]:
Hello Andrew,
On Thu, 2010-09-23 at 13:11 -0700, Andrew Morton wrote:
GOALS OF THIS PATCH SET
---
The intention of this patch set is to provide better support for tools
like
top. The goal
into this. The only
comment I have is that clone says it clones values; actually, it
provides the opportunity for cgroup controllers to do so, or anything
else, after create succeeds.
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
please look up the maintainers in MAINTAINERS and/or
using scripts/get_maintainer.pl and submit __percpu markup patches to
the respective maintainers w/ me cc'd?
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
Balbir
* Vivek Goyal vgo...@redhat.com [2010-07-22 17:26:34]:
On Thu, Jul 22, 2010 at 02:18:56PM -0700, Greg KH wrote:
On Thu, Jul 22, 2010 at 03:37:41PM -0400, Vivek Goyal wrote:
On Thu, Jul 22, 2010 at 11:36:15AM -0700, Greg KH wrote:
On Thu, Jul 22, 2010 at 11:31:07AM -0700, Paul Menage
* Greg Thelen gthe...@google.com [2010-04-13 23:55:12]:
On Thu, Mar 18, 2010 at 8:00 PM, KAMEZAWA Hiroyuki
kamezawa.hir...@jp.fujitsu.com wrote:
On Fri, 19 Mar 2010 08:10:39 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 13:21:14]:
On Thu, 18 Mar 2010 09:49:44 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 08:54:11]:
On Wed, 17 Mar 2010 17:28:55 +0530
Balbir Singh bal
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 13:35:27]:
On Thu, 18 Mar 2010 09:49:44 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 08:54:11]:
On Wed, 17 Mar 2010 17:28:55 +0530
Balbir Singh bal
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-19 10:23:32]:
On Thu, 18 Mar 2010 21:58:55 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 13:35:27]:
Then, no problem. It's ok to add mem_cgroup_update_stat
* Andrea Righi ari...@develer.com [2010-03-15 00:26:37]:
Control the maximum amount of dirty pages a cgroup can have at any given time.
Per cgroup dirty limit is like fixing the max amount of dirty (hard to
reclaim)
page cache used by any cgroup. So, in case of multiple cgroup writers,
* Andrea Righi ari...@develer.com [2010-03-15 00:26:38]:
From: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
Now, file-mapped is maintaiend. But more generic update function
^^ (typo)
will be needed for dirty page accounting.
For accounting page status, we
* Vivek Goyal vgo...@redhat.com [2010-03-15 13:19:21]:
On Mon, Mar 15, 2010 at 01:12:09PM -0400, Vivek Goyal wrote:
On Mon, Mar 15, 2010 at 12:26:37AM +0100, Andrea Righi wrote:
Control the maximum amount of dirty pages a cgroup can have at any given
time.
Per cgroup dirty limit
* Greg Thelen gthe...@google.com [2010-03-17 09:48:18]:
On Mon, Mar 15, 2010 at 11:41 PM, Daisuke Nishimura
nishim...@mxp.nes.nec.co.jp wrote:
On Mon, 15 Mar 2010 00:26:39 +0100, Andrea Righi ari...@develer.com wrote:
Document cgroup dirty memory interfaces and statistics.
* Vivek Goyal vgo...@redhat.com [2010-03-17 09:34:07]:
On Wed, Mar 17, 2010 at 05:24:28PM +0530, Balbir Singh wrote:
* Vivek Goyal vgo...@redhat.com [2010-03-15 13:19:21]:
On Mon, Mar 15, 2010 at 01:12:09PM -0400, Vivek Goyal wrote:
On Mon, Mar 15, 2010 at 12:26:37AM +0100, Andrea
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-18 08:54:11]:
On Wed, 17 Mar 2010 17:28:55 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* Andrea Righi ari...@develer.com [2010-03-15 00:26:38]:
From: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
Now, file
17:28:55 +0530
Balbir Singh bal...@linux.vnet.ibm.com wrote:
* Andrea Righi ari...@develer.com [2010-03-15 00:26:38]:
From: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
Now, file-mapped is maintaiend. But more generic update function
will be needed for dirty
* Andrea Righi ari...@develer.com [2010-03-10 00:00:31]:
Control the maximum amount of dirty pages a cgroup can have at any given time.
Per cgroup dirty limit is like fixing the max amount of dirty (hard to
reclaim)
page cache used by any cgroup. So, in case of multiple cgroup writers,
* nishim...@mxp.nes.nec.co.jp nishim...@mxp.nes.nec.co.jp [2010-03-10
10:43:09]:
Please please measure the performance overhead of this change.
here.
I made a patch below and measured the time (average of 10 times)
of a kernel build
on tmpfs (make -j8 on an 8 CPU machine
* nishim...@mxp.nes.nec.co.jp nishim...@mxp.nes.nec.co.jp [2010-03-09
10:29:28]:
On Tue, 9 Mar 2010 09:19:14 +0900, KAMEZAWA Hiroyuki
kamezawa.hir...@jp.fujitsu.com wrote:
On Tue, 9 Mar 2010 01:12:52 +0100
Andrea Righi ari...@develer.com wrote:
On Mon, Mar 08, 2010 at 05:31:00PM
* Andrea Righi ari...@develer.com [2010-03-04 11:40:11]:
Control the maximum amount of dirty pages a cgroup can have at any given time.
Per cgroup dirty limit is like fixing the max amount of dirty (hard to
reclaim)
page cache used by any cgroup. So, in case of multiple cgroup writers,
* Andrea Righi ari...@develer.com [2010-03-04 11:40:13]:
Introduce page_cgroup flags to keep track of file cache pages.
Signed-off-by: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
Signed-off-by: Andrea Righi ari...@develer.com
---
Looks good
Acked-by: Balbir Singh bal
* Andrea Righi ari...@develer.com [2010-03-04 11:40:15]:
Apply the cgroup dirty pages accounting and limiting infrastructure
to the opportune kernel functions.
Signed-off-by: Andrea Righi ari...@develer.com
---
fs/fuse/file.c |5 +++
fs/nfs/write.c |4 ++
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-05 10:58:55]:
On Fri, 5 Mar 2010 10:12:34 +0900
Daisuke Nishimura nishim...@mxp.nes.nec.co.jp wrote:
On Thu, 4 Mar 2010 11:40:14 +0100, Andrea Righi ari...@develer.com wrote:
Infrastructure to account dirty pages per cgroup and
* Andrea Righi ari...@develer.com [2010-03-01 22:23:39]:
Infrastructure to account dirty pages per cgroup and add dirty limit
interfaces in the cgroupfs:
- Direct write-out: memory.dirty_ratio, memory.dirty_bytes
- Background write-out: memory.dirty_background_ratio,
* Andrea Righi ari...@develer.com [2010-03-01 22:23:40]:
Apply the cgroup dirty pages accounting and limiting infrastructure to
the opportune kernel functions.
Signed-off-by: Andrea Righi ari...@develer.com
---
fs/fuse/file.c |5 +++
fs/nfs/write.c |4 ++
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2010-03-02 17:23:16]:
On Tue, 2 Mar 2010 09:01:58 +0100
Andrea Righi ari...@develer.com wrote:
On Tue, Mar 02, 2010 at 09:23:09AM +0900, KAMEZAWA Hiroyuki wrote:
On Mon, 1 Mar 2010 22:23:40 +0100
Andrea Righi ari...@develer.com
* Peter Zijlstra pet...@infradead.org [2010-03-02 14:48:56]:
This is ugly and broken... I thought you'd agreed to something like:

    if (mem_cgroup_has_dirty_limit(cgroup))
        use mem_cgroup numbers
    else
        use global numbers

That allows for a 0 dirty limit (which should work and
* Kirill A. Shutemov kir...@shutemov.name [2010-02-22 17:43:40]:
Events should be removed after rmdir of cgroup directory, but before
destroying subsystem state objects. Let's take reference to cgroup
directory dentry to do that.
Signed-off-by: Kirill A. Shutemov kir...@shutemov.name
* Kirill A. Shutemov kir...@shutemov.name [2010-02-24 13:42:15]:
On Wed, Feb 24, 2010 at 10:40 AM, Balbir Singh
bal...@linux.vnet.ibm.com wrote:
* Kirill A. Shutemov kir...@shutemov.name [2010-02-22 17:43:40]:
Events should be removed after rmdir of cgroup directory, but before
* Kirill A. Shutemov kir...@shutemov.name [2010-02-22 17:43:39]:
eventfd are used to notify about two types of event:
- control file-specific, like crossing memory threshold;
- cgroup removing.
To understand what really happened, userspace can check if the cgroup
still exists. To avoid
* Andrea Righi ari...@develer.com [2010-02-21 16:18:44]:
Infrastructure to account dirty pages per cgroup + add memory.dirty_bytes
limit
in cgroupfs.
Signed-off-by: Andrea Righi ari...@develer.com
---
include/linux/memcontrol.h | 31 ++
mm/memcontrol.c| 218
* Vivek Goyal vgo...@redhat.com [2010-02-22 10:58:40]:
We seem to be doing the same operation as the existing mem_cgroup_update_file_mapped
function is doing to update some stats. Can we just reuse that? We
probably can create one core function which take index of stat to update
and
* Vivek Goyal vgo...@redhat.com [2010-02-22 09:27:45]:
On Sun, Feb 21, 2010 at 04:18:43PM +0100, Andrea Righi wrote:
Control the maximum amount of dirty pages a cgroup can have at any given
time.
Per cgroup dirty limit is like fixing the max amount of dirty (hard to
reclaim)
page
the cpu time?
Are these running on behalf of different users? You can always create a
good hierarchy and organize. FAIR_USER scheduler option is going away
soon. Some more context on the firefox applications and whose behalf
they are running on, etc would help.
--
Three Cheers,
Balbir Singh
On Friday 22 January 2010 05:03 PM, william wrote:
Balbir Singh wrote:
On Friday 22 January 2010 11:04 AM, william wrote:
Hello list
I have a question about how i can limit the cpu for a firefox process on
a terminal server.
We have 150 firefox processes running on a terminalserver
On Mon, Jan 4, 2010 at 6:40 AM, Dwight Schauer dscha...@gmail.com wrote:
I'm starting to get this quite regularly, but not every time:
lxc-start: Device or resource busy - failed to remove previous cgroup
'/cgroup/CONTAINER_NAME'
Can you please paste the output of cat /proc/cgroups and cat
On Tue, Jan 12, 2010 at 5:51 AM, KAMEZAWA Hiroyuki
kamezawa.hir...@jp.fujitsu.com wrote:
On Fri, 8 Jan 2010 10:10:38 -0500
Vivek Goyal vgo...@redhat.com wrote:
On Fri, Jan 08, 2010 at 12:30:21AM -0500, Ben Blum wrote:
Convert blk-cgroup to be buildable as a module
From: Ben Blum
- failed to remove previous cgroup '/cgroup/arch64-1') = 77
write(2, \n..., 1
) = 1
I can't see the create of /cgroup/arch64-1 happening, but the rmdir
fails. Hmmm.. does the group already exist at mount time? What are the
permissions?
Balbir Singh
* Kirill A. Shutemov kir...@shutemov.name [2009-12-27 20:37:57]:
On Sun, Dec 27, 2009 at 2:47 PM, Balbir Singh bal...@linux.vnet.ibm.com
wrote:
* Kirill A. Shutemov kir...@shutemov.name [2009-12-27 04:08:58]:
This patchset introduces eventfd-based API for notifications in cgroups
* Kirill A. Shutemov kir...@shutemov.name [2009-12-30 17:57:55]:
This patchset introduces eventfd-based API for notifications in cgroups and
implements memory notifications on top of it.
It uses statistics in the memory controller to track memory usage.
Output of time(1) on building kernel on
* Kirill A. Shutemov kir...@shutemov.name [2009-12-27 04:08:58]:
This patchset introduces eventfd-based API for notifications in cgroups and
implements memory notifications on top of it.
It uses statistics in the memory controller to track memory usage.
Output of time(1) on building kernel on
* Kirill A. Shutemov kir...@shutemov.name [2009-12-26 02:30:56]:
This patchset introduces eventfd-based API for notifications in cgroups and
implements memory notifications on top of it.
It uses statistics in the memory controller to track memory usage.
Output of time(1) on building kernel on
On Mon, Dec 21, 2009 at 10:54 AM, KAMEZAWA Hiroyuki
kamezawa.hir...@jp.fujitsu.com wrote:
Forwarding to container mailing list. Sorry, I myself don't have a quick
answer.
But, hmm, can't what you want be achieved with libcgroup?
[snip]
CC'ing libcgroup mailing list. Could you please share
* Kirill A. Shutemov kir...@shutemov.name [2009-12-12 00:59:17]:
Helper to get memory or mem+swap usage of the cgroup.
Signed-off-by: Kirill A. Shutemov kir...@shutemov.name
Looks like a good cleanup to me!
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Balbir
On Thu, Nov 26, 2009 at 9:57 PM, Kirill A. Shutemov
kir...@shutemov.name wrote:
It allows registering multiple memory thresholds and getting notifications
when they are crossed.
To register a threshold, an application needs to:
- create an eventfd;
- open the file memory.usage_in_bytes of a cgroup;
- write
for root,
since we have no limits in root anymore.
BTW, Kirill, I've been meaning to write this layer on top of
cgroupstats, is there anything that prevents us from using that today?
CC'ing Dan Malek and Vladslav Buzov who worked on similar patches
earlier.
Balbir Singh
* Pavel Machek pa...@ucw.cz [2009-11-08 18:05:12]:
On Wed 2009-11-04 12:00:05, Balbir Singh wrote:
Hi, All,
We've been having a discussion as to what would be the right place to
mount the cgroup filesystem. Jan has been proactively looking into
this. The FHS has no recommendation
* Jan Safranek jsafr...@redhat.com [2009-11-05 13:07:28]:
On 11/04/2009 05:44 PM, Daniel Lezcano wrote:
Balbir Singh wrote:
Hi, All,
We've been having a discussion as to what would be the right place to
mount the cgroup filesystem. Jan has been proactively looking into
this. The FHS has
* Jan Safranek jsafr...@redhat.com [2009-11-04 17:02:22]:
On 11/04/2009 04:21 PM, Dave Hansen wrote:
On Wed, 2009-11-04 at 13:46 +0530, Balbir Singh wrote:
The reason I liked /dev/cgroup was because cpusets could be
mounted at /dev/cpuset or /dev/cgroup/cpuset. My concern with /cgroup
* Serge E. Hallyn se...@us.ibm.com [2009-11-04 10:11:42]:
Quoting Dave Hansen (d...@linux.vnet.ibm.com):
On Wed, 2009-11-04 at 13:46 +0530, Balbir Singh wrote:
The reason I liked /dev/cgroup was because cpusets could be
mounted at /dev/cpuset or /dev/cgroup/cpuset. My concern
Hi, All,
We've been having a discussion as to what would be the right place to
mount the cgroup filesystem. Jan has been proactively looking into
this. The FHS has no recommendation since cgroup filesystem came in
much later.
The options are
1. /dev/cgroup
2. /cgroup
3. Some place under /sys
* men...@google.com men...@google.com [2009-10-27 23:06:19]:
On Tue, Oct 27, 2009 at 11:04 PM, Paul Menage men...@google.com wrote:
On Tue, Oct 27, 2009 at 6:04 PM, Li Zefan l...@cn.fujitsu.com wrote:
I think maybe it's better to store struct file *file to struct cftype,
so we don't need
(Cc'ing mem controller maintainers in case they find this useful..)
Cc: Balbir Singh bal...@linux.vnet.ibm.com
Cc: Pavel Emelyanov xe...@openvz.org
Cc: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
Hmm, maybe useful if we decided to add memory.drop_memory or some...
But now, we have
because its limit is a problem,
dirty_ratio for memcg should be implemented.
I tend to agree, looks like dirty_ratio will become important along
with overcommit support in the future.
Balbir Singh.
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2009-08-24 15:58:35]:
On Mon, 24 Aug 2009 08:17:06 +0200
Dietmar Maurer diet...@proxmox.com wrote:
how about memsw_limit for swap? :
I am looking for swap usage statistics from cgroup right now from
memcontrol.c :) but as you did
also want to mention that elsewhere
the sequence is unlock cgroup_mutex followed by inode->i_mutex.
Acked-by: Balbir Singh bal...@linux.vnet.ibm.com
--
Balbir
* Zefan Li lizf.ker...@gmail.com [2009-07-21 19:38:03]:
2009/7/21, Balbir Singh bal...@linux.vnet.ibm.com:
* Xiaotian Feng df...@redhat.com [2009-07-21 18:25:26]:
In cgroup_get_sb, the lock sequence is:
mutex_lock(inode->i_mutex);
mutex_lock(cgroup->mutex);
so
* Dan Malek d...@embeddedalley.com [2009-07-14 12:13:32]:
If you look at my presentation from the last ELC, you
will see this patch is one small step of many to improve
resource management. This event notification discussion
is important, but still just a tiny implementation detail in
a
* Dan Malek d...@embeddedalley.com [2009-07-16 11:16:29]:
On Jul 16, 2009, at 10:15 AM, Balbir Singh wrote:
Dan, if you are suggesting that we incrementally add features, I
completely agree with you, that way the code is reviewable and
maintainable. As we add features we need
* men...@google.com men...@google.com [2009-07-13 15:15:45]:
On Tue, Jul 7, 2009 at 5:56 PM, KAMEZAWA
Hiroyuki kamezawa.hir...@jp.fujitsu.com wrote:
I know people like to wait on file descriptors to get notifications
these days.
Can't we have an event file descriptor in the cgroup layer and
* men...@google.com men...@google.com [2009-07-13 23:49:16]:
On Mon, Jul 13, 2009 at 10:56 PM, Balbir Singh bal...@linux.vnet.ibm.com
wrote:
Waiting for the next scheduling point might be too long, since a
thread can block for arbitrary amounts of time and keeping the marker
around for
* men...@google.com men...@google.com [2009-07-10 16:58:23]:
On Sat, Jul 4, 2009 at 11:38 PM, Balbir Singh bal...@linux.vnet.ibm.com
wrote:
Paul, I don't see an interface to migrate all procs or at-least I
can't read it in the changelog. As discussed in the containers
mini-summit in
* men...@google.com men...@google.com [2009-07-13 09:26:26]:
On Mon, Jul 13, 2009 at 5:11 AM, Balbir Singh bal...@linux.vnet.ibm.com
wrote:
How about lazy migration? Mark a group as to move when the kernel sees
it next for scheduling.
Waiting for the next scheduling point might be too
* Vivek Goyal vgo...@redhat.com [2009-07-08 09:41:14]:
On Wed, Jul 08, 2009 at 09:26:21AM +0530, Balbir Singh wrote:
* Vivek Goyal vgo...@redhat.com [2009-07-02 16:01:32]:
Hi All,
Here is the V6 of the IO controller patches generated on top of
2.6.31-rc1.
Previous
* Vladislav Buzov vbu...@embeddedalley.com [2009-07-07 13:25:10]:
This patch updates the Memory Controller cgroup to add
a configurable memory usage limit notification. The feature
was presented at the April 2009 Embedded Linux Conference.
Signed-off-by: Dan Malek d...@embeddedalley.com
* Vivek Goyal vgo...@redhat.com [2009-07-02 16:01:32]:
Hi All,
Here is the V6 of the IO controller patches generated on top of 2.6.31-rc1.
Previous versions of the patches were posted here.
(V1) http://lkml.org/lkml/2009/3/11/486
(V2) http://lkml.org/lkml/2009/5/5/275
(V3)
* men...@google.com men...@google.com [2009-07-02 16:26:15]:
The following series (written by Ben Blum) adds a cgroup.procs file
to each cgroup that reports unique tgids rather than pids, and fixes a
pid namespace bug in the existing tasks file that could cause
readers in different namespaces
* KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com [2009-07-02 10:57:07]:
On Wed, 1 Jul 2009 18:36:36 -0700
Paul Menage men...@google.com wrote:
Thanks Li - but as I said to Serge in the email when I brought this up
originally, I already had a patch in mind for this; I've had an intern
* Serge E. Hallyn se...@us.ibm.com [2009-06-30 15:06:13]:
Quoting Balbir Singh (bal...@linux.vnet.ibm.com):
On Tue, Jun 23, 2009 at 8:26 PM, Serge E. Hallyn se...@us.ibm.com wrote:
A topic on ksummit agenda is 'containers end-game and how do we
get there'.
So for starters, looking
namespace. I think there are quite a few network namespace
exploiters who require sysfs directory tagging (or some equivalent) to
allow us to migrate physical devices into network namespaces. And
checkpoint/restart needs... checkpoint/restart.
Balbir Singh
:51:16PM +0530, Balbir Singh wrote:
* Vivek Goyal vgo...@redhat.com [2009-06-19 16:37:18]:
Hi All,
Here is the V5 of the IO controller patches generated on top of
2.6.30.
[snip]
Testing
===
[snip]
I've not been
* Fabio Checconi fchecc...@gmail.com [2009-06-23 06:10:52]:
From: Vivek Goyal vgo...@redhat.com
Date: Mon, Jun 22, 2009 10:43:37PM -0400
On Mon, Jun 22, 2009 at 02:43:13PM +0200, Fabio Checconi wrote:
...
Please help me understand this, we sort the tree by finish time, but
1 - 100 of 525 matches