[ cc: linux-fsdevel ]
On Thu, Feb 14, 2008 at 7:22 AM, Paul Menage [EMAIL PROTECTED] wrote:
On Wed, Feb 13, 2008 at 10:02 PM, Christoph Hellwig [EMAIL PROTECTED] wrote:
I think this concept is reasonable, but I don't think MS_BIND_FLAGS
is a descriptive name for this flag
On Thu, Feb 14, 2008 at 8:03 AM, Miklos Szeredi [EMAIL PROTECTED] wrote:
The flags argument could be the same as for regular mount, and
contain the mnt_flags - so the extra argument could maybe usefully be
a mnt_flags_mask, to indicate which flags we actually care about
overriding.
On Thu, Feb 14, 2008 at 9:31 AM, Miklos Szeredi [EMAIL PROTECTED] wrote:
I deliberately did not use the MS_* flags, which are currently a messy mix
of things with totally different meanings.
Does this solve all the issues?
We should add a size parameter either in the mount_params or as a
Add linux-fsdevel to the VFS entry in MAINTAINERS
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
Index: 2.6.24-mm1-bindflags/MAINTAINERS
===
--- 2.6.24-mm1-bindflags.orig
From: Paul Menage [EMAIL PROTECTED]
Add a new mount() flag, MS_BIND_FLAGS.
MS_BIND_FLAGS indicates that a bind mount should take its per-mount flags
from the arguments passed to mount() rather than from the source
mountpoint.
This flag allows you to create a bind mount with the desired per
On Feb 7, 2008 12:28 PM, Peter Zijlstra [EMAIL PROTECTED] wrote:
While on the subject, could someone document struct cgroup_subsys.
There's documentation for all the methods in Documentation/cgroup.txt
particular, I've wondered why we have: cgroup_subsys::can_attach() and
not use a return
On Feb 7, 2008 7:37 AM, Pavel Emelyanov [EMAIL PROTECTED] wrote:
The Documentation/cgroups.txt file contains the info on how
to write some controller for cgroups subsystem, but even with
this, one needs to write quite a lot of code before developing
the core (or copy-n-paste it from some other
On Jan 31, 2008 11:58 PM, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > Is there a restriction in CFS that stops a given group from
> > simultaneously holding tasks and sub-groups? If so, couldn't we change
> > CFS to make it possible rather than enforcing awkward restrictions on
> > cgroups?
On Jan 30, 2008 6:40 PM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Here are some questions that arise in this picture:
1. What is the relationship of the task-group in A/tasks with the
task-group in A/a1/tasks? In other words do they form siblings
of the same parent A?
I'd argue the
> the pid as it is seen from inside a namespace.
>
> Tune the code accordingly.
>
> Signed-off-by: Pavel Emelyanov <[EMAIL PROTECTED]>
Acked-by: Paul Menage <[EMAIL PROTECTED]>
>
> ---
> kernel/cgroup.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 2c5cccb..4766bb6
Update comments in cpuset.c
Some of the comments in kernel/cpuset.c were stale following the
transition to control groups; this patch updates them to more closely
match reality.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
Acked-by: Paul Jackson [EMAIL PROTECTED]
---
kernel/cpuset.c | 128
An approach that we've been experimenting with at Google is much simpler:
- add a network class id subsystem, that lets you associate an id
with each cgroup
- propagate this id to sockets created by that cgroup, and from there
to packets sent/received on that socket
- add a new traffic filter
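For context, the class-id approach sketched above is essentially what later landed in mainline as the net_cls cgroup subsystem plus the "cgroup" traffic classifier. The commands below are an illustrative sketch only: the mount path, device name, classid and rate are made-up examples, and root privileges plus a kernel with net_cls and cls_cgroup are assumed.

```
# Mount the net_cls hierarchy and create a group (paths illustrative).
mount -t cgroup -o net_cls net_cls /sys/fs/cgroup/net_cls
mkdir /sys/fs/cgroup/net_cls/batch

# classid packs major:minor as 0xAAAABBBB; 0x00100001 means class 10:1.
echo 0x00100001 > /sys/fs/cgroup/net_cls/batch/net_cls.classid
echo $$ > /sys/fs/cgroup/net_cls/batch/tasks

# Rate-limit traffic tagged with this cgroup's classid via HTB.
tc qdisc add dev eth0 root handle 10: htb
tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup
```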
On Jan 23, 2008 8:48 AM, Andrea Righi [EMAIL PROTECTED] wrote:
1. Implementation of soft limits (limit on contention of resource)
gets harder
Why? do you mean implementing a grace time when the soft-limit is
exceeded? this could be done in cgroup_nl_throttle() introducing 3
additional
On Jan 18, 2008 7:36 AM, Dhaval Giani [EMAIL PROTECTED] wrote:
On Fri, Jan 18, 2008 at 12:41:03PM +0100, Andrea Righi wrote:
Allow to limit the block I/O bandwidth for specific process containers
(cgroups) imposing additional delays on I/O requests for those processes
that exceed the limits
On Dec 1, 2007 10:36 AM, Rik van Riel [EMAIL PROTECTED] wrote:
With the /proc/refaults info, we can measure how much extra
memory each process group needs, if any.
What's the status of that? It looks as though it would be better than
the "accessed in the last N seconds" metric that we've been
Hi Vatsa,
Thanks, this looks pretty good.
On Nov 30, 2007 4:42 AM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
- Removed load average information. I felt it needs more thought (esp
to deal with SMP and virtualized platforms) and can be added for
2.6.25 after more
On Nov 29, 2007 6:11 PM, Nick Piggin [EMAIL PROTECTED] wrote:
And also some
results or even anecdotes of where this is going to be used would be
interesting...
We want to be able to run multiple isolated jobs on the same machine.
So being able to limit how much memory each job can consume, in
On Nov 12, 2007 11:59 PM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Thinking of it more, this requirement to group tasks for only accounting
purpose may be required for other resources (mem, io, network etc) as well?
Should we have a generic accounting controller which can provide these
Hi Linus,
Please can you revert commit 62d0df64065e7c135d0002f069444fbdfc64768f,
entitled "Task Control Groups: example CPU accounting subsystem" ?
This was originally intended as a simple initial example of how to
create a control groups subsystem; it wasn't intended for mainline,
but I didn't
> Signed-off-by: Diego Calleja <[EMAIL PROTECTED]>
(with the addition of akpm's KERN_INFO for cgroup_init_subsys())
Acked-by: Paul Menage <[EMAIL PROTECTED]>
Paul
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ
On Nov 12, 2007 10:00 PM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On second thoughts, this may be a useful controller of its own.
Say I just want to monitor usage (for accounting purpose) of a group of
tasks, but don't want to control their cpu consumption, then cpuacct
controller
On Nov 12, 2007 11:29 PM, Balbir Singh [EMAIL PROTECTED] wrote:
I think it's a good hack, but not sure about the complexity to implement
the code. I worry that if the number of tasks increase (say run into
thousands for one or more groups and a few groups have just a few
tasks), we'll lose
On Nov 12, 2007 11:00 PM, Balbir Singh [EMAIL PROTECTED] wrote:
Right now, one of the limitations of the CPU controller is that
the moment you create another control group, the bandwidth gets
divided by the default number of shares. We can't create groups
just for monitoring.
Could we get
On Nov 12, 2007 11:48 PM, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Regarding your concern about tracking cpu usage in different ways, it
could be mitigated if we have cpuacct controller track usage as per
information present in a task's sched entity structure
(tsk->se.sum_exec_runtime) i.e
Report CPU usage in CFS Cgroup directories
Adds a cpu.usage file to the CFS cgroup that reports CPU usage in
milliseconds for that cgroup's tasks
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
kernel/sched.c | 36 +++-
1 file changed, 31 insertions(+), 5
On 10/25/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Paul M wrote:
> > Sounds reasonable to me. Is there any kind of compile-time assert
> > macro in the kernel?
>
> Check out the assembly code generated by:
>
> BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
>
> (Hint: you can't find it ;)
On 10/23/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
>
> agreed, we need to be reporting idle time in (milli)seconds, although
> the requirement we had was to report it back in percentage. I guess the
> percentage figure can be derived from the raw idle time number.
>
> How about:
On 10/24/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
> Coding style fix - one line conditionals don't get braces.
>
> Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
Not a coding style that I'm in favor of, but I suppose it is the
kernel standard.
Acked-by: Paul Menage <[EMAIL PROTECTED]>
>
> ---
>
> This patch applies --after-- Adrian Bunk's patch:
> [2.6 patch] kernel/cgroup.c: remove dead code
>
> kernel/cgroup.c | 15 +--
> 1 file changed, 5 insertions(+), 10 d
On 10/24/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
> Simplify the space stripping code in cgroup file write.
>
> Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
Acked-by: Paul Menage <[EMAIL PROTECTED]>
> ---
> This patch applies after both:
> Adrian Bunk's
On 10/24/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Paul M wrote:
> > I think I'd rather not make this change - if we later changed the size
> > of release_agent_path[] this could silently fail. Can we get around
> > the coverity checker somehow?
>
> Perhaps we can simplify this check then, to:
-by: Paul Menage [EMAIL PROTECTED]
---
Documentation/cgroups.txt | 22 +++---
kernel/cgroup.c | 36
2 files changed, 35 insertions(+), 23 deletions(-)
Index: container-2.6.23-mm1/kernel/cgroup.c
On 10/24/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
>
> Two questions:
> - Is it really intended to perhaps change release_agent_path[] to have
>   less than PATH_MAX size?
I've got no intention to do so currently.
> - If yes, do you want to return -E2BIG for (nbytes >= PATH_MAX) or for
>   (nbytes >=
On 10/24/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
> cgroup_is_releasable() and notify_on_release() should be static,
> not global inline.
>
> Signed-off-by: Adrian Bunk <[EMAIL PROTECTED]>
Acked-by: Paul Menage <[EMAIL PROTECTED]>
>
> ---
>
> kernel/cgroup.c | 4 ++--
> 1 file changed, 2 insertions(+), 2
I think I'd rather not make this change - if we later changed the size
of release_agent_path[] this could silently fail. Can we get around
the coverity checker somehow?
Paul
On 10/24/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
> This patch removes dead code spotted by the Coverity checker
> (look
On 10/24/07, Adrian Bunk <[EMAIL PROTECTED]> wrote:
> cgroup_is_releasable() and notify_on_release() should be static,
> not global inline.
>
They seem like they could be usefully static inline - or will the
compiler inline them anyway since they're simple enough?
Paul
Clean up some CFS CGroup code
- replace "cont" with "cgrp" in a few places in the CFS cgroup code,
- use write_uint rather than write for cpu.shares write function
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
This is a resend of yesterday's mail with the same subject; the final hunk
On 10/22/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Steven wrote:
> > +void cpuset_rt_set_overload(struct task_struct *tsk, int cpu)
> > +{
> > +       cpu_set(cpu, task_cs(tsk)->rt_overload);
> > +}
>
> Question for Steven:
> What locks are held when cpuset_rt_set_overload() is called?
>
> Questions for Paul Menage:
> Does 'tsk' need to be locked for the above task_cs() call?
Cgroups doesn't change the locking rules for accessing a cpuset from a
task - you have to have one of:
- task_lock(task)
- callback_mutex
On 10/22/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
On Mon, Oct 22, 2007 at 05:49:39PM -0700, Paul Menage wrote:
+static u64 cpu_usage_read(struct cgroup *cgrp, struct cftype *cft)
+{
+ struct task_group *tg = cgroup_tg(cgrp);
+ int i;
+ u64 res = 0
On 10/22/07, Balbir Singh [EMAIL PROTECTED] wrote:
I think we also need the notion of load, like we have in cpu_acct.c
Yes, a notion of load would be good - but the load calculated by
cpu_acct.c isn't all that useful yet - it's just a total of the CPU
cycles used in the 10 second time interval
On 10/22/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Minor nit: From pov of making this patch series git bisect safe, shouldn't we
be registering a write_uint() handler in this patch as well?
Yes, we should. Sigh. I originally had the cleanup and the new
reporting interface in the same
On 10/22/07, Paul Menage [EMAIL PROTECTED] wrote:
Using cgroup_mutex is certainly possible for now, although more
heavy-weight than I'd like long term. Using css_get isn't the right
approach, I think - we shouldn't be able to cause an rmdir to fail due
to a concurrent read.
OK, the obvious
On 10/22/07, Paul Menage [EMAIL PROTECTED] wrote:
On 10/22/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Minor nit: From pov of making this patch series git bisect safe, shouldn't
we
be registering a write_uint() handler in this patch as well?
Yes, we should. Sigh. I originally had
On 10/23/07, Balbir Singh [EMAIL PROTECTED] wrote:
Well, most people who care about deletion will use the notify_on_release
callback and retry.
I'm not convinced this is true. Certainly the userspace approaches
we're developing at Google don't (currently) use notify_on_release.
Paul
On 10/23/07, Balbir Singh [EMAIL PROTECTED] wrote:
Well, without notify_on_release you can never be sure if a new task
got added to the control group or if someone acquired a reference
to it. I can't think of a safe way of removing control groups/cpusets
without using notify_on_release.
subsystem state objects alive until the file
is closed.
The documentation is updated to reflect the changed semantics of
destroy(); additionally the locking comments for destroy() and some
other methods were clarified and decrustified.
Signed-off-by: Paul Menage [EMAIL PROTECTED]
---
Documentation
On 10/23/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Adds a cpu.usage file to the CFS cgroup that reports CPU usage in
milliseconds for that cgroup's tasks
It would be nice to split this into user and sys time at some point.
Sounds reasonable - but does CFS track this?
We have also
On 10/23/07, Jeff Garzik [EMAIL PROTECTED] wrote:
Signed-off-by: Jeff Garzik [EMAIL PROTECTED]
Acked-by: Paul Menage [EMAIL PROTECTED]
---
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 5987dcc..3fe21e1 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2402,7 +2402,6 @@ struct
On 10/23/07, Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
Suppose you have two cgroups that would each want to use, say, 55% of
a CPU - technically they should each be regarded as having 45% idle
time, but if they run on the same CPU the chances are that they will
both always have some
Clean up some CFS CGroup code
- replace "cont" with "cgrp" in a few places in the CFS cgroup code,
- use write_uint rather than write for cpu.shares write function
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
kernel/sched.c | 51 +--
Report CPU usage in CFS Cgroup directories
Adds a cpu.usage file to the CFS cgroup that reports CPU usage in
milliseconds for that cgroup's tasks
This replaces the "example CPU Accounting CGroup subsystem" that
was merged into mainline last week.
Signed-off-by: Paul Menage <[EM
These two patches consist of a small cleanup to CFS, and adding a control file
reporting CPU usage in milliseconds in each CGroup directory. They're just
bundled together since the second patch depends slightly on the cleanups in the
first patch.
Thanks - this has already been sent to akpm.
Paul
On 10/22/07, Dave Hansen [EMAIL PROTECTED] wrote:
I just noticed this in mainline:
kernel/cgroup.c: In function `proc_cgroupstats_show':
kernel/cgroup.c:2405: warning: unused variable `root'
---
linux-2.6.git-dave/kernel/cgroup.c |