From: Glauber Costa
While stress-running very-small container scenarios with the Kernel
Memory Controller, I've run into a lockdep-detected lock imbalance in
cfq-iosched.c.
I'll apologize beforehand for not posting a backlog: I didn't anticipate
it would be so hard to reproduce, so I didn't
Hello Mr. Someone.
On 01/28/2013 06:28 PM, Aristeu Rozanski wrote:
> On Fri, Jan 25, 2013 at 06:21:00PM -0800, Eric W. Biederman wrote:
>> When I initially wrote the code for /proc/pid/uid_map. I was lazy
>> and avoided duplicate mappings by the simple expedient of ensuring the
>> first number in a
On 01/28/2013 12:14 PM, Eric W. Biederman wrote:
> Lord Glauber Costa of Sealand writes:
>
>> I just saw in a later patch of yours that your concern here seems not
>> limited to backed ram by tmpfs, but with things like the internal
>> structures for userns , to av
On 01/28/2013 08:19 PM, Eric W. Biederman wrote:
Lord Glauber Costa of Sealand glom...@parallels.com writes:
On 01/28/2013 12:14 PM, Eric W. Biederman wrote:
Lord Glauber Costa of Sealand glom...@parallels.com writes:
I just saw in a later patch of yours that your concern here seems
On 01/28/2013 11:37 AM, Lord Glauber Costa of Sealand wrote:
> On 01/26/2013 06:22 AM, Eric W. Biederman wrote:
>>
>> In the help text describing user namespaces recommend use of memory
>> control groups. In many cases memory control groups are the only
>> mechanism
On 01/26/2013 06:22 AM, Eric W. Biederman wrote:
>
> In the help text describing user namespaces recommend use of memory
> control groups. In many cases memory control groups are the only
> mechanism there is to limit how much memory a user who can create
> user namespaces can use.
>
>
From: Glauber Costa
All the information we have that is needed for cpuusage (and
cpuusage_percpu) is present in schedstats. It is already recorded
in a sane hierarchical way.
If we have CONFIG_SCHEDSTATS, we don't really need to do any extra
work. All former functions become empty inlines
off-by: Tejun Heo
Cc: Peter Zijlstra
Cc: Glauber Costa
Cc: Michal Hocko
Cc: Kay Sievers
Cc: Lennart Poettering
Cc: Dave Jones
Cc: Ben Hutchings
Cc: Paul Turner
---
init/Kconfig| 11 ++-
kernel/cgroup.c | 47 ++-
kernel/sc
From: Glauber Costa
Hi all,
This is an attempt to provide userspace with enough information to reconstruct
a per-container version of files like "/proc/stat". In particular, we are
interested in knowing the per-cgroup slices of user time, system time, wait
time, number of processes, and
on
top of which cpu can implement proper optimization.
[ glommer: don't call *_charge in stop_task.c ]
Signed-off-by: Tejun Heo
Signed-off-by: Glauber Costa
Cc: Peter Zijlstra
Cc: Michal Hocko
Cc: Kay Sievers
Cc: Lennart Poettering
Cc: Dave Jones
Cc: Ben Hutchings
Cc: Paul Turner
From: Glauber Costa
We already track multiple tick statistics per-cgroup, using
the task_group_account_field facility. This patch accounts
guest_time in that manner as well.
Signed-off-by: Glauber Costa
CC: Peter Zijlstra
CC: Paul Turner
---
kernel/sched/cputime.c | 10 --
1 file
From: Glauber Costa
The CPU cgroup is, so far, undocumented. Although data exists in the
Documentation directory about its functioning, it is usually spread,
and/or presented in the context of something else. This file
consolidates all cgroup-related information about it.
Signed-off-by: Glauber
From: Glauber Costa
exec_clock already provides per-group cpu usage metrics, and can be
reused by cpuacct in case cpu and cpuacct are comounted.
However, it is only provided by tasks in fair class. Doing the same for
rt is easy, and can be done in an already existing hierarchy loop
From: Glauber Costa
The file cpu.stat_percpu will show various scheduler related
information, that are usually available to the top level through other
files.
For instance, most of the meaningful data in /proc/stat is presented
here. Given this file, a container can easily construct a local
: incorporated mailing list feedback ]
Signed-off-by: Peter Zijlstra
Signed-off-by: Glauber Costa
---
include/linux/sched.h| 8 +++-
kernel/sched/core.c | 20 +++-
kernel/sched/fair.c | 6 +-
kernel/sched/idle_task.c | 6 +-
kernel/sched/rt.c| 27
From: Glauber Costa
This patch changes the calculation of nr_context_switches. The variable
"nr_switches" is now used to account for the number of transitions to the
idle task, or stop task. It is removed from the schedule() path.
The total calculation can be made using
-off-by: Tejun Heo
Cc: Peter Zijlstra
Cc: Glauber Costa
---
include/linux/cgroup.h | 1 +
kernel/cgroup.c| 3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 7d73905..7d193f9 100644
--- a/include/linux/cgroup.h
+++ b
From: Glauber Costa
Context switches are, to this moment, a property of the runqueue. When
running containers, we would like to be able to present a separate
figure for each container (or cgroup, in this context).
The chosen way to accomplish this is to increment a per cfs_rq or
rt_rq
From: Glauber Costa
Commit 8f618968 changed stop_task to do the same bookkeeping as the
other classes. However, the call to cpuacct_charge() doesn't affect
the scheduler decisions at all, and doesn't need to be moved over.
Moreover, being a kthread, the migration thread won't belong to any
On 01/22/2013 03:21 AM, Dave Chinner wrote:
> On Mon, Jan 21, 2013 at 08:08:53PM +0400, Glauber Costa wrote:
>> On 11/28/2012 03:14 AM, Dave Chinner wrote:
>>> [PATCH 09/19] list_lru: per-node list infrastructure
>>>
>>> This makes the generic LRU
On 01/10/2013 01:27 AM, Glauber Costa wrote:
> On 01/10/2013 01:17 AM, Andrew Morton wrote:
>> On Thu, 10 Jan 2013 01:10:02 +0400
>> Glauber Costa wrote:
>>
>>> The main advantage I see in this approach, is that there is way less
>>> data to be written
On 01/10/2013 12:42 AM, Andrew Morton wrote:
> Also, I'm not seeing any changes to Documentation/ in this patchset.
> How do we explain the interface to our users?
There is little point in adding any Documentation, since the cpu cgroup
itself is not documented. I took the liberty of doing this
On 01/23/2013 05:53 AM, Colin Cross wrote:
> On Tue, Jan 22, 2013 at 5:02 PM, Tejun Heo wrote:
>> Hello,
>>
>> On Mon, Jan 21, 2013 at 04:14:27PM +0400, Glauber Costa wrote:
>>>> Android userspace is currently using both cpu and cpuacct, and not
>
On 11/28/2012 03:14 AM, Dave Chinner wrote:
> [PATCH 09/19] list_lru: per-node list infrastructure
>
> This makes the generic LRU list much more scalable by changing it to
> a {list,lock,count} tuple per node. There are no external API
> changes to this changeover, so is transparent to current
On 01/16/2013 04:33 AM, Colin Cross wrote:
> On Wed, Jan 9, 2013 at 3:45 AM, Glauber Costa wrote:
>> [ update: I thought I posted this already before leaving for holidays.
>> However,
>> now that I am checking for replies, I can't find nor replies nor the
>> ori

On 01/18/2013 04:10 PM, Dave Chinner wrote:
> On Fri, Jan 18, 2013 at 11:10:00AM -0800, Glauber Costa wrote:
>> On 01/18/2013 12:11 AM, Dave Chinner wrote:
>>> On Thu, Jan 17, 2013 at 04:14:10PM -0800, Glauber Costa wrote:
>>>> On 01/17/2013 04:10 PM, Dave Chinner wro
On 01/18/2013 12:11 AM, Dave Chinner wrote:
> On Thu, Jan 17, 2013 at 04:14:10PM -0800, Glauber Costa wrote:
>> On 01/17/2013 04:10 PM, Dave Chinner wrote:
>>> And then each object uses:
>>>
>>> struct lru_item {
>>> struct list_head global_list;
>>> struct list_head memcg_list;
>>> }
On 01/18/2013 12:08 AM, Dave Chinner wrote:
> On Thu, Jan 17, 2013 at 04:51:03PM -0800, Glauber Costa wrote:
>> On 01/17/2013 04:10 PM, Dave Chinner wrote:
>>> and we end up with:
>>>
>>> lru_add(struct lru_list *lru, struct lru_item *item)
>>> {
&
On 01/17/2013 04:10 PM, Dave Chinner wrote:
> and we end up with:
>
> lru_add(struct lru_list *lru, struct lru_item *item)
> {
> node_id = min(object_to_nid(item), lru->numnodes);
>
> __lru_add(lru, node_id, &item->global_list);
> if (memcg) {
> memcg_lru =
On 01/17/2013 04:10 PM, Dave Chinner wrote:
> And then each object uses:
>
> struct lru_item {
> struct list_head global_list;
> struct list_head memcg_list;
> }
by objects you mean dentries, inodes, and the such, right?
Would it be acceptable to you?
We've been of course doing our
>> Deepest fears:
>>
>> 1) snakes.
>
> Snakes are merely poisonous. Drop Bears are far more dangerous :P
>
fears are irrational anyway...
>> 2) It won't surprise you to know that I am adapting your work, which
>> provides a very sane and helpful API, to memcg shrinking.
>>
>> The dumb and
>> The superblocks only, are present by the dozens even in a small system,
>> and I believe the whole goal of this API is to get more users to switch
>> to it. This can easily use up a respectable bunch of megs.
>>
>> Isn't it a bit too much ?
>
> Maybe, but for active superblocks it only takes
On 11/27/2012 03:14 PM, Dave Chinner wrote:
> From: Dave Chinner
>
> Now that we have an LRU list API, we can start to enhance the
> implementation. This splits the single LRU list into per-node lists
> and locks to enhance scalability. Items are placed on lists
> according to the node the
On 01/15/2013 02:19 AM, Sha Zhengju wrote:
> On Mon, Jan 14, 2013 at 10:55 PM, Glauber Costa wrote:
>> On 01/14/2013 12:34 AM, Sha Zhengju wrote:
>>>> + struct kernel_cpustat *kcpustat =
>>>> this_cpu_ptr(ca->cpustat);
>>>>>
On 01/14/2013 12:34 AM, Sha Zhengju wrote:
>> + struct kernel_cpustat *kcpustat = this_cpu_ptr(ca->cpustat);
>> > +
>> > kcpustat = this_cpu_ptr(ca->cpustat);
> Is this reassignment unnecessary?
>
>
No.
--
To unsubscribe from this list: send the line "unsubscribe
> If it's configured as ZONE_NORMAL, you need to pray for offlining memory.
>
> AFAIK, IBM's ppc? has 16MB section size. So, some of sections can be
> offlined
> even if they are configured as ZONE_NORMAL. For them, placement of offlined
> memory is not important because it's virtualized by LPAR,
On 01/10/2013 11:31 AM, Kamezawa Hiroyuki wrote:
> (2013/01/10 16:14), Glauber Costa wrote:
>> On 01/10/2013 06:17 AM, Tang Chen wrote:
>>>>> Note: if the memory provided by the memory device is used by the
>>>>> kernel, it
>>>>> can't be offl
On 01/10/2013 02:06 AM, Anton Vorontsov wrote:
> On Wed, Jan 09, 2013 at 01:55:14PM -0800, Tejun Heo wrote:
> [...]
>>> We can use mempressure w/o memcg, and even then it can (or should :) be
>>> useful (for cpuset, for example).
>>
>> The problem is that you end with, at the very least, duplicate
On 01/10/2013 06:17 AM, Tang Chen wrote:
>>> Note: if the memory provided by the memory device is used by the
>>> kernel, it
>>> can't be offlined. It is not a bug.
>>
>> Right. But how often does this happen in testing? In other words,
>> please provide an overall description of how well memory
On 01/10/2013 01:17 AM, Andrew Morton wrote:
> On Thu, 10 Jan 2013 01:10:02 +0400
> Glauber Costa wrote:
>
>> The main advantage I see in this approach, is that there is way less
>> data to be written using a header. Although your way works, it means we
>> will write
On 01/10/2013 12:37 AM, Tejun Heo wrote:
> Hello,
>
> Can you please cc me too when posting further patches? I kinda missed
> the whole discussion upto this point.
>
> On Fri, Jan 04, 2013 at 12:29:11AM -0800, Anton Vorontsov wrote:
>> This commit implements David Rientjes' idea of mempressure
On 01/10/2013 12:42 AM, Andrew Morton wrote:
> On Wed, 9 Jan 2013 15:45:38 +0400
> Glauber Costa wrote:
>
>> The file cpu.stat_percpu will show various scheduler related
>> information, that are usually available to the top level through other
>> file
On 12/30/2012 09:58 AM, Wen Congyang wrote:
> At 12/25/2012 04:35 PM, Glauber Costa Wrote:
>> On 12/24/2012 04:09 PM, Tang Chen wrote:
>>> From: Wen Congyang
>>>
>>> memory can't be offlined when CONFIG_MEMCG is selected.
>>> For example: there is
On 01/09/2013 01:44 AM, Andrew Morton wrote:
> On Fri, 4 Jan 2013 00:29:11 -0800
> Anton Vorontsov wrote:
>
>> This commit implements David Rientjes' idea of mempressure cgroup.
>>
>> The main characteristics are the same to what I've tried to add to vmevent
>> API; internally, it uses Mel
On 01/09/2013 01:15 PM, Andrew Morton wrote:
> On Wed, 9 Jan 2013 12:56:46 +0400 Glauber Costa wrote:
>
>>> +#if IS_SUBSYS_ENABLED(CONFIG_CGROUP_MEMPRESSURE)
>>> +SUBSYS(mpc_cgroup)
>>> +#endif
>>
>> It might be just me, but if one does not know wha
not likely, it seems a fair
price to pay.
2. Those figures do not include switches from and to the idle or stop
task. Those need to be recorded separately, which will happen in a
follow up patch.
Signed-off-by: Glauber Costa
CC: Peter Zijlstra
CC: Paul Turner
---
kernel/sched/fair.c | 18
s provided
by the cpu controller, resulting in greater simplicity.
This also tries to hook into the existing scheduler hierarchy walks instead of
providing new ones.
Glauber Costa (7):
don't call cpuacct_charge in stop_task.c
sched: adjust exec_clock to use it as cpu usage metric
cpuacct: don'
this call quite useless.
Signed-off-by: Glauber Costa
CC: Mike Galbraith
CC: Peter Zijlstra
CC: Thomas Gleixner
---
kernel/sched/stop_task.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/stop_task.c b/kernel/sched/stop_task.c
index da5eb5b..fda1cbe 100644
--- a/kernel/sched
fair and rt classes are recorded in the root_task_group. One can easily
derive the total figure by adding those quantities together.
Signed-off-by: Glauber Costa
CC: Peter Zijlstra
CC: Paul Turner
---
kernel/sched/core.c | 17 +++--
kernel/sched/idle_task.c | 3 +++
kernel/sched/s
the independent hierarchy walk executed by
cpuacct.
Signed-off-by: Glauber Costa
CC: Dave Jones
CC: Ben Hutchings
CC: Peter Zijlstra
CC: Paul Turner
CC: Lennart Poettering
CC: Kay Sievers
CC: Tejun Heo
---
kernel/sched/rt.c| 1 +
kernel/sched/sched.h | 3 +++
2 files changed, 4 insertions
are cgroup-local versions of
their global counterparts.
The file includes a header, so fields can come and go if needed.
Signed-off-by: Glauber Costa
CC: Peter Zijlstra
CC: Paul Turner
---
kernel/sched/core.c | 97
kernel/sched/fair.c
On 01/04/2013 01:35 AM, Tejun Heo wrote:
> Note that this leaves memcg as the only external user of cgroup_mutex.
> Michal, Kame, can you guys please convert memcg to use its own locking
> too?
I've already done this, I just have to rework it according to latest
feedback and repost it.
It should
Hi.
I have a couple of small questions.
On 01/04/2013 12:29 PM, Anton Vorontsov wrote:
> This commit implements David Rientjes' idea of mempressure cgroup.
>
> The main characteristics are the same to what I've tried to add to vmevent
> API; internally, it uses Mel Gorman's idea of