On Sat, 6 Oct 2007, Paul Menage wrote:
> > The getting and putting of the tasks will prevent them from exiting or
> > being deallocated prematurely. But this is also a critical section that
> > will need to be protected by some mutex so it doesn't race with other
> > set_cpus_allowed().
>
> Is t
On Sat, 6 Oct 2007, Paul Jackson wrote:
> > struct cgroup_iter it;
> > struct task_struct *p, **tasks;
> > int i = 0;
> >
> > cgroup_iter_start(cs->css.cgroup, &it);
> > while ((p = cgroup_iter_next(cs->css.cgroup, &it))) {
> >         get_task_struct(p);
> >         t
Balbir Singh wrote:
> YAMAMOTO Takashi wrote:
>>> hi,
>>>
>>> i implemented some statistics for your memory controller.
>>>
>>> it's tested with 2.6.23-rc2-mm2 + memory controller v7.
>>> i think it can be applied to 2.6.23-rc4-mm1 as well.
>>>
>>> YAMAMOTO Takashi
>>>
>>> todo: something like nr_ac
On Sun, Oct 07, 2007 at 02:31:42PM +1300, Sam Vilain wrote:
> I see that 2.6.23 has the CFS in it - has anyone written a CPU
> controller for that scheduler yet?
Hi Sam,
Yes, it has been written. It is slated to go in 2.6.24 as part of the
CFS-devel tree. http://lkml.org/lkml/2007/9/24/412
The l
Paul M wrote:
>
> What's wrong with:
>
> allocate a page of task_struct pointers
> again:
>     need_repeat = false;
>     cgroup_iter_start();
>     while (cgroup_iter_next()) {
>         if (p->cpus_allowed != new_cpumask) {
>             store p;
>             if (page is full) {
>                 need_repeat = true;
>
On 10/6/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
> Need to include kmod.h to define UMH_WAIT_EXEC, at least
> for my configuration (sn2_defconfig).
>
> Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
Acked-by: Paul Menage <[EMAIL PROTECTED]>
> Cc: Pau
On 10/6/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
>
> This isn't working for me.
>
> The key kernel routine for updating a task's cpus_allowed
> cannot be called while holding a spinlock.
>
> But the above loop holds a spinlock, css_set_lock, between
> the cgroup_iter_start and the cgroup_iter_end
David wrote:
> It would probably be better to just save references to the tasks.
>
> struct cgroup_iter it;
> struct task_struct *p, **tasks;
> int i = 0;
>
> cgroup_iter_start(cs->css.cgroup, &it);
> while ((p = cgroup_iter_next(cs->css.cgroup, &it))) {
>
From: Paul Jackson <[EMAIL PROTECTED]>
Need to include kmod.h to define UMH_WAIT_EXEC, at least
for my configuration (sn2_defconfig).
Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
Cc: Paul Menage <[EMAIL PROTECTED]>
---
 kernel/cgroup.c |    1 +
1 file changed, 1 insertion(+)
--- 2.6.23-rc8
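Going by the changelog (kmod.h is needed for UMH_WAIT_EXEC, one insertion in kernel/cgroup.c), the truncated diff body presumably amounts to a single added include; its exact position among the file's existing includes is a guess:

```c
/* Presumed shape of the one-line change described above; placement
 * within kernel/cgroup.c's include block is a guess. */
#include <linux/kmod.h>   /* for UMH_WAIT_EXEC */
```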
Paul Menage wrote:
> What was wrong with my suggestion from a couple of emails back? Adding
> the following in cpuset_attach():
>
> struct cgroup_iter it;
> struct task_struct *p;
> cgroup_iter_start(cs->css.cgroup, &it);
> while ((p = cgroup_iter_next(cs->css.cgroup, &it)))
>         set_cpus_allo