Siddha, Suresh B wrote:
On Tue, May 29, 2007 at 07:18:18PM -0700, Peter Williams wrote:
Siddha, Suresh B wrote:
I can try 32-bit kernel to check.
Don't bother. I just checked 2.6.22-rc3 and the problem is not present
which means something between rc2 and rc3 has fixed the problem. I hate
it
Siddha, Suresh B wrote:
On Tue, May 29, 2007 at 07:18:18PM -0700, Peter Williams wrote:
Siddha, Suresh B wrote:
I can try 32-bit kernel to check.
Don't bother. I just checked 2.6.22-rc3 and the problem is not present
which means something between rc2 and rc3 has fixed the problem. I hate
it
On Tue, May 29, 2007 at 07:18:18PM -0700, Peter Williams wrote:
> Siddha, Suresh B wrote:
> > I can try 32-bit kernel to check.
>
> Don't bother. I just checked 2.6.22-rc3 and the problem is not present
> which means something between rc2 and rc3 has fixed the problem. I hate
> it when problems
Siddha, Suresh B wrote:
On Tue, May 29, 2007 at 04:54:29PM -0700, Peter Williams wrote:
I tried with various refresh rates of top too.. Do you see the issue
at runlevel 3 too?
I haven't tried that.
Do your spinners ever relinquish the CPU voluntarily?
Nope. Simple and plain while(1); 's
I can try 32-bit kernel to check.
On Tue, May 29, 2007 at 04:54:29PM -0700, Peter Williams wrote:
> > I tried with various refresh rates of top too.. Do you see the issue
> > at runlevel 3 too?
>
> I haven't tried that.
>
> Do your spinners ever relinquish the CPU voluntarily?
Nope. Simple and plain while(1); 's
I can try 32-bit kernel to check.
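The "hard spinners" mentioned throughout this thread are exactly what the answer above says: plain busy loops. A minimal sketch of such a spinner (not taken from the thread; start four copies on a two-CPU box and watch the %CPU column in top) could be:

    /* spinner.c - a plain while(1) CPU hog, as described above */
    int main(void)
    {
        for (;;)
            ;    /* never blocks, never yields the CPU */
    }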
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 04:23:19PM -0700, Peter Williams wrote:
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
Further testing indicates that CONFIG_SCHED_MC is not implicated and
it's CONFIG_SCHED_SMT that's causing the problem.
On Thu, May 24, 2007 at 04:23:19PM -0700, Peter Williams wrote:
> Siddha, Suresh B wrote:
> > On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
> > >
> > > Further testing indicates that CONFIG_SCHED_MC is not implicated and
> > > it's CONFIG_SCHED_SMT that's causing the problem. Thi
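For anyone retracing the isolation step above: CONFIG_SCHED_MC and CONFIG_SCHED_SMT are independent Kconfig options, so the usual approach is to build the same kernel twice, changing only one option per build; illustrative .config fragments:

    # build A: multi-core balancing only
    CONFIG_SCHED_MC=y
    # CONFIG_SCHED_SMT is not set

    # build B: both options enabled
    CONFIG_SCHED_MC=y
    CONFIG_SCHED_SMT=y

If the misbehaviour appears only with CONFIG_SCHED_SMT enabled, CONFIG_SCHED_MC is cleared, which is the conclusion drawn above.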
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
Peter Williams wrote:
The relevant code, find_busiest_group() and find_busiest_queue(), has a
lot of code that is ifdefed by CONFIG_SCHED_MC and CONFIG_SCHED_SMT and,
as these macros were defined in the kernels I was testing
On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
>Peter Williams wrote:
>> The relevant code, find_busiest_group() and find_busiest_queue(), has a
>> lot of code that is ifdefed by CONFIG_SCHED_MC and CONFIG_SCHED_SMT and,
>> as these macros were defined in the kernels I was testing
Peter Williams wrote:
Peter Williams wrote:
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and
Dmitry Adamushko wrote:
On 22/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
> [...]
> Hum.. I guess, a 0/4 scenario wouldn't fit well in this explanation..
No, and I haven't seen one.
Well, I just took one of your calculated probabilities as something
you have really observed - (*) below.
Ingo Molnar wrote:
* Chris Friesen <[EMAIL PROTECTED]> wrote:
Is there a way in CFS to tune the amount of time over which the load
balancer is fair? (Of course there would be some overhead involved.)
it should be fair pretty fast (see the 10 seconds run of massive_intr) -
so it's not 1 min
* Chris Friesen <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
>
> >CFS is fair even on SMP. Consider for example the worst-case
> >3-tasks-on-2-CPUs workload on a 2-CPU box:
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 2658 mingo 20 0 1580 248 200 R 67 0.0 0:56.30 loop
Ingo Molnar wrote:
CFS is fair even on SMP. Consider for example the worst-case
3-tasks-on-2-CPUs workload on a 2-CPU box:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2658 mingo 20 0 1580 248 200 R 67 0.0 0:56.30 loop
2656 mingo 20 0 1580 252 2
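For context on the numbers above: with three equal-weight tasks sharing two CPUs, a fair scheduler should give each task 2/3 of a CPU over time, i.e. roughly 67% in top's %CPU column, which matches the 67 shown for the loop task in the excerpt.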
Peter Williams wrote:
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they r
On 22/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
> [...]
> Hum.. I guess, a 0/4 scenario wouldn't fit well in this explanation..
No, and I haven't seen one.
Well, I just took one of your calculated probabilities as something
you have really observed - (*) below.
"The probabilities for t
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less co
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less constant interval (and,
i
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less constant interval (and,
in this case, X would also
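The "jitter the load balancing interval" suggestion quoted above is about breaking lock-step between a fixed rebalance period and observers such as top, gkrellm and X that wake at a nearly constant interval. A rough illustration of the idea (names and values are invented for the example; this is not kernel code):

    #include <stdlib.h>

    #define BASE_INTERVAL   64    /* nominal rebalance period, in ticks */
    #define MAX_JITTER       8    /* maximum +/- offset per period */

    /* Pick the next rebalance delay: the base period plus a small random
     * offset, so a periodic observer cannot stay synchronised with it. */
    static unsigned long next_balance_delay(void)
    {
        long jitter = (rand() % (2 * MAX_JITTER + 1)) - MAX_JITTER;

        return BASE_INTERVAL + jitter;    /* anywhere in 56..72 ticks */
    }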
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> cfs should probably consider aggregate lag as opposed to aggregate
>> weighted load. Mainline's convergence to proper CPU bandwidth
>> distributions on SMP (e.g. N+1 tasks of equal nice on N cpus) is
>> incredibly slow and probably also fragile
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> cfs should probably consider aggregate lag as opposed to aggregate
> weighted load. Mainline's convergence to proper CPU bandwidth
> distributions on SMP (e.g. N+1 tasks of equal nice on N cpus) is
> incredibly slow and probably also fragile
On Sat, May 19, 2007 at 03:27:54PM +0200, Dmitry Adamushko wrote:
> Just another (quick) idea. Say, the load balancer would consider not
> only p->load_weight but also something like Tw(task) =
> (time_spent_on_runqueue / total_task's_runtime) * some_scale_constant
> as an additional "load" component
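A rough sketch of the extra load component proposed above (identifiers are invented for illustration): a task that spends most of its runnable time waiting on the runqueue, rather than actually running, contributes a larger value, so the balancer would prefer to migrate it.

    #define TW_SCALE 1024    /* the some_scale_constant from the quoted text */

    /* Tw(task) = (time_spent_on_runqueue / total_runtime) * TW_SCALE,
     * computed in integer arithmetic. */
    static unsigned long task_wait_weight(unsigned long time_on_runqueue,
                                          unsigned long total_runtime)
    {
        if (total_runtime == 0)
            return 0;
        return (time_on_runqueue * TW_SCALE) / total_runtime;
    }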
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less constant interval (and,
i
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less constant interval (and,
in this case, X would also
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I've now done this test on a number of kernels: 2.6.21 and 2.6.22-rc1
with and without CFS; and the problem is always present. It's not
"nice" related as the all four tasks are run at nice == 0.
could you
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I've now done this test on a number of kernels: 2.6.21 and 2.6.22-rc1
with and without CFS; and the problem is always present. It's not
"nice" related as the all four tasks are run at nice == 0.
could you try -v13 and did this b
* Bill Huey <[EMAIL PROTECTED]> wrote:
> On Sun, May 13, 2007 at 05:38:53PM +0200, Ingo Molnar wrote:
> > Even a simple 3D app like glxgears does a sys_sched_yield() for
> > every frame it generates (!) on certain 3D cards, which in essence
> > punishes any scheduler that implements sys_sched_yield() in a sane manner.
On Sun, May 13, 2007 at 05:38:53PM +0200, Ingo Molnar wrote:
>> So i've added a yield workaround to -v12, which makes it work similar to
>> how the vanilla scheduler and SD does it. (Xorg has been notified and
>> this bug should be fixed there too. This took some time to debug because
>> the 3D
On Thu, May 17, 2007 at 05:18:41PM -0700, Bill Huey wrote:
> On Sun, May 13, 2007 at 05:38:53PM +0200, Ingo Molnar wrote:
> > Even a simple 3D app like glxgears does a sys_sched_yield() for every
> > frame it generates (!) on certain 3D cards, which in essence punishes
> > any scheduler that implements sys_sched_yield() in a sane manner.
On Sun, May 13, 2007 at 05:38:53PM +0200, Ingo Molnar wrote:
> Even a simple 3D app like glxgears does a sys_sched_yield() for every
> frame it generates (!) on certain 3D cards, which in essence punishes
> any scheduler that implements sys_sched_yield() in a sane manner. This
> interaction of C
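The pattern being criticised above is, in outline, the following; draw_frame() is a placeholder, and the point is the unconditional sched_yield() once per frame, which makes the client's frame rate depend on how the scheduler treats yielding tasks:

    #include <sched.h>

    extern void draw_frame(void);    /* placeholder for the actual GL work */

    void render_loop(void)
    {
        for (;;) {
            draw_frame();
            sched_yield();    /* one yield per rendered frame */
        }
    }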
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
Load balancing appears to be badly broken in this version. When I
started 4 hard spinners on my 2 CPU machine one ended up on one CPU
and the other 3 on the other CPU and they stayed there.
could you try to debug this a bit more
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
As usual, any sort of feedback, bugreport, fix and suggestion is more
than welcome,
Load balancing appears to be badly broken in this version. When I
started 4 hard spinners on my 2 CPU machine one ended up on one CPU
and the other 3 on the other CPU and they stayed there.
* Peter Williams <[EMAIL PROTECTED]> wrote:
> >As usual, any sort of feedback, bugreport, fix and suggestion is more
> >than welcome,
>
> Load balancing appears to be badly broken in this version. When I
> started 4 hard spinners on my 2 CPU machine one ended up on one CPU
> and the other 3 on the other CPU and they stayed there.
Ingo Molnar wrote:
i'm pleased to announce release -v12 of the CFS scheduler patchset.
The CFS patch against v2.6.22-rc1, v2.6.21.1 or v2.6.20.10 can be
downloaded from the usual place:
http://people.redhat.com/mingo/cfs-scheduler/
-v12 fixes the '3D bug' that caused trivial latencies in 3D games
i'm pleased to announce release -v12 of the CFS scheduler patchset.
The CFS patch against v2.6.22-rc1, v2.6.21.1 or v2.6.20.10 can be
downloaded from the usual place:
http://people.redhat.com/mingo/cfs-scheduler/
-v12 fixes the '3D bug' that caused trivial latencies in 3D games: it
turn