* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> > Seems this didn't get merged? Latest git as of today still has the code
> > as it was before this patch.
>
> This is a must-fix for .23 and Ingo previously mentioned that he will push it
> for .23
yep, it's queued up and I will send it with the
On Tue, Sep 04, 2007 at 07:35:21PM -0400, Chuck Ebbert wrote:
> On 08/28/2007 06:27 PM, Siddha, Suresh B wrote:
> > Try to fix MC/HT scheduler optimization breakage again, without breaking
> > the FUZZ logic.
> >
> > First fix the check
> > if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)
On 08/28/2007 06:27 PM, Siddha, Suresh B wrote:
> On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
>> Essentially I observed that nice 0 tasks still end up on two cores of the same
>> package, without getting spread out to two different packages. This behavior
>> is the same without this fix
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
> > Essentially I observed that nice 0 tasks still end up on two cores of the same
> > package, without getting spread out to two different packages. This
> > behavior
> > is the same without this
On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
> Essentially I observed that nice 0 tasks still end up on two cores of the same
> package, without getting spread out to two different packages. This behavior
> is the same without this fix and this fix doesn't help in any way.
Ingo,
On Mon, Aug 27, 2007 at 09:23:24PM +0200, Ingo Molnar wrote:
>
> * Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
>
> > > - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> > > + if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
> >
> > Ingo, this is still broken. This condition is always false for nice-0
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> > - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> > + if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
>
> Ingo, this is still broken. This condition is always false for nice-0
> tasks..
yes - negative reniced
On Thu, Aug 23, 2007 at 02:13:41PM +0200, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > [...] So how about the patch below instead?
>
> the right patch attached.
>
> >
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
On 8/23/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> with no patch, or with my patch below each gets ~66% of CPU time,
> long-term:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 2290 mingo 20 0 2736 528 252 R 67 0.0 3:22.95 bash
> 2291 mingo 20 0 2736
On Thu, 2007-08-23 at 14:13 +0200, Ingo Molnar wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > [...] So how about the patch below instead?
>
> the right patch attached.
>
> >
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] So how about the patch below instead?
the right patch attached.
>
Subject: sched: fix broken SMT/MC optimizations
From: "Siddha, Suresh B" <[EMAIL PROTECTED]>
On a four package system with HT - HT load balancing
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> * a think about bumping its value to force at least one task to be
> * moved
> */
> - if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> + if (*imbalance < busiest_load_per_task) {
>
* Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> Ingo, let me know if there are any side effects of this change. Thanks.
> ---
>
> On a four package system with HT - HT load balancing optimizations
> were broken. For example, if two tasks end up running on two logical
> threads of one of the packages,
Ingo, let me know if there are any side effects of this change. Thanks.
---
On a four package system with HT - HT load balancing optimizations
were broken. For example, if two tasks end up running on two logical
threads of one of the packages, scheduler is not able to pull one of
the tasks to a