Re: [PATCH 5/5] sched: make fix_small_imbalance work with asymmetric packing

2010-04-14 Thread Michael Neuling
In message <1271208670.2834.55.ca...@sbs-t61.sc.intel.com> you wrote:
> On Tue, 2010-04-13 at 05:29 -0700, Peter Zijlstra wrote:
> > On Fri, 2010-04-09 at 16:21 +1000, Michael Neuling wrote:
> > > With the asymmetric packing infrastructure, fix_small_imbalance is
> > > causing idle higher threads to pull tasks off lower threads.  
> > > 
> > > This is being caused by an off-by-one error.  
> > > 
> > > Signed-off-by: Michael Neuling 
> > > ---
> > > I'm not sure this is the right fix, but without it, higher threads pull
> > > tasks off the lower threads, then the packing pulls them back down, and
> > > so on; tasks bounce around constantly.
> > 
> > It would help if you expanded on why/how it manages to get pulled up.
> > 
> > I can't immediately spot anything wrong with the patch, but then that
> > isn't my favourite piece of code either.. Suresh, any comments?
> > 
> 
> Sorry, I didn't pay much attention to this patchset. But based on the
> comments from Michael and a look at the patchset, it has SMT/MC
> implications. I will review it, run some tests, and get back in a day.
> 
> As far as this particular patch is concerned, the original code comes
> from Ingo's original CFS commit (dd41f596), and the hunk below pretty
> much explains what that change was about.
> 
> -   if (max_load - this_load >= busiest_load_per_task * imbn) {
> +   if (max_load - this_load + SCHED_LOAD_SCALE_FUZZ >=
> +   busiest_load_per_task * imbn) {
> 
> So the change proposed below will probably break what the above-mentioned
> commit was trying to achieve, which is: for fairness reasons we were
> bouncing the small extra load (the difference between max_load and
> this_load) around.

Actually, you can drop this patch.  

In the process of clarifying why it was needed for the changelog, I
discovered I don't actually need it.  

Sorry about that.

Mikey

> 
> > > ---
> > > 
> > >  kernel/sched_fair.c |2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > Index: linux-2.6-ozlabs/kernel/sched_fair.c
> > > ===================================================================
> > > --- linux-2.6-ozlabs.orig/kernel/sched_fair.c
> > > +++ linux-2.6-ozlabs/kernel/sched_fair.c
> > > @@ -2652,7 +2652,7 @@ static inline void fix_small_imbalance(s
> > >* SCHED_LOAD_SCALE;
> > >   scaled_busy_load_per_task /= sds->busiest->cpu_power;
> > >  
> > > - if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
> > > + if (sds->max_load - sds->this_load + scaled_busy_load_per_task >
> > >   (scaled_busy_load_per_task * imbn)) {
> > >   *imbalance = sds->busiest_load_per_task;
> > >   return;
> > 
> 
> thanks,
> suresh
> 
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH 5/5] sched: make fix_small_imbalance work with asymmetric packing

2010-04-13 Thread Suresh Siddha
On Tue, 2010-04-13 at 05:29 -0700, Peter Zijlstra wrote:
> On Fri, 2010-04-09 at 16:21 +1000, Michael Neuling wrote:
> > With the asymmetric packing infrastructure, fix_small_imbalance is
> > causing idle higher threads to pull tasks off lower threads.  
> > 
> > This is being caused by an off-by-one error.  
> > 
> > Signed-off-by: Michael Neuling 
> > ---
> > I'm not sure this is the right fix, but without it, higher threads pull
> > tasks off the lower threads, then the packing pulls them back down, and
> > so on; tasks bounce around constantly.
> 
> It would help if you expanded on why/how it manages to get pulled up.
> 
> I can't immediately spot anything wrong with the patch, but then that
> isn't my favourite piece of code either.. Suresh, any comments?
> 

Sorry, I didn't pay much attention to this patchset. But based on the
comments from Michael and a look at the patchset, it has SMT/MC
implications. I will review it, run some tests, and get back in a day.

As far as this particular patch is concerned, the original code comes
from Ingo's original CFS commit (dd41f596), and the hunk below pretty
much explains what that change was about.

-   if (max_load - this_load >= busiest_load_per_task * imbn) {
+   if (max_load - this_load + SCHED_LOAD_SCALE_FUZZ >=
+   busiest_load_per_task * imbn) {

So the change proposed below will probably break what the above-mentioned
commit was trying to achieve, which is: for fairness reasons we were
bouncing the small extra load (the difference between max_load and
this_load) around.
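
For illustration, here is a minimal user-space sketch (not kernel code) of
the fairness case described above. Every number in it is an assumption:
three nice-0 tasks of weight SCHED_LOAD_SCALE spread two-and-one across two
CPUs whose cpu_power also equals SCHED_LOAD_SCALE. With those values the
load delta plus one scaled task load lands exactly on
busiest_load_per_task * imbn, which is where ">=" and ">" disagree.

/*
 * Sketch only, assumed values throughout: three nice-0 tasks on two CPUs
 * of equal cpu_power (both taken as SCHED_LOAD_SCALE).  Shows that the
 * exact-equality case is what the ">=" in fix_small_imbalance() catches.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long max_load = 2048;		/* two tasks on the busiest CPU */
	unsigned long this_load = 1024;		/* one task on the local CPU */
	unsigned long busiest_load_per_task = 1024;
	unsigned long cpu_power = 1024;		/* assumed equal to SCHED_LOAD_SCALE */
	unsigned int imbn = 2;			/* busiest_load_per_task is not larger
						 * than this_load_per_task, so imbn
						 * keeps its default of 2 */

	unsigned long scaled = busiest_load_per_task * SCHED_LOAD_SCALE / cpu_power;
	unsigned long lhs = max_load - this_load + scaled;	/* 2048 */
	unsigned long rhs = scaled * imbn;			/* 2048 */

	printf("\">=\" reports a one-task imbalance: %d\n", lhs >= rhs);
	printf("\">\"  reports a one-task imbalance: %d\n", lhs > rhs);
	return 0;
}

With ">=" the one-task imbalance is reported and the odd task keeps being
handed back and forth, so both CPUs take turns running two tasks; with a
strict ">" this exact-equality case would instead fall through to the
cpu_power-based estimate further down in fix_small_imbalance().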

> > ---
> > 
> >  kernel/sched_fair.c |2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > Index: linux-2.6-ozlabs/kernel/sched_fair.c
> > ===================================================================
> > --- linux-2.6-ozlabs.orig/kernel/sched_fair.c
> > +++ linux-2.6-ozlabs/kernel/sched_fair.c
> > @@ -2652,7 +2652,7 @@ static inline void fix_small_imbalance(s
> >  * SCHED_LOAD_SCALE;
> > scaled_busy_load_per_task /= sds->busiest->cpu_power;
> >  
> > -   if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
> > +   if (sds->max_load - sds->this_load + scaled_busy_load_per_task >
> > (scaled_busy_load_per_task * imbn)) {
> > *imbalance = sds->busiest_load_per_task;
> > return;
> 

thanks,
suresh



Re: [PATCH 5/5] sched: make fix_small_imbalance work with asymmetric packing

2010-04-13 Thread Peter Zijlstra
On Fri, 2010-04-09 at 16:21 +1000, Michael Neuling wrote:
> With the asymmetric packing infrastructure, fix_small_imbalance is
> causing idle higher threads to pull tasks off lower threads.  
> 
> This is being caused by an off-by-one error.  
> 
> Signed-off-by: Michael Neuling 
> ---
> I'm not sure this is the right fix, but without it, higher threads pull
> tasks off the lower threads, then the packing pulls them back down, and
> so on; tasks bounce around constantly.

It would help if you expanded on why/how it manages to get pulled up.

I can't immediately spot anything wrong with the patch, but then that
isn't my favourite piece of code either.. Suresh, any comments?

> ---
> 
>  kernel/sched_fair.c |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: linux-2.6-ozlabs/kernel/sched_fair.c
> ===================================================================
> --- linux-2.6-ozlabs.orig/kernel/sched_fair.c
> +++ linux-2.6-ozlabs/kernel/sched_fair.c
> @@ -2652,7 +2652,7 @@ static inline void fix_small_imbalance(s
>* SCHED_LOAD_SCALE;
>   scaled_busy_load_per_task /= sds->busiest->cpu_power;
>  
> - if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
> + if (sds->max_load - sds->this_load + scaled_busy_load_per_task >
>   (scaled_busy_load_per_task * imbn)) {
>   *imbalance = sds->busiest_load_per_task;
>   return;



[PATCH 5/5] sched: make fix_small_imbalance work with asymmetric packing

2010-04-08 Thread Michael Neuling
With the asymmetric packing infrastructure, fix_small_imbalance is
causing idle higher threads to pull tasks off lower threads.  

This is being caused by an off-by-one error.  

Signed-off-by: Michael Neuling 
---
I'm not sure this is the right fix, but without it, higher threads pull
tasks off the lower threads, then the packing pulls them back down, and
so on; tasks bounce around constantly.

---

 kernel/sched_fair.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6-ozlabs/kernel/sched_fair.c
===================================================================
--- linux-2.6-ozlabs.orig/kernel/sched_fair.c
+++ linux-2.6-ozlabs/kernel/sched_fair.c
@@ -2652,7 +2652,7 @@ static inline void fix_small_imbalance(s
 * SCHED_LOAD_SCALE;
scaled_busy_load_per_task /= sds->busiest->cpu_power;
 
-   if (sds->max_load - sds->this_load + scaled_busy_load_per_task >=
+   if (sds->max_load - sds->this_load + scaled_busy_load_per_task >
(scaled_busy_load_per_task * imbn)) {
*imbalance = sds->busiest_load_per_task;
return;
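
For illustration, a user-space sketch (not kernel code) of the boundary case
the changelog describes, with made-up numbers: one nice-0 task of weight
SCHED_LOAD_SCALE on the lower SMT sibling, the higher sibling idle, and both
siblings assumed to report cpu_power == SCHED_LOAD_SCALE (real SMT groups
scale cpu_power, so this is only a simplification).

/*
 * Sketch only, assumed values throughout: one task on the low sibling,
 * the higher sibling idle.  Shows how ">=" and ">" decide differently
 * in fix_small_imbalance() for this exact-boundary case.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long max_load = 1024;		/* one nice-0 task on the low sibling */
	unsigned long this_load = 0;		/* higher sibling is idle */
	unsigned long busiest_load_per_task = 1024;
	unsigned long busiest_cpu_power = 1024;	/* assumed == SCHED_LOAD_SCALE */
	unsigned int imbn = 2;			/* local group idle, so imbn stays 2 */

	unsigned long scaled = busiest_load_per_task * SCHED_LOAD_SCALE
						/ busiest_cpu_power;

	/* ">=": equality holds, a full task's imbalance is reported and the
	 * idle higher thread pulls the task up, which asymmetric packing
	 * then pushes back down; that is the bouncing described above. */
	printf(">= : pull = %d\n",
	       max_load - this_load + scaled >= scaled * imbn);

	/* ">": the same numbers no longer trigger a pull. */
	printf(">  : pull = %d\n",
	       max_load - this_load + scaled > scaled * imbn);

	return 0;
}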