On Sun, May 5, 2013 at 6:45 PM, Alex Shi <alex....@intel.com> wrote:
> Besides using the runnable load average in the background, move_tasks is
> also a key function in load balancing. We need to consider the runnable
> load average in it in order to make an apples-to-apples load comparison.
>
> Signed-off-by: Alex Shi <alex....@intel.com>
> ---
>  kernel/sched/fair.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0bf88e8..790e23d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3966,6 +3966,12 @@ static unsigned long task_h_load(struct task_struct *p);
>
>  static const unsigned int sched_nr_migrate_break = 32;
>
> +static unsigned long task_h_load_avg(struct task_struct *p)
> +{
> +	return div_u64(task_h_load(p) * (u64)p->se.avg.runnable_avg_sum,
> +		       p->se.avg.runnable_avg_period + 1);
Similarly, I think you also want to at least include blocked_load_avg here.

More fundamentally: I suspect that comparing these derived values against
an average taken over them will not give a representative imbalance
weight. While we should be no worse off than the present situation, we
could be doing much better. Consider that by not consuming
{runnable, blocked}_load_avg directly you are "hiding" the movement from
one load-balancer pass to the next.

> +}
> +
>  /*
>   * move_tasks tries to move up to imbalance weighted load from busiest to
>   * this_rq, as part of a balancing operation within domain "sd".
> @@ -4001,7 +4007,7 @@ static int move_tasks(struct lb_env *env)
>  		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>  			goto next;
>
> -		load = task_h_load(p);
> +		load = task_h_load_avg(p);
>
>  		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
>  			goto next;
> --
> 1.7.12
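To make the distinction concrete, here is a minimal standalone user-space
sketch (not kernel code; the struct fields mirror the kernel's struct
sched_avg, and the example numbers are invented) contrasting the patch's
derived value with consuming a tracked per-entity contribution directly:

/*
 * Two ways to weigh a task for move_tasks():
 *   (a) the patch: scale the instantaneous hierarchical load
 *       task_h_load(p) by the runnable fraction
 *       runnable_avg_sum / (runnable_avg_period + 1);
 *   (b) the alternative suggested above: consume the tracked
 *       per-entity contribution (load_avg_contrib in the PELT
 *       code) directly, so the amount moved matches what the
 *       balancer's {runnable,blocked}_load_avg sums were built from.
 */
#include <stdio.h>
#include <stdint.h>

struct sched_avg {
	uint32_t runnable_avg_sum;	/* decayed time spent runnable */
	uint32_t runnable_avg_period;	/* decayed total time tracked */
	unsigned long load_avg_contrib;	/* tracked weight * sum / period */
};

/* (a) derive an average from the instantaneous h_load */
static unsigned long h_load_scaled(unsigned long task_h_load,
				   const struct sched_avg *avg)
{
	/* the +1 guards against a zero period, as in the patch */
	return (uint64_t)task_h_load * avg->runnable_avg_sum /
	       (avg->runnable_avg_period + 1);
}

int main(void)
{
	/* a weight-1024 task that has been runnable ~50% of the time */
	struct sched_avg avg = {
		.runnable_avg_sum    = 23000,
		.runnable_avg_period = 46000,
		.load_avg_contrib    = 512,	/* what PELT tracked */
	};

	printf("derived: %lu\n", h_load_scaled(1024, &avg));	/* ~511 */
	printf("tracked: %lu\n", avg.load_avg_contrib);		/* 512 */
	return 0;
}

With these numbers the two paths agree, but only the tracked value is the
same quantity the source rq's load-average sums accumulated; moving that
amount (and decrementing it at the source) would keep the picture seen by
the next balancer pass consistent, rather than hiding the movement.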