On Fri, Jun 07, 2013 at 03:20:52PM +0800, Alex Shi wrote:
> blocked_load_avg is sometimes too heavy and far bigger than the runnable
> load avg, which makes the load balancer take wrong decisions. So remove it.
> 
> Changlong tested this patch and found that the ltp cgroup stress test gets
> better performance: https://lkml.org/lkml/2013/5/23/65
> ---
>       3.10-rc1          patch1-7          patch1-8
>   duration=764      duration=754      duration=750
>   duration=764      duration=754      duration=751
>   duration=763      duration=755      duration=751
> 
> duration is the run time of the test in seconds.
> ---
> 
> And Jason also tested this patchset on his 8-socket machine:
> https://lkml.org/lkml/2013/5/29/673
> ---
> When using a 3.10-rc2 tip kernel with patches 1-8, there was about a 40%
> improvement in performance of the workload compared to when using the
> vanilla 3.10-rc2 tip kernel with no patches. When using a 3.10-rc2 tip
> kernel with just patches 1-7, the performance improvement of the
> workload over the vanilla 3.10-rc2 tip kernel was about 25%.
> ---
> 
> Signed-off-by: Alex Shi <[email protected]>
> Tested-by: Changlong Xie <[email protected]>
> Tested-by: Jason Low <[email protected]>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3aa1dc0..985d47e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1358,7 +1358,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
>  	struct task_group *tg = cfs_rq->tg;
>  	s64 tg_contrib;
> 
> -	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
> +	tg_contrib = cfs_rq->runnable_load_avg;
>  	tg_contrib -= cfs_rq->tg_load_contrib;
> 
>  	if (force_update || abs64(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
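
For anyone skimming the archive, below is a small user-space sketch of the skew the changelog describes. It is not kernel code: the struct fields only mirror the names in struct cfs_rq, the numbers are invented for illustration, and only the two contribution expressions mirror the before/after of the patch.

/*
 * Toy illustration, NOT kernel code: how a large blocked_load_avg can
 * dwarf the runnable load when both feed the group contribution.
 * Field names mirror struct cfs_rq; the values are made up.
 */
#include <stdio.h>
#include <stdint.h>

struct cfs_rq_sample {
	uint64_t runnable_load_avg;	/* load of currently runnable tasks */
	uint64_t blocked_load_avg;	/* decayed load of sleeping tasks   */
};

int main(void)
{
	/* a group with one runnable task and several recently-slept ones */
	struct cfs_rq_sample rq = {
		.runnable_load_avg = 1024,
		.blocked_load_avg  = 8192,
	};

	/* before the patch: blocked load is counted into the contribution */
	uint64_t contrib_before = rq.runnable_load_avg + rq.blocked_load_avg;
	/* after the patch: only the runnable load is counted */
	uint64_t contrib_after  = rq.runnable_load_avg;

	printf("tg_contrib with blocked load:    %llu\n",
	       (unsigned long long)contrib_before);
	printf("tg_contrib without blocked load: %llu\n",
	       (unsigned long long)contrib_after);
	return 0;
}

With these made-up numbers, counting blocked load inflates the group's contribution roughly 9x over what is actually runnable, which is the kind of skew the changelog blames for bad balance decisions.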
PJT ping! ;-)

