On Fri, May 10, 2013 at 11:17:27PM +0800, Alex Shi wrote:
> They are the base values in load balance; update them with the rq
> runnable load average, then the load balance will consider the runnable
> load avg naturally.
>
> We also tried to include the blocked_load_avg as cpu load in balancing,
> but that causes a kbuild/aim7/oltp benchmark performance drop.
>
> Signed-off-by: Alex Shi <alex....@intel.com>
> ---
>  kernel/sched/core.c | 16 ++++++++++++++--
>  kernel/sched/fair.c |  5 +++--
>  2 files changed, 17 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f1f9641..8ab37c3 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2528,9 +2528,14 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>  void update_idle_cpu_load(struct rq *this_rq)
>  {
>  	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
> -	unsigned long load = this_rq->load.weight;
> +	unsigned long load;
>  	unsigned long pending_updates;
>
> +#ifdef CONFIG_SMP
> +	load = this_rq->cfs.runnable_load_avg;
> +#else
> +	load = this_rq->load.weight;
> +#endif
>  	/*
>  	 * bail if there's load or we're actually up-to-date.
>  	 */
> @@ -2574,11 +2579,18 @@ void update_cpu_load_nohz(void)
>   */
>  static void update_cpu_load_active(struct rq *this_rq)
>  {
> +	unsigned long load;
> +
> +#ifdef CONFIG_SMP
> +	load = this_rq->cfs.runnable_load_avg;
> +#else
> +	load = this_rq->load.weight;
> +#endif
>  	/*
>  	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
This just smells like you want a helper function... :-) Something like
the sketch below.

Also, it doesn't apply anymore due to Paul Gortmaker moving some of this
stuff about.
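A minimal sketch of what such a helper might look like, assuming only
the fields the patch already uses; the name get_rq_runnable_load() is
hypothetical, not anything in the tree:

/*
 * Hypothetical helper: pick the per-entity tracked runnable load
 * average on SMP, and fall back to the raw queue weight on !SMP
 * builds, which don't track load averages.
 */
static inline unsigned long get_rq_runnable_load(struct rq *rq)
{
#ifdef CONFIG_SMP
	return rq->cfs.runnable_load_avg;
#else
	return rq->load.weight;
#endif
}

Then both update_idle_cpu_load() and update_cpu_load_active() lose the
duplicated #ifdef block and reduce to:

	unsigned long load = get_rq_runnable_load(this_rq);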