The cpu_load decays over time according to the past cpu load of the rq. The
sched_avg also decays tasks' load over time. So we now have two kinds of decay
for cpu load. That is redundant, and the extra decay calculation increases
system overhead. This patchset tries to remove the cpu_load decay.

There are 5 load_idx values used for cpu_load in sched_domain. busy_idx and
idle_idx are usually non-zero, but newidle_idx, wake_idx and forkexec_idx are
all zero on every arch. The first patch takes a shortcut to remove the
cpu_load decay; it is just a one-line change. The following patches then clean
up the code affected by this change.
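
For reference, this is a simplified user-space model (not the kernel source)
of how a non-zero load_idx biases the source_load()/target_load() pair in
kernel/sched/fair.c: with idx == 0 both reduce to weighted_cpuload(), which is
what the shortcut in patch 01 relies on. The struct layout, values and the
omission of the LB_BIAS feature check are illustrative simplifications.

#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

struct rq {
	unsigned long cpu_load[CPU_LOAD_IDX_MAX];	/* decayed history */
	unsigned long load_weight;			/* instantaneous load */
};

static unsigned long weighted_cpuload(struct rq *rq)
{
	return rq->load_weight;
}

/* Low (conservative) estimate used for the source of a migration. */
static unsigned long source_load(struct rq *rq, int idx)
{
	unsigned long total = weighted_cpuload(rq);

	if (idx == 0)
		return total;
	return total < rq->cpu_load[idx - 1] ? total : rq->cpu_load[idx - 1];
}

/* High (conservative) estimate used for the target of a migration. */
static unsigned long target_load(struct rq *rq, int idx)
{
	unsigned long total = weighted_cpuload(rq);

	if (idx == 0)
		return total;
	return total > rq->cpu_load[idx - 1] ? total : rq->cpu_load[idx - 1];
}

int main(void)
{
	struct rq rq = { .cpu_load = { 900, 800, 700, 600, 500 },
			 .load_weight = 750 };

	printf("idx=0: src=%lu dst=%lu (both == weighted_cpuload)\n",
	       source_load(&rq, 0), target_load(&rq, 0));
	printf("idx=2: src=%lu dst=%lu (biased by cpu_load[1])\n",
	       source_load(&rq, 2), target_load(&rq, 2));
	return 0;
}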


V4,
1, rebase on latest tip/master
2, replace target_load with biased_load, as Morten suggested

V3,
1, correct the wake_affine bias. Thanks to Morten for the reminder!
2, replace source_load with weighted_cpuload for a more meaningful function name.

V2,
1, This version does some tuning on the load bias of the target load.
2, Goes further to remove cpu_load from the rq.
3, Reverts the patch 'Limit sd->*_idx range on sysctl' since it is no longer needed.

Any testing/comments are appreciated.

This patchset is rebased on the latest tip/master.
The git tree for this patchset is at:
 g...@github.com:alexshi/power-scheduling.git noload

Thanks
Alex

 [PATCH 01/11] sched: shortcut to remove load_idx
 [PATCH 02/11] sched: remove rq->cpu_load[load_idx] array
 [PATCH 03/11] sched: clean up cpu_load update
 [PATCH 04/11] sched: unify imbalance bias for target group
 [PATCH 05/11] sched: rewrite update_cpu_load_nohz
 [PATCH 06/11] sched: clean up source_load/target_load
 [PATCH 07/11] sched: replace source_load by weighted_cpuload
 [PATCH 08/11] sched: replace target_load by biased_load
 [PATCH 09/11] sched: remove rq->cpu_load and rq->nr_load_updates
 [PATCH 10/11] sched: rename update_*_cpu_load
 [PATCH 11/11] sched: clean up task_hot function