On 12/05/15 20:39, Morten Rasmussen wrote:
> Let available compute capacity and estimated energy impact select
> wake-up target cpu when energy-aware scheduling is enabled and the
> system in not over-utilized (above the tipping point).
>
> energy_aware_wake_cpu() attempts to find group of cpus with sufficient
> compute capacity to accommodate the task and find a cpu with enough spare
> capacity to handle the task within that group. Preference is given to
> cpus with enough spare capacity at the current OPP. Finally, the energy
> impact of the new target and the previous task cpu is compared to select
> the wake-up target cpu.
>
> cc: Ingo Molnar <mi...@redhat.com>
> cc: Peter Zijlstra <pet...@infradead.org>
>
> Signed-off-by: Morten Rasmussen <morten.rasmus...@arm.com>
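
The selection logic described above can be pictured with a small
standalone userspace sketch (the two-group topology, the numbers and the
energy_cost() helper are made up purely for illustration; this is not
the code from the patch):

/*
 * Userspace-only sketch of the described heuristic: pick a group with
 * sufficient capacity, pick the cpu with the most spare capacity in it,
 * then only migrate if the estimated energy cost beats the previous cpu.
 * Topology, utilization numbers and energy_cost() are invented here.
 */
#include <stdio.h>

#define NR_CPUS		4
#define NR_GROUPS	2
#define CPUS_PER_GROUP	(NR_CPUS / NR_GROUPS)

static const int cpu_capacity[NR_CPUS] = { 430, 430, 1024, 1024 }; /* little, big */
static const int cpu_util[NR_CPUS]     = { 100, 400,  300,  900 };

/* Toy stand-in for the energy model: pretend big cpus cost three times more. */
static int energy_cost(int cpu, int extra_util)
{
	return (cpu_util[cpu] + extra_util) * (cpu_capacity[cpu] >= 1024 ? 3 : 1);
}

static int energy_aware_wake_cpu_sketch(int task_util, int prev_cpu)
{
	int group = -1, target = -1, best_spare = -1;

	/* 1) Pick the first group whose cpus can accommodate the task. */
	for (int g = 0; g < NR_GROUPS; g++) {
		if (cpu_capacity[g * CPUS_PER_GROUP] >= task_util) {
			group = g;
			break;
		}
	}
	if (group < 0)
		return prev_cpu;	/* nothing fits, stay where we are */

	/* 2) Within that group, prefer the cpu with the most spare capacity. */
	for (int c = group * CPUS_PER_GROUP; c < (group + 1) * CPUS_PER_GROUP; c++) {
		int spare = cpu_capacity[c] - cpu_util[c];

		if (spare >= task_util && spare > best_spare) {
			best_spare = spare;
			target = c;
		}
	}
	if (target < 0)
		return prev_cpu;

	/* 3) Only move the task if the new target is estimated to be cheaper. */
	return energy_cost(target, task_util) < energy_cost(prev_cpu, task_util) ?
	       target : prev_cpu;
}

int main(void)
{
	printf("wake-up target: cpu%d\n", energy_aware_wake_cpu_sketch(150, 3));
	return 0;
}

With these made-up numbers the little group has enough spare capacity
and a lower estimated cost, so the task is moved off its previous big cpu.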
[...]

>  /*
>   * select_task_rq_fair: Select target runqueue for the waking task in domains
>   * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> @@ -5446,7 +5526,10 @@ select_task_rq_fair(struct task_struct *p, int
> prev_cpu, int sd_flag, int wake_f
>  		prev_cpu = cpu;
>  
>  	if (sd_flag & SD_BALANCE_WAKE && want_sibling) {
> -		new_cpu = select_idle_sibling(p, prev_cpu);
> +		if (energy_aware() && !cpu_rq(cpu)->rd->overutilized)
> +			new_cpu = energy_aware_wake_cpu(p);

If you run RFCv4 on an X86 system w/o an energy model, you get a
'BUG: unable to handle kernel paging request at ...' after you've
enabled energy awareness (echo ENERGY_AWARE >
/sys/kernel/debug/sched_features).

This is related to the fact that cpumask functions like
cpumask_test_cpu (e.g. later in select_task_rq()) can't deal with cpu
set to -1. If you enable CONFIG_DEBUG_PER_CPU_MAPS, you get the
following warning in this case:

WARNING: CPU: 0 PID: 0 at include/linux/cpumask.h:117 cpumask_check.part.79+0x1f/0x30()

We also get the warning on ARM (w/o an energy model), but my TC2 system
does not crash like the X86 box.

Shouldn't we return prev_cpu in case sd_ea is NULL, just as
select_idle_sibling() does when prev_cpu is idle?

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f5897a021f23..8a014fdd6e76 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5394,7 +5394,7 @@ static int select_idle_sibling(struct task_struct *p, int target)
 	return target;
 }
 
-static int energy_aware_wake_cpu(struct task_struct *p)
+static int energy_aware_wake_cpu(struct task_struct *p, int target)
 {
 	struct sched_domain *sd;
 	struct sched_group *sg, *sg_target;
@@ -5405,7 +5405,7 @@ static int energy_aware_wake_cpu(struct task_struct *p)
 	sd = rcu_dereference(per_cpu(sd_ea, task_cpu(p)));
 
 	if (!sd)
-		return -1;
+		return target;
 
 	sg = sd->groups;
 	sg_target = sg;
@@ -5527,7 +5527,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 	if (sd_flag & SD_BALANCE_WAKE && want_sibling) {
 		if (energy_aware() && !cpu_rq(cpu)->rd->overutilized)
-			new_cpu = energy_aware_wake_cpu(p);
+			new_cpu = energy_aware_wake_cpu(p, prev_cpu);
 		else
 			new_cpu = select_idle_sibling(p, prev_cpu);
 		goto unlock;

> +		else
> +			new_cpu = select_idle_sibling(p, prev_cpu);
>  		goto unlock;
>  	}
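
To make the failure mode a bit more concrete, here is a plain userspace
analogue of the -1 problem (not kernel code; cpu_online_sketch and the
checked/unchecked helpers are invented names). The checked variant only
warns, much like CONFIG_DEBUG_PER_CPU_MAPS, while the unchecked one
would read outside its array, which is roughly what the paging BUG
corresponds to:

/*
 * Userspace sketch: a cpu number is ultimately used as an index into a
 * per-cpu structure, so a negative cpu number means an out-of-bounds
 * access unless somebody checks it first.
 */
#include <stdio.h>

#define SKETCH_NR_CPUS	8

/* one byte per cpu keeps the indexing obvious */
static const unsigned char cpu_online_sketch[SKETCH_NR_CPUS] = {
	1, 1, 1, 1, 0, 0, 0, 0
};

/* Unchecked lookup: with cpu == -1 this would read cpu_online_sketch[-1]. */
static int cpu_online_unchecked(int cpu)
{
	return cpu_online_sketch[cpu];
}

/* Checked lookup, mimicking the CONFIG_DEBUG_PER_CPU_MAPS-style warning. */
static int cpu_online_checked(int cpu)
{
	if (cpu < 0 || cpu >= SKETCH_NR_CPUS) {
		fprintf(stderr, "WARNING: invalid cpu number %d\n", cpu);
		return 0;
	}
	return cpu_online_unchecked(cpu);
}

int main(void)
{
	/* -1 is what energy_aware_wake_cpu() returns when sd_ea is NULL. */
	int new_cpu = -1;

	/* The checked path only warns; the unchecked path would read out
	 * of bounds. */
	printf("cpu %d online: %d\n", new_cpu, cpu_online_checked(new_cpu));
	return 0;
}

Either way, returning prev_cpu (or the new target parameter) instead of
-1, as in the diff above, keeps the cpu number valid so neither path is
ever hit with a bogus index.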