On Wed, Dec 20, 2017 at 06:38:14PM +0100, Juri Lelli wrote:
> On 20/12/17 16:30, Peter Zijlstra wrote:
> 
> [...]
> 
> > @@ -327,12 +331,7 @@ static unsigned int sugov_next_freq_shar
> >             if (delta_ns > TICK_NSEC) {
> >                     j_sg_cpu->iowait_boost = 0;
> >                     j_sg_cpu->iowait_boost_pending = false;
> > -                   j_sg_cpu->util_cfs = 0;
> > -                   if (j_sg_cpu->util_dl == 0)
> > -                           continue;
> >             }
> 
> This goes away because, with Brendan/Vincent's fix, we no longer need the
> workaround for the stale CFS util contribution from idle CPUs?

An easy fix would be something like the below, I suppose (I also folded in
a change from Viresh).

This way it completely ignores the demand from idle CPUs, which I suppose
is exactly what you want, no?

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index ab84d2261554..9736b537386a 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -315,8 +315,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
                unsigned long j_util, j_max;
                s64 delta_ns;
 
-               if (j_sg_cpu != sg_cpu)
-                       sugov_get_util(j_sg_cpu);
+               if (idle_cpu(j))
+                       continue;
 
                /*
                 * If the CFS CPU utilization was last updated before the
@@ -354,7 +354,6 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 
        raw_spin_lock(&sg_policy->update_lock);
 
-       sugov_get_util(sg_cpu);
        sugov_set_iowait_boost(sg_cpu, time, flags);
        sg_cpu->last_update = time;
 

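FWIW, with the above applied (and together with the hunk quoted at the top,
which drops the util_cfs/util_dl special case), the start of the per-CPU loop
in sugov_next_freq_shared() would look roughly like the sketch below. This is
only an illustration of the idea, not a tested patch; the per-CPU lookup and
the j_util/j_max aggregation that follow are assumed to stay as they are in
the current tree:

	for_each_cpu(j, policy->cpus) {
		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
		unsigned long j_util, j_max;
		s64 delta_ns;

		/* Idle CPUs contribute no demand at all; skip them. */
		if (idle_cpu(j))
			continue;

		/*
		 * If this CPU's state was last updated long before the
		 * previous frequency update, its iowait boost is most
		 * likely stale, so drop it.
		 */
		delta_ns = time - j_sg_cpu->last_update;
		if (delta_ns > TICK_NSEC) {
			j_sg_cpu->iowait_boost = 0;
			j_sg_cpu->iowait_boost_pending = false;
		}

		/* j_util/j_max aggregation continues unchanged from here. */
	}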