When the SCHED_DEADLINE scheduling class increases the CPU utilization,
we should not wait for the rate limit, otherwise we may miss some
deadlines.
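
For context, the mechanism relied on here is the existing
need_freq_update flag: once it is set, the next call to
sugov_should_update_freq() returns true without comparing the elapsed
time against the rate limit. The following is a minimal standalone
sketch of that decision (simplified, illustrative names such as
should_update_freq and struct sg_policy; not the actual kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct sg_policy {
	uint64_t last_freq_update_time;	/* ns */
	uint64_t freq_update_delay_ns;	/* rate limit, ns */
	bool need_freq_update;		/* set when DL utilization grew */
};

static bool should_update_freq(const struct sg_policy *p, uint64_t now)
{
	if (p->need_freq_update)	/* bypass the rate limit */
		return true;

	return now - p->last_freq_update_time >= p->freq_update_delay_ns;
}

int main(void)
{
	struct sg_policy p = {
		.last_freq_update_time = 1000000,	/* 1 ms */
		.freq_update_delay_ns  = 10000000,	/* 10 ms */
		.need_freq_update      = false,
	};

	/* Still within the rate limit: no frequency update ... */
	printf("%d\n", should_update_freq(&p, 2000000));	/* prints 0 */

	/* ... unless DL utilization increased in the meantime. */
	p.need_freq_update = true;
	printf("%d\n", should_update_freq(&p, 2000000));	/* prints 1 */

	return 0;
}

In the patch below, ignore_dl_rate_limit() sets the real flag whenever
the DL utilization of the CPU has grown since it was last sampled.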

Tests using rt-app on Exynos5422 with up to 10 SCHED_DEADLINE tasks have
shown reductions of up to 10% in deadline misses with a negligible
increase in energy consumption (measured through Baylibre Cape).

Signed-off-by: Claudio Scordino <clau...@evidence.eu.com>
Acked-by: Viresh Kumar <viresh.ku...@linaro.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
CC: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
CC: Viresh Kumar <viresh.ku...@linaro.org>
CC: Patrick Bellasi <patrick.bell...@arm.com>
CC: Dietmar Eggemann <dietmar.eggem...@arm.com>
CC: Morten Rasmussen <morten.rasmus...@arm.com>
CC: Juri Lelli <juri.le...@redhat.com>
CC: Vincent Guittot <vincent.guit...@linaro.org>
CC: Todd Kjos <tk...@android.com>
CC: Joel Fernandes <joe...@google.com>
CC: linux...@vger.kernel.org
CC: linux-kernel@vger.kernel.org
---
Changes from v3:
 - Specific routine renamed as ignore_dl_rate_limit()
---
Changes from v2:
 - Rate limit ignored also in case of "fast switch"
 - Specific routine added
---
Changes from v1:
 - Logic moved from sugov_should_update_freq() to
   sugov_update_single()/_shared() to not duplicate data structures
 - Rate limit not ignored in case of "fast switch"
---
 kernel/sched/cpufreq_schedutil.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index feb5f89..2aeb1ca 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -257,6 +257,16 @@ static bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu)
 static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
 #endif /* CONFIG_NO_HZ_COMMON */
 
+/*
+ * Make sugov_should_update_freq() ignore the rate limit when DL
+ * has increased the utilization.
+ */
+static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu, struct sugov_policy *sg_policy)
+{
+       if (cpu_util_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->util_dl)
+               sg_policy->need_freq_update = true;
+}
+
 static void sugov_update_single(struct update_util_data *hook, u64 time,
                                unsigned int flags)
 {
@@ -270,6 +280,8 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
        sugov_set_iowait_boost(sg_cpu, time);
        sg_cpu->last_update = time;
 
+       ignore_dl_rate_limit(sg_cpu, sg_policy);
+
        if (!sugov_should_update_freq(sg_policy, time))
                return;
 
@@ -351,6 +363,8 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
 
        raw_spin_lock(&sg_policy->update_lock);
 
+       ignore_dl_rate_limit(sg_cpu, sg_policy);
+
        sugov_get_util(sg_cpu);
        sg_cpu->flags = flags;
 
-- 
2.7.4
