On 29/05/18 09:40, Quentin Perret wrote:
> Hi Vincent,
> 
> On Friday 25 May 2018 at 15:12:26 (+0200), Vincent Guittot wrote:
> > Now that we have both the dl class bandwidth requirement and the dl class
> > utilization, we can use the max of the two values when aggregating the
> > utilization of the CPU.
> > 
> > Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
> > ---
> >  kernel/sched/sched.h | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 4526ba6..0eb07a8 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2194,7 +2194,11 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
> >  #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
> >  static inline unsigned long cpu_util_dl(struct rq *rq)
> >  {
> > -   return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > +   unsigned long util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > +
> > +   util = max_t(unsigned long, util, READ_ONCE(rq->avg_dl.util_avg));
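
Side note, mostly for readers who don't have the fixed-point formats in
their head: below is a tiny standalone sketch of the arithmetic above. The
constants mirror the kernel's BW_SHIFT = 20 and SCHED_CAPACITY_SCALE = 1024,
but this is plain userspace C for illustration only, not part of the patch.

#include <stdio.h>

#define BW_SHIFT		20
#define SCHED_CAPACITY_SCALE	1024UL

/* Same computation as cpu_util_dl() above, outside the kernel. */
static unsigned long dl_util(unsigned long running_bw, unsigned long util_avg)
{
	/* running_bw is in 1/2^20 units; convert to capacity units (0..1024). */
	unsigned long util = (running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;

	/* Report whichever is higher: reserved bandwidth or the PELT signal. */
	return util > util_avg ? util : util_avg;
}

int main(void)
{
	unsigned long bw = (1UL << BW_SHIFT) / 4;	/* 25% reserved bandwidth */

	printf("%lu\n", dl_util(bw, 100));		/* 256: bandwidth wins */
	printf("%lu\n", dl_util(bw, 400));		/* 400: util_avg wins  */

	return 0;
}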
> 
> Would it make sense to use a UTIL_EST version of that signal here? I
> don't think that would make sense for the RT class with your patch-set
> since you only really use the blocked part of the signal for RT IIUC,
> but would that work for DL?

Well, UTIL_EST for DL looks pretty much like what we already do by computing
utilization based on dl.running_bw. That's why I was thinking of using
that as a starting point for the dl.util_avg decay phase.
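
To make that last idea a bit more concrete, a purely hypothetical sketch
(helper name and placement invented here, nothing like this exists in the
patch set): when the DL PELT signal enters its decay phase, it could be
seeded from the bandwidth-based value so the decay never starts below what
running_bw already accounts for.

static inline void dl_util_seed_decay(struct rq *rq)
{
	unsigned long bw_util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;

	/* Start decaying from the bandwidth-based value, not from below it. */
	if (READ_ONCE(rq->avg_dl.util_avg) < bw_util)
		WRITE_ONCE(rq->avg_dl.util_avg, bw_util);
}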
