On Mon, Nov 23, 2020 at 10:26:13AM +0100, Dietmar Eggemann wrote:
> On 20/11/2020 08:55, Peter Zijlstra wrote:
> 
> [...]
> 
> > PELT (Per Entity Load Tracking)
> > -------------------------------
> 
> [...]
> 
> > Using this we track 2 key metrics: 'running' and 'runnable'. 'Running'
> > reflects the time an entity spends on the CPU, while 'runnable' reflects the
> > time an entity spends on the runqueue. When there is only a single task these
> > two metrics are the same, but once there is contention for the CPU 'running'
> > will decrease to reflect the fraction of time each task spends on the CPU
> > while 'runnable' will increase to reflect the amount of contention.
> 
> People might find it confusing to map 'running and 'runnable' into the 3
> PELT signals (load_avg, runnable_avg and util_avg) being used in the
> scheduler ... with load_avg being 'runnable' and 'weight' based.

Yeah, but that's for another document, I suppose. Much of pelt.c uses
runnable. Also, the comment that goes with struct sched_avg should
explain this.
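
Something like the below, perhaps; a stand-alone toy of the geometric
series (floating point, not the kernel's fixed-point code; only the 1ms
period and ~32ms half-life are actual PELT parameters) showing how
'running' and 'runnable' diverge once two always-runnable tasks share a
CPU:

/* Toy PELT: decaying average over 1ms periods, half-life of 32 periods. */
#include <math.h>
#include <stdio.h>

#define HALFLIFE	32	/* periods until a contribution halves */

int main(void)
{
	double y = pow(0.5, 1.0 / HALFLIFE);	/* per-period decay factor */
	double running = 0.0, runnable = 0.0;
	int p;

	/*
	 * Two always-runnable tasks share one CPU: each task runs every
	 * other period but sits on the runqueue the whole time.
	 */
	for (p = 0; p < 1000; p++) {
		running  = running  * y + (p % 2 ? 0.0 : 1.0);
		runnable = runnable * y + 1.0;
	}

	/* Normalize against the series maximum 1/(1-y). */
	printf("running  ~ %.2f\n", running  * (1.0 - y));	/* roughly 0.5 */
	printf("runnable ~ %.2f\n", runnable * (1.0 - y));	/* roughly 1.0 */

	return 0;
}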

> > For more detail see: kernel/sched/pelt.c
> > 
> > 
> > Frequency- / Heterogeneous Invariance
> > -------------------------------------
> 
> We call 'Heterogeneous Invariance' CPU invariance in chapter 2.3 of
> Documentation/scheduler/sched-capacity.rst.
> 
> [...]

Fair enough; I've renamed it to match.

> > For more detail see:
> > 
> >  - kernel/sched/pelt.h:update_rq_clock_pelt()
> >  - arch/x86/kernel/smpboot.c:"APERF/MPERF frequency ratio computation."
> 
> drivers/base/arch_topology.c:"f_cur/f_max ratio computation".

I can't seem to find that in any tree near me (I tried tip/master and
next/master).
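
For illustration, what the two scale factors do to the PELT clock delta;
a toy sketch only (the 1024 fixed point mimics SCHED_CAPACITY_SCALE, the
per-CPU ratios below are made up, and the real thing is
update_rq_clock_pelt()):

/* Toy invariance scaling of a wall-clock delta, 1024-based fixed point. */
#include <stdio.h>

#define SCALE	1024

/* Hypothetical per-CPU ratios, normally supplied by the architecture. */
static unsigned long freq_ratio = 512;	/* f_cur/f_max        = 0.50 */
static unsigned long cpu_ratio  = 768;	/* this CPU / big CPU = 0.75 */

static unsigned long scale_delta(unsigned long delta_ns)
{
	delta_ns = delta_ns * freq_ratio / SCALE;	/* frequency invariance */
	delta_ns = delta_ns * cpu_ratio  / SCALE;	/* CPU invariance */
	return delta_ns;
}

int main(void)
{
	/* 4ms of wall-clock running time on this CPU ... */
	unsigned long delta = 4000000;

	/* ... accrues as only 1.5ms of 'running' at full capacity. */
	printf("scaled delta: %lu ns\n", scale_delta(delta));
	return 0;
}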

> > UTIL_EST / UTIL_EST_FASTUP
> > --------------------------
> 
> [...]
> 
> >   util_est := \Sum_t max( t_running, t_util_est_ewma )
> > 
> > For more detail see: kernel/sched/fair.h:util_est_dequeue()
> 
> s/fair.h/fair.c
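
While we're at it, the formula above in (toy) code; floating point, and
the 1/4 EWMA weight per dequeue is an assumption here, not necessarily
what fair.c does:

/* Toy util_est: per task, max(last running util, slow-moving EWMA). */
#include <stdio.h>

struct toy_task {
	double running;		/* utilization sampled at last dequeue */
	double ewma;		/* exponentially weighted moving average */
};

/* Assumed EWMA weight of 1/4 per dequeue. */
static void util_est_update(struct toy_task *t, double running)
{
	t->running = running;
	t->ewma += (running - t->ewma) / 4.0;
}

/* util_est := \Sum_t max( t_running, t_util_est_ewma ) */
static double rq_util_est(struct toy_task *tasks, int nr)
{
	double sum = 0.0;
	int i;

	for (i = 0; i < nr; i++)
		sum += tasks[i].running > tasks[i].ewma ?
		       tasks[i].running : tasks[i].ewma;
	return sum;
}

int main(void)
{
	struct toy_task tasks[2] = { { 0.6, 0.6 }, { 0.1, 0.4 } };

	/* A task that just had one unusually quiet activation ... */
	util_est_update(&tasks[1], 0.1);

	/* ... still contributes close to its historical utilization. */
	printf("util_est = %.2f\n", rq_util_est(tasks, 2));
	return 0;
}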
> 
> > UCLAMP
> > ------
> > 
> > It is possible to set effective u_min and u_max clamps on each task; the
> 
> s/on each task/on each CFS or RT task

Thanks!
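
The clamp itself is then just a min/max bound on the utilization signal;
toy version again, ignoring the runqueue-level aggregation of the clamps:

/* Toy uclamp: bound a task's apparent utilization to [u_min, u_max]. */
#include <stdio.h>

static double uclamp(double util, double u_min, double u_max)
{
	if (util < u_min)
		return u_min;	/* appear at least this big */
	if (util > u_max)
		return u_max;	/* never ask for more than this */
	return util;
}

int main(void)
{
	/* A small but latency-sensitive task, boosted via u_min = 0.5. */
	printf("%.2f\n", uclamp(0.15, 0.5, 1.0));	/* -> 0.50 */

	/* A heavy background task, capped via u_max = 0.3. */
	printf("%.2f\n", uclamp(0.90, 0.0, 0.3));	/* -> 0.30 */

	return 0;
}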
