On Wed, 23 Jan 2019 at 09:26, Dietmar Eggemann wrote:
>
> On 1/16/19 10:43 AM, Vincent Guittot wrote:
>
> [...]
>
> > +static inline u64 rq_clock_pelt(struct rq *rq)
> > +{
>
> Doesn't this function need
>
>     lockdep_assert_held(&rq->lock);
>     assert_clock_updated(rq);
>
> like rq_clock() and rq_clock_task()? Later to support commit
> cb42c9a3ebbb "sched/core: Add debugging code to catch missing
> update_rq_clock() calls".

originally, it was repl[...]
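
For reference, the change Dietmar suggests would look roughly like the
below, following the pattern of rq_clock() and rq_clock_task() in
kernel/sched/sched.h. The body (rq->clock_pelt minus rq->lost_idle_time)
is taken from the patch under review; treat this as a sketch rather than
the final code:

    static inline u64 rq_clock_pelt(struct rq *rq)
    {
            /* Caller must hold the rq lock */
            lockdep_assert_held(&rq->lock);
            /* Catch a missing update_rq_clock() call (commit cb42c9a3ebbb) */
            assert_clock_updated(rq);

            return rq->clock_pelt - rq->lost_idle_time;
    }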
From the cover letter of Vincent's "sched/fair: update scale invariance
of PELT" series:

The current implementation of load tracking invariance scales the
contribution with the current frequency and uarch performance (only for
utilization) of the CPU. One main result of this formula is that the
figures are capped by the current capacity of the CPU. Another one is
that the load_avg is not invariant because it is not scaled with uarch.
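
As a rough illustration of that "scale the contribution" scheme: the
pre-series code frequency-scales the elapsed time for both load and
utilization, but applies the uarch (CPU) capacity scaling only to
utilization. The sketch below is a simplified stand-in, not the kernel's
exact code; scale_time_delta() is a made-up wrapper, while cap_scale(),
arch_scale_freq_capacity() and arch_scale_cpu_capacity() match the
helpers in kernel/sched/fair.c of that era:

    #define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)

    static u64 scale_time_delta(u64 delta, int cpu, bool is_util)
    {
            unsigned long scale_freq = arch_scale_freq_capacity(cpu);
            unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);

            /* Both load and utilization contributions are frequency scaled... */
            delta = cap_scale(delta, scale_freq);

            /*
             * ...but only utilization is also uarch scaled. This is why the
             * figures are capped by the current capacity, and why load_avg
             * is not invariant: it is never scaled with uarch.
             */
            if (is_util)
                    delta = cap_scale(delta, scale_cpu);

            return delta;
    }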