On 31 July 2014 21:16, Yuyang Du wrote:
> Hi Vincent,
>
> On Thu, Jul 31, 2014 at 11:56:13AM +0200, Vincent Guittot wrote:
>>
>> load_sum is now the average runnable time before being weighted
>
> So when the weight changes, load_avg will completely use the new weight. My
> two cents:
>
> 1) Task doe
Hi Vincent,
On Thu, Jul 31, 2014 at 11:56:13AM +0200, Vincent Guittot wrote:
>
> load_sum is now the average runnable time before being weighted
So when the weight changes, load_avg will completely use the new weight. My
two cents:
1) A task does not change its weight much, so it is practically OK
2)
Resend with a correct subject
Hi Yuyang,
Does something like the patch below, to be applied on top of your patchset, seem
like a reasonable add-on?
It adds one new usage_sum statistic, which I use to detect the overload of a rq
in my patchset that reworks cpu_power and removes capacity_f
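For readers following along, the usage_sum idea can be sketched roughly as
follows (a user-space illustration with made-up names, not the actual add-on
patch): track the time an entity actually runs in addition to the time it is
merely runnable, and flag a runqueue as overloaded when that running time
approaches the whole elapsed window.

/*
 * Illustrative sketch only: runnable time alone cannot distinguish "many
 * tasks taking turns" from "CPU saturated"; a usage statistic close to the
 * elapsed time can.
 */
#include <stdbool.h>
#include <stdint.h>

struct toy_avg {
	uint64_t runnable_sum;	/* time spent runnable (queued or running) */
	uint64_t usage_sum;	/* time spent actually running on the CPU */
};

static void toy_account(struct toy_avg *a, uint64_t runnable_us,
			uint64_t running_us)
{
	a->runnable_sum += runnable_us;
	a->usage_sum += running_us;
}

/* Treat the rq as overloaded once usage reaches ~99% of the elapsed window. */
static bool toy_rq_overloaded(const struct toy_avg *a, uint64_t elapsed_us)
{
	return a->usage_sum >= elapsed_us - elapsed_us / 100;
}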
On Wed, Jul 30, 2014 at 10:30:08AM +0200, Peter Zijlstra wrote:
> >
> > Isn't the entire effort, starting from PJT and Ben up to now, to soften the
> > extremely dynamic changes (runnable or not, weight change, etc.)? Assume a
> > task does not change weight much, but a group entity does, as Pet
On Wed, Jul 30, 2014 at 06:27:52AM +0800, Yuyang Du wrote:
> On Tue, Jul 29, 2014 at 03:17:29PM +0200, Vincent Guittot wrote:
> > >>
> > >> IMHO, we should apply the same policy as the one I mentioned for
> > >> tasks. So the load_avg of an entity or a cfs_rq will not be disturbed
> > >> by an old
On Tue, Jul 29, 2014 at 03:35:10PM +0200, Peter Zijlstra wrote:
>
> Does not compute, sorry. How would delaying the effect of migrations
> help?
>
> Suppose we have 2 cpus and 6 tasks. cpu0 has 2 tasks, cpu1 has 4 tasks.
> The group weights are resp. 341 and 682. We compute we have an imbalance
>
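Those two weights follow from the usual group-share split, assuming the
group's shares are the default 1024 and all six tasks have equal weight: each
CPU's group entity gets the shares scaled by that CPU's fraction of the
group's tasks. A quick check of the arithmetic:

/* Reproduces the 341/682 split quoted above (integer division, as for tg shares). */
#include <stdio.h>

int main(void)
{
	unsigned long shares = 1024, total_tasks = 6;

	printf("cpu0: %lu\n", shares * 2 / total_tasks);	/* 341 */
	printf("cpu1: %lu\n", shares * 4 / total_tasks);	/* 682 */
	return 0;
}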
On Tue, Jul 29, 2014 at 03:17:29PM +0200, Vincent Guittot wrote:
> >>
> >> IMHO, we should apply the same policy as the one I mentioned for
> >> tasks. So the load_avg of an entity or a cfs_rq will not be disturbed
> >> by an old but no longer valid weight
> >>
> >
> > Well, I see your point. But th
On Tue, Jul 29, 2014 at 03:35:10PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 29, 2014 at 09:53:44AM +0800, Yuyang Du wrote:
> > On Tue, Jul 29, 2014 at 11:39:11AM +0200, Peter Zijlstra wrote:
> > > > > For task, assuming its load.weight does not change much, yes, we can.
> > > > > But in theory,
On Tue, Jul 29, 2014 at 09:53:44AM +0800, Yuyang Du wrote:
> On Tue, Jul 29, 2014 at 11:39:11AM +0200, Peter Zijlstra wrote:
> > > > For task, assuming its load.weight does not change much, yes, we can.
> > > > But in theory, task's
> > >
> > > I would even say that the load_avg of a task should
On 29 July 2014 03:43, Yuyang Du wrote:
> On Tue, Jul 29, 2014 at 11:12:37AM +0200, Vincent Guittot wrote:
>> >>
>> >> Do you really need to have *w for computing the load_sum? Can't you
>> >> use it only when computing the load_avg?
>> >>
>> >> sa->load_avg = div_u64(sa->load_sum * w , LOAD_AVG
On Tue, Jul 29, 2014 at 09:09:45AM +0800, Yuyang Du wrote:
> > > +#define subtract_until_zero(minuend, subtrahend) \
> > > + (subtrahend < minuend ? minuend - subtrahend : 0)
> >
> > WTH is a minuend or subtrahend? Are you a wordsmith in your spare time
> > and like to make up your own words?
> >
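Minuend and subtrahend are the standard names for the two operands of a
subtraction; the macro is simply a subtraction clamped at zero. As a sketch,
the same thing written as a helper, with a name chosen purely for
illustration:

/* Saturating subtraction: never lets the accumulator go below zero. */
static inline unsigned long sub_clamped(unsigned long acc, unsigned long val)
{
	return acc > val ? acc - val : 0;
}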
On Tue, Jul 29, 2014 at 08:56:41AM +0800, Yuyang Du wrote:
> On Mon, Jul 28, 2014 at 12:48:37PM +0200, Peter Zijlstra wrote:
> > > +static __always_inline u64 decay_load(u64 val, u64 n)
> > > +{
> > > + if (likely(val <= UINT_MAX))
> > > + val = decay_load32(val, n);
> > > + else {
> > > +
On Tue, Jul 29, 2014 at 11:39:11AM +0200, Peter Zijlstra wrote:
> > > For task, assuming its load.weight does not change much, yes, we can. But
> > > in theory, task's
> >
> > I would even say that the load_avg of a task should not be impacted by
> > an old priority value. Once, the priority of a
On Tue, Jul 29, 2014 at 11:12:37AM +0200, Vincent Guittot wrote:
> >>
> >> Do you really need to have *w for computing the load_sum? Can't you
> >> use it only when computing the load_avg?
> >>
> >> sa->load_avg = div_u64(sa->load_sum * w , LOAD_AVG_MAX)
> >>
> >
> > For task, assuming its load.w
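Vincent's suggestion amounts to keeping load_sum as pure decayed runnable time
and folding the weight in only when load_avg is produced, so a weight change
never has to rescale the accumulated history. A minimal user-space sketch of
that shape (not the actual patch; LOAD_AVG_MAX is the maximum value the
decayed sum can reach):

#include <stdint.h>

#define LOAD_AVG_MAX	47742	/* maximum attainable decayed sum */

struct toy_sched_avg {
	uint64_t load_sum;	/* decayed runnable time, unweighted */
	unsigned long load_avg;	/* weighted average, recomputed on demand */
};

/*
 * A priority change only alters w for the next computation; load_sum keeps
 * its accumulated history untouched.
 */
static void toy_update_load_avg(struct toy_sched_avg *sa, unsigned long w)
{
	sa->load_avg = (unsigned long)(sa->load_sum * w / LOAD_AVG_MAX);
}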
On Tue, Jul 29, 2014 at 11:12:37AM +0200, Vincent Guittot wrote:
> On 27 July 2014 19:36, Yuyang Du wrote:
> > Hi Vincent,
> >
> > On Fri, Jul 18, 2014 at 11:43:00AM +0200, Vincent Guittot wrote:
> >> > @@ -2291,23 +2299,24 @@ static __always_inline int
> >> > __update_entity_runnable_avg(u64 now
On Mon, Jul 28, 2014 at 07:19:09PM +0200, Peter Zijlstra wrote:
> > > And here we try and make good on that assumption. The thing I worry
> > > about is what happens if the machine is entirely idle...
> > >
> > > What guarantees a semi up-to-date cfs_rq->avg.last_update_time?
> >
> > update_block
On 27 July 2014 19:36, Yuyang Du wrote:
> Hi Vincent,
>
> On Fri, Jul 18, 2014 at 11:43:00AM +0200, Vincent Guittot wrote:
>> > @@ -2291,23 +2299,24 @@ static __always_inline int
>> > __update_entity_runnable_avg(u64 now,
>> > delta >>= 10;
>> > if (!delta)
>> > re
On Mon, Jul 28, 2014 at 01:39:39PM +0200, Peter Zijlstra wrote:
> > -static inline void __update_group_entity_contrib(struct sched_entity *se)
> > +static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
> > {
> > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> >
> >
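The pattern in this hunk is delta propagation: each cfs_rq remembers what it
last contributed to the task group's global load_avg and publishes only the
difference, so the shared atomic is touched rarely. A self-contained C11
sketch of that pattern (the 1/64 threshold is an illustrative assumption, not
necessarily what the patch uses):

#include <stdatomic.h>
#include <stdlib.h>

struct toy_tg {
	atomic_long load_avg;		/* group-wide sum, shared across CPUs */
};

struct toy_cfs_rq {
	struct toy_tg *tg;
	long load_avg;			/* this CPU's current average */
	long tg_load_avg_contrib;	/* what we last folded into tg->load_avg */
};

static void toy_update_tg_load_avg(struct toy_cfs_rq *cfs_rq)
{
	long delta = cfs_rq->load_avg - cfs_rq->tg_load_avg_contrib;

	/* Only publish when the drift is noticeable (threshold illustrative). */
	if (labs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_fetch_add(&cfs_rq->tg->load_avg, delta);
		cfs_rq->tg_load_avg_contrib = cfs_rq->load_avg;
	}
}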
On Mon, Jul 28, 2014 at 12:48:37PM +0200, Peter Zijlstra wrote:
> > +static __always_inline u64 decay_load(u64 val, u64 n)
> > +{
> > + if (likely(val <= UINT_MAX))
> > + val = decay_load32(val, n);
> > + else {
> > + val *= (u32)decay_load32(1 << 15, n);
> > + val
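The decay being computed here is geometric: after n periods a contribution is
scaled by y^n, with y chosen so that y^32 = 1/2, and the 64-bit branch avoids
overflow by multiplying with a 2^15-scaled fixed-point factor. A compact
user-space sketch of the same idea, using floating point only to keep it short
(the kernel uses precomputed fixed-point tables):

#include <math.h>
#include <stdint.h>

/* Scale val by y^n, where y^32 == 1/2 (a contribution halves every 32 periods). */
static uint64_t toy_decay_load(uint64_t val, unsigned int n)
{
	unsigned int halvings = n / 32;

	if (halvings >= 64)
		return 0;
	val >>= halvings;		/* whole halvings are just right shifts */

	/* Remaining partial decay: multiply by y^(n % 32), y = 2^(-1/32). */
	return (uint64_t)(val * pow(0.5, (n % 32) / 32.0));
}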
On Mon, Jul 28, 2014 at 09:58:19AM -0700, bseg...@google.com wrote:
> Peter Zijlstra writes:
>
> >> @@ -4551,18 +4382,34 @@ migrate_task_rq_fair(struct task_struct *p, int
> >> next_cpu)
> >> {
> >>struct sched_entity *se = &p->se;
> >>struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >> + u64
Peter Zijlstra writes:
>> @@ -4551,18 +4382,34 @@ migrate_task_rq_fair(struct task_struct *p, int
>> next_cpu)
>> {
>> struct sched_entity *se = &p->se;
>> struct cfs_rq *cfs_rq = cfs_rq_of(se);
>> +u64 last_update_time;
>>
>> /*
>> + * Task on old CPU catches up with i
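The role of last_update_time in this hunk is to let the migrating task's own
average catch up against the old cfs_rq's clock without taking the remote
runqueue lock: read the timestamp, decay the task's sums to that point, then
detach the load. A rough sketch of that control flow (illustrative names;
locking and the special lockless 64-bit read needed on 32-bit are omitted):

struct toy_avg { unsigned long load_avg; unsigned long long last_update_time; };
struct toy_cfs_rq { struct toy_avg avg; long removed_load_avg; };
struct toy_se { struct toy_avg avg; };

static void toy_decay_to(struct toy_avg *a, unsigned long long now)
{
	/* Stand-in for the real geometric decay of the sums up to @now. */
	a->last_update_time = now;
}

static void toy_migrate_out(struct toy_se *se, struct toy_cfs_rq *old_cfs_rq)
{
	/* Age the task's average using the old cfs_rq's clock. */
	toy_decay_to(&se->avg, old_cfs_rq->avg.last_update_time);

	/* Record the departing load so the old CPU can fold it out later
	 * (an atomic add in the real code; see the hunk quoted below). */
	old_cfs_rq->removed_load_avg += se->avg.load_avg;
}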
> +static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> {
> + int decayed;
>
> + if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> + long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> + cfs_rq->avg.load_avg =
> subtract_until_ze
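This xchg is the other half of the lockless migration accounting: remote CPUs
only ever add a departing task's load into removed_load_avg, and the owning
CPU drains the whole batch with an exchange on its next update, clamped at
zero so accumulated rounding error cannot drive the average negative. A
self-contained C11 sketch of the fold (illustrative names):

#include <stdatomic.h>

struct toy_cfs_rq {
	long load_avg;
	atomic_long removed_load_avg;
};

/* Called from the CPU a task is leaving (possibly remote to the rq owner). */
static void toy_remove_load(struct toy_cfs_rq *cfs_rq, long task_load)
{
	atomic_fetch_add(&cfs_rq->removed_load_avg, task_load);
}

/* Called by the owning CPU during its regular average update. */
static void toy_fold_removed(struct toy_cfs_rq *cfs_rq)
{
	long r = atomic_exchange(&cfs_rq->removed_load_avg, 0);

	cfs_rq->load_avg = cfs_rq->load_avg > r ? cfs_rq->load_avg - r : 0;
}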
On Fri, Jul 18, 2014 at 07:26:06AM +0800, Yuyang Du wrote:
> -static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int
> force_update)
> +/* Add the load generated by se into cfs_rq's load average */
> +static inline void enqueue_entity_load_avg(struct sched_entity *se)
> {
> + struc
On Fri, Jul 18, 2014 at 07:26:06AM +0800, Yuyang Du wrote:
> -static inline void __update_tg_runnable_avg(struct sched_avg *sa,
> - struct cfs_rq *cfs_rq)
> -{
> - struct task_group *tg = cfs_rq->tg;
> - long contrib;
> -
> - /* The fractio
On Fri, Jul 18, 2014 at 07:26:06AM +0800, Yuyang Du wrote:
> @@ -665,20 +660,27 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct
> sched_entity *se)
> }
>
> #ifdef CONFIG_SMP
> -static unsigned long task_h_load(struct task_struct *p);
>
> -static inline void __update_task_entity_cont
Hi Vincent,
On Fri, Jul 18, 2014 at 11:43:00AM +0200, Vincent Guittot wrote:
> > @@ -2291,23 +2299,24 @@ static __always_inline int
> > __update_entity_runnable_avg(u64 now,
> > delta >>= 10;
> > if (!delta)
> > return 0;
> > - sa->last_runnable_update = now;
On 18 July 2014 01:26, Yuyang Du wrote:
> The idea of per-entity runnable load average (letting runnable time contribute
> to load weight) was proposed by Paul Turner, and it is still followed by this
> rewrite. The rewrite is done to the following ends:
>
> 1. cfs_rq's load average (namely
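For readers new to the thread, "runnable time contributes to load weight"
means an entity's load is its weight scaled by the fraction of time it was
runnable, with that fraction kept as a geometrically decayed sum over ~1 ms
periods. A summary sketch of the scheme as discussed in the thread
(illustrative code, not taken from the patchset):

#include <math.h>
#include <stddef.h>

#define LOAD_AVG_MAX	47742	/* sum of the full geometric series */

/*
 * u[0..n) holds the runnable microseconds of the n most recent ~1ms periods,
 * newest first; each older period is decayed by y, with y^32 == 1/2.
 */
static unsigned long toy_load_avg(const unsigned int *u, size_t n,
				  unsigned long weight)
{
	const double y = pow(0.5, 1.0 / 32.0);
	double sum = 0.0;

	for (size_t i = 0; i < n; i++)
		sum += u[i] * pow(y, (double)i);

	return (unsigned long)(sum * weight / LOAD_AVG_MAX);
}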