On 15 June 2016 at 21:19, Yuyang Du <yuyang...@intel.com> wrote:
> On Mon, May 30, 2016 at 05:52:20PM +0200, Vincent Guittot wrote:
>> The cfs_rq->avg.last_update_time is initialized to 0, with the main
>> effect that the 1st sched_entity to be attached will keep its
>> last_update_time set to 0 and will be attached once again during the
>> enqueue.
>> Initialize cfs_rq->avg.last_update_time to 1 instead.
>>
>> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
>> ---
>>
>> v3:
>> - add initialization of load_last_update_time_copy for non-64bit systems
>> - move init into init_cfs_rq
>>
>> v2:
>> - rq_clock_task(rq_of(cfs_rq)) can't be used because the lock is not held
>>
>>  kernel/sched/fair.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 218f8e8..86be9c1 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -8459,6 +8459,16 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
>>         cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
>>  #endif
>>  #ifdef CONFIG_SMP
>> +       /*
>> +        * Set last_update_time to something different from 0 to make
>> +        * sure the 1st sched_entity will not be attached twice: once
>> +        * when attaching the task to the group and one more time when
>> +        * enqueueing the task.
>> +        */
>> +       cfs_rq->avg.last_update_time = 1;
>> +#ifndef CONFIG_64BIT
>> +       cfs_rq->load_last_update_time_copy = 1;
>> +#endif
>>         atomic_long_set(&cfs_rq->removed_load_avg, 0);
>>         atomic_long_set(&cfs_rq->removed_util_avg, 0);
>>  #endif
>
> Then, when enqueued, both cfs_rq and task will be decayed to 0, due to
> a large gap between 1 and now, no?
Yes, exactly like it is done currently, just 1ns later.
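
For illustration, here is a minimal standalone model of PELT-style decay.
It is not the kernel's __update_load_avg(); it only assumes the usual
1024us periods and a 32-period half-life (y^32 = 0.5), to show that any
contribution is decayed to essentially 0 across a gap of that size, so
starting the window at 1 instead of 0 only shifts it by 1ns:

        /*
         * Illustrative only: a self-contained model of PELT-style
         * geometric decay, assuming 1024us periods and a 32-period
         * half-life (y^32 = 0.5). Not the kernel's implementation.
         * Build with: gcc pelt_decay.c -lm
         */
        #include <stdio.h>
        #include <stdint.h>
        #include <math.h>

        /* Decay 'val' across 'delta_ns' of elapsed time, PELT-style. */
        static double decay_load(double val, uint64_t delta_ns)
        {
                double periods = (double)(delta_ns / 1024 / 1000); /* 1024us periods */
                double y = pow(0.5, 1.0 / 32.0);                   /* y^32 == 1/2 */

                return val * pow(y, periods);
        }

        int main(void)
        {
                /* gap between last_update_time == 1 and a "now" of ~1s */
                uint64_t gap_ns = 1000000000ULL - 1;

                printf("load 1024 decayed over a ~1s gap: %g\n",
                       decay_load(1024.0, gap_ns));
                return 0;
        }

Running it prints a value on the order of 1e-6, i.e. the initial
contribution is effectively zero by the time of the first real update,
whether the window starts at 0 or at 1.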