From: jun qian <[email protected]>

When sched_schedstat changes from 0 to 1, some sched entities may already
be on the runqueue, so their se->statistics.wait_start will still be 0.
This makes the delta, rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start,
wrong. We need to avoid this scenario.
Signed-off-by: jun qian <[email protected]>
Signed-off-by: Yafang Shao <[email protected]>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 658aa7a..dd7c3bb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -908,6 +908,14 @@ static void update_curr_fair(struct rq *rq)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When the sched_schedstat changes from 0 to 1, some sched se maybe
+	 * already in the runqueue, the se->statistics.wait_start will be 0.
+	 * So it will let the delta wrong. We need to avoid this scenario.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
--
1.8.3.1
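
For illustration only, here is a minimal user-space C sketch of the failure mode the guard avoids. The struct layout, mock_rq_clock() and wait_end() are simplified stand-ins invented for this example, not the kernel's actual types or API; the real change lives in kernel/sched/fair.c as shown in the hunk above.

	#include <stdio.h>
	#include <stdint.h>

	/* Hypothetical, simplified stand-in for the per-entity statistics. */
	struct sched_statistics {
		uint64_t wait_start;	/* 0 if enqueued while schedstats were off */
		uint64_t wait_sum;
	};

	/* Pretend the runqueue clock currently reads ~100 seconds, in ns. */
	static uint64_t mock_rq_clock(void)
	{
		return 100ULL * 1000 * 1000 * 1000;
	}

	/* Models the delta computation at the end of a wait period. */
	static void wait_end(struct sched_statistics *stats)
	{
		/*
		 * The guard from the patch: a zero wait_start means the entity
		 * was already queued when schedstats were switched on, so there
		 * is no valid start timestamp to subtract from.
		 */
		if (stats->wait_start == 0) {
			printf("wait_start == 0, skipping bogus delta\n");
			return;
		}

		uint64_t delta = mock_rq_clock() - stats->wait_start;
		stats->wait_sum += delta;
		printf("accounted wait delta: %llu ns\n",
		       (unsigned long long)delta);
	}

	int main(void)
	{
		/* Enqueued while sched_schedstat was 0: wait_start never set. */
		struct sched_statistics stale = { .wait_start = 0 };

		/* Enqueued after schedstats were enabled: wait_start recorded. */
		struct sched_statistics fresh = {
			.wait_start = mock_rq_clock() - 5000,
		};

		wait_end(&stale);	/* without the guard: ~100s of fake "wait" */
		wait_end(&fresh);	/* accounts a sane 5000 ns delta */
		return 0;
	}

Without the early return, the stale entity would charge the entire runqueue clock value as wait time, inflating wait_sum the moment schedstats are enabled.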

