On Mon, 06 Jun, at 08:20:39AM, Yuyang Du wrote:
> Newly forked task has not been enqueued, so should not be removed from
> cfs_rq in task_move_group_fair(). To do so, we need to pass the fork
> information all the way from sched_move_task() to task_move_group_fair().
>
> Signed-off-by: Yuyang Du <yuyang...@intel.com>
> ---
>  kernel/sched/auto_group.c | 2 +-
>  kernel/sched/core.c       | 8 ++++----
>  kernel/sched/fair.c       | 8 ++++++--
>  kernel/sched/sched.h      | 4 ++--
>  4 files changed, 13 insertions(+), 9 deletions(-)
[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 28b3415..370e284 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8429,9 +8429,13 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
>  }
>
>  #ifdef CONFIG_FAIR_GROUP_SCHED
> -static void task_move_group_fair(struct task_struct *p)
> +static void task_move_group_fair(struct task_struct *p, bool fork)
>  {
> -	detach_task_cfs_rq(p);
> +	/*
> +	 * Newly forked task should not be removed from any cfs_rq
> +	 */
> +	if (!fork)
> +		detach_task_cfs_rq(p);
>  	set_task_rq(p, task_cpu(p));
>  	attach_task_cfs_rq(p);
>  	/*

Wouldn't it be more symmetric to add

	if (p->se.avg.last_update_time)
		__update_load_avg(...)

to detach_entity_load_avg(), so that there's no need to pass the @fork
parameter all the way down the stack and, more importantly, so that
nobody has to remember to *not* call detach_task_cfs_rq() for a newly
forked task if the code changes in the future?

Additionally, adding the above check looks like it would fix a race that
(I think) occurs if a newly forked task's class is changed before it is
enqueued in wake_up_new_task().