Even with coscheduling, we define the fields rq->nr_running and rq->load of per-CPU runqueues to represent the total number of tasks and the total load on that CPU, respectively, so that existing code continues to work as expected.
Make sure to still account load changes on per-CPU runqueues.

The change in set_next_entity() just silences a warning. The code looks
bogus even without coscheduling, as the weight of an SE is independent
of the weight of the runqueue when task groups are involved. It's just
for statistics anyway.

Signed-off-by: Jan H. Schönherr <[email protected]>
---
 kernel/sched/fair.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fff88694560c..0bba924b40ba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2741,8 +2741,8 @@ static void
 account_entity_enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	update_load_add(&cfs_rq->load, se->load.weight);
-	if (!parent_entity(se))
-		update_load_add(&rq_of(cfs_rq)->load, se->load.weight);
+	if (!parent_entity(se) || is_sd_se(parent_entity(se)))
+		update_load_add(&hrq_of(cfs_rq)->load, se->load.weight);
 #ifdef CONFIG_SMP
 	if (entity_is_task(se)) {
 		struct rq *rq = rq_of(cfs_rq);
@@ -2758,8 +2758,8 @@ static void
 account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	update_load_sub(&cfs_rq->load, se->load.weight);
-	if (!parent_entity(se))
-		update_load_sub(&rq_of(cfs_rq)->load, se->load.weight);
+	if (!parent_entity(se) || is_sd_se(parent_entity(se)))
+		update_load_sub(&hrq_of(cfs_rq)->load, se->load.weight);
 #ifdef CONFIG_SMP
 	if (entity_is_task(se)) {
 		account_numa_dequeue(rq_of(cfs_rq), task_of(se));
@@ -4122,7 +4122,8 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	 * least twice that of our own weight (i.e. dont track it
 	 * when there are only lesser-weight tasks around):
	 */
-	if (schedstat_enabled() && rq_of(cfs_rq)->load.weight >= 2*se->load.weight) {
+	if (schedstat_enabled() &&
+	    hrq_of(cfs_rq)->load.weight >= 2 * se->load.weight) {
 		schedstat_set(se->statistics.slice_max,
 			max((u64)schedstat_val(se->statistics.slice_max),
 			    se->sum_exec_runtime - se->prev_sum_exec_runtime));
-- 
2.9.3.1.gcba166c.dirty
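[Note: the hunks above rely on two helpers introduced elsewhere in the coscheduling series, is_sd_se() and hrq_of(). As a rough sketch of the idea (the predicate name below is hypothetical and not part of the series): an entity's weight counts toward the per-CPU runqueue's load when it has no parent or when its parent is a scheduling-domain SE, and hrq_of() is used so the update always lands on the per-CPU runqueue even where rq_of() would resolve to a non-CPU runqueue.

/*
 * Illustrative sketch only; se_counts_toward_cpu_load() is a hypothetical
 * helper, not part of the series.  It captures the condition used in
 * account_entity_enqueue()/account_entity_dequeue() above: entities that
 * are top level from the CPU's point of view (no parent, or a parent that
 * is a scheduling-domain SE) contribute their weight to the per-CPU
 * runqueue's load, which is updated via hrq_of(cfs_rq) instead of
 * rq_of(cfs_rq).
 */
static inline bool se_counts_toward_cpu_load(struct sched_entity *se)
{
	return !parent_entity(se) || is_sd_se(parent_entity(se));
}

With that predicate, both accounting paths reduce to: if (se_counts_toward_cpu_load(se)) update_load_add()/update_load_sub() on &hrq_of(cfs_rq)->load with se->load.weight, which preserves the invariant stated in the changelog that rq->load of a per-CPU runqueue reflects the total load on that CPU.]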

