On 08/04/2014 04:28 PM, Jason Low wrote:
When running workloads on 2+ socket systems, perf profiles show that the
update_cfs_rq_blocked_load function consistently takes up a noticeable
percentage of run time. This is especially apparent on an 8 socket
machine. For example, when running the AIM7 custom workload, we see:

    4.18%        reaim  [kernel.kallsyms]        [k] update_cfs_rq_blocked_load

Much of the contention is in __update_cfs_rq_tg_load_contrib when we
update the tg load contribution stats. However, it turns out that in many
cases nothing has actually changed and the computed "tg_contrib" delta is 0.

This patch adds a check in __update_cfs_rq_tg_load_contrib to skip updating
the tg load contribution stats when there is nothing to update. This avoids
unnecessary cacheline contention on tg->load_avg. In the above case, with the
patch, perf reports that the total time spent in this function went down by
more than 3x:

    1.18%        reaim  [kernel.kallsyms]        [k] update_cfs_rq_blocked_load

Signed-off-by: Jason Low <jason.l...@hp.com>
---
  kernel/sched/fair.c |    3 +++
  1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..8d4cc72 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2377,6 +2377,9 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
        tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
        tg_contrib -= cfs_rq->tg_load_contrib;

+       if (!tg_contrib)
+               return;
+
        if (force_update || abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
                atomic_long_add(tg_contrib, &tg->load_avg);
                cfs_rq->tg_load_contrib += tg_contrib;
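
A compilable, stand-alone sketch of the idea follows (not the kernel code:
the fake_tg/fake_cfs_rq types, update_contrib() and the C11 atomics are
simplifications for illustration). It shows that a zero delta can be
detected with purely local reads, so the atomic RMW that would dirty the
shared tg->load_avg cache line is skipped entirely:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_tg {
        atomic_long load_avg;           /* shared by every CPU/socket */
};

struct fake_cfs_rq {
        long runnable_load_avg;         /* local to this cfs_rq */
        long blocked_load_avg;
        long tg_load_contrib;           /* contribution last published */
        struct fake_tg *tg;
};

static void update_contrib(struct fake_cfs_rq *cfs_rq, int force_update)
{
        long tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg
                          - cfs_rq->tg_load_contrib;

        /* The added check: a zero delta needs no write to the hot line. */
        if (!tg_contrib)
                return;

        if (force_update || labs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
                /* Atomic RMW dirties the shared tg->load_avg cache line. */
                atomic_fetch_add(&cfs_rq->tg->load_avg, tg_contrib);
                cfs_rq->tg_load_contrib += tg_contrib;
        }
}

int main(void)
{
        struct fake_tg tg = { .load_avg = 0 };
        struct fake_cfs_rq rq = { .runnable_load_avg = 100, .tg = &tg };

        update_contrib(&rq, 0); /* delta 100: published to tg->load_avg */
        update_contrib(&rq, 0); /* delta 0: returns before the atomic op */
        printf("tg load_avg = %ld\n", atomic_load(&tg.load_avg));
        return 0;
}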
Reviewed-by: Waiman Long <waiman.l...@hp.com>
