From: Rik van Riel <r...@redhat.com>

It looks like all the call paths that lead to __acct_update_integrals()
already have irqs disabled, so __acct_update_integrals() does not need
to disable irqs itself.

This is very convenient, since about half the CPU time left in this
function was spent in local_irq_save() alone.

Performance of a microbenchmark that calls an invalid syscall
ten million times in a row on a nohz_full CPU improves by 21% vs.
4.5-rc1 with both the removal of divisions from __acct_update_integrals()
and this patch, with runtime dropping from 3.7 to 2.9 seconds.

With these patches applied, the highest remaining CPU user in
the trace is native_sched_clock(), which is addressed in the next
patch.

Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Rik van Riel <r...@redhat.com>
---
 kernel/tsacct.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/tsacct.c b/kernel/tsacct.c
index 9c23584c76c4..31fb6c9746d4 100644
--- a/kernel/tsacct.c
+++ b/kernel/tsacct.c
@@ -124,20 +124,18 @@ static void __acct_update_integrals(struct task_struct *tsk,
                                    cputime_t utime, cputime_t stime)
 {
        cputime_t time, dtime;
-       unsigned long flags;
        u64 delta;
 
        if (!likely(tsk->mm))
                return;
 
-       local_irq_save(flags);
        time = stime + utime;
        dtime = time - tsk->acct_timexpd;
        /* Avoid division: cputime_t is often in nanoseconds already. */
        delta = cputime_to_nsecs(dtime);
 
        if (delta < TICK_NSEC)
-               goto out;
+               return;
 
        tsk->acct_timexpd = time;
        /*
@@ -147,8 +145,6 @@ static void __acct_update_integrals(struct task_struct *tsk,
         */
        tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm) >> 10;
        tsk->acct_vm_mem1 += delta * tsk->mm->total_vm >> 10;
-out:
-       local_irq_restore(flags);
 }
 
 /**
-- 
2.5.0
