On 01/30/2016 09:44 AM, Frederic Weisbecker wrote:
> On Fri, Jan 29, 2016 at 10:36:02PM -0500, r...@redhat.com wrote:
>> From: Rik van Riel <r...@redhat.com>
>>
>> When running a microbenchmark calling an invalid syscall number
>> in a loop, on a nohz_full CPU, we spend a full 9% of our CPU
>> time in __acct_update_integrals.
>>
>> This function converts cputime_t to jiffies, to a timeval, only to
>> convert the timeval back to microseconds before discarding it.
>>
>> This patch leaves __acct_update_integrals functionally equivalent,
>> but speeds things up by about 12%, with 10 million calls to an
>> invalid syscall number dropping from 3.7 to 3.25 seconds.
>>
>> Signed-off-by: Rik van Riel <r...@redhat.com>
>> ---
>>  kernel/tsacct.c | 19 +++++++++----------
>>  1 file changed, 9 insertions(+), 10 deletions(-)
>>
>> diff --git a/kernel/tsacct.c b/kernel/tsacct.c
>> index 975cb49e32bf..41667b23dbd0 100644
>> --- a/kernel/tsacct.c
>> +++ b/kernel/tsacct.c
>> @@ -93,9 +93,9 @@ void xacct_add_tsk(struct taskstats *stats, struct task_struct *p)
>>  {
>>      struct mm_struct *mm;
>>  
>> -    /* convert pages-usec to Mbyte-usec */
>> -    stats->coremem = p->acct_rss_mem1 * PAGE_SIZE / MB;
>> -    stats->virtmem = p->acct_vm_mem1 * PAGE_SIZE / MB;
>> +    /* convert pages-nsec/KB to Mbyte-usec, see __acct_update_integrals */
>> +    stats->coremem = p->acct_rss_mem1 * PAGE_SIZE / (1000 * KB);
>> +    stats->virtmem = p->acct_vm_mem1 * PAGE_SIZE / (1000 * KB);
>>      mm = get_task_mm(p);
>>      if (mm) {
>>              /* adjust to KB unit */
>> @@ -125,22 +125,21 @@ static void __acct_update_integrals(struct task_struct *tsk,
>>  {
>>      if (likely(tsk->mm)) {
>>              cputime_t time, dtime;
>> -            struct timeval value;
>>              unsigned long flags;
>>              u64 delta;
>>  
>>              local_irq_save(flags);
>>              time = stime + utime;
>>              dtime = time - tsk->acct_timexpd;
>> -            jiffies_to_timeval(cputime_to_jiffies(dtime), &value);
>> -            delta = value.tv_sec;
>> -            delta = delta * USEC_PER_SEC + value.tv_usec;
>> +            delta = cputime_to_nsecs(dtime);
> 
> You might want to add a comment specifying why we don't call
> cputime_to_usecs() directly (because we optimize if delta < TICK_NSEC).
> 
> Although this has a good impact on nohz_full, it might have a tiny bad
> one on !nohz_full, because now we first convert jiffies to nsecs (which
> implies a multiplication by 1000), only to divide by 1000 again later.
> Now this is ok because I plan to convert tsk->utime/stime to nsecs and
> thus remove most of the cputime_t use and conversions everywhere.

Isn't cputime_t in nanoseconds even on !nohz_full systems nowadays,
due to sched_clock?

Also, a multiplication is essentially instantaneous compared to
a division, which is why Peter suggested going this way around.
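
In case it helps, here is the hot path before and after, pieced
together from the diff above (just a sketch, not copied verbatim
from the tree):

	/* old: cputime -> jiffies -> timeval -> usecs, conversions on every call */
	jiffies_to_timeval(cputime_to_jiffies(dtime), &value);
	delta = value.tv_sec;
	delta = delta * USEC_PER_SEC + value.tv_usec;
	if (delta == 0)
		goto out;

	/* new: one multiplication to nanoseconds; the only divide left
	 * further down (by 1024) is a power of two, so the compiler can
	 * emit a shift instead of a division */
	delta = cputime_to_nsecs(dtime);
	if (delta < TICK_NSEC)
		goto out;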

>>  
>> -            if (delta == 0)
>> +            if (delta < TICK_NSEC)
>>                      goto out;
> 
> 
>> +
>>              tsk->acct_timexpd = time;
>> -            tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm);
>> -            tsk->acct_vm_mem1 += delta * tsk->mm->total_vm;
>> +            /* The final unit will be Mbyte-usecs, see xacct_add_tsk */
>> +            tsk->acct_rss_mem1 += delta * get_mm_rss(tsk->mm) / 1024;
>> +            tsk->acct_vm_mem1 += delta * tsk->mm->total_vm / 1024;
> 
> The use of 1024 and the change to MB above are confusing me. Why are we
> doing that?
> 
> Thanks.

So the compiler can just do a right shift in the frequently called
code, and have no divide at all left in __acct_update_integrals.
Reducing the value here also seems useful for preventing overflows.

The divide is saved for when the statistics are read out to
userspace.
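
Spelling the unit bookkeeping out (delta_nsec and delta_usec are just
illustrative names, with KB = 1024 and MB = 1024 * 1024 as the
arithmetic in the patch assumes):

	/* hot path accumulates pages * nsec / 1024; the / 1024 is a shift */
	acct_rss_mem1 += delta_nsec * rss_pages / 1024;

	/* readout does the one real divide, down to Mbyte-usec */
	coremem = acct_rss_mem1 * PAGE_SIZE / (1000 * KB)
	        = rss_pages * delta_nsec * PAGE_SIZE / (1024 * 1000 * 1024)
	        = rss_pages * (delta_nsec / 1000) * PAGE_SIZE / (1024 * 1024)
	        = rss_pages * delta_usec * PAGE_SIZE / MB

which is, up to rounding, exactly what the old "delta_usec * pages,
then / MB at readout" code computed, just with the expensive division
deferred out of the hot path.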

-- 
All rights reversed
