On 12 June 2018 at 11:20, Quentin Perret <quentin.per...@arm.com> wrote:
> On Tuesday 12 Jun 2018 at 11:16:56 (+0200), Vincent Guittot wrote:
>> The time spent in interrupt context can be significant, but it is not
>> reflected in the utilization of a CPU when deciding to choose an OPP.
>> Now that we have access to this metric, schedutil can take it into
>> account when selecting the OPP for a CPU.
>> The rq utilization doesn't see the time spent in interrupt context and
>> reports its value over the normal context time window. We need to
>> compensate for this when adding the interrupt utilization.
>>
>> The CPU utilization is:
>>   irq util_avg + (1 - irq util_avg / max capacity) * \Sum rq util_avg
>>
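
For illustration, here is a minimal C sketch of that formula; the names
(cpu_util_with_irq(), rq_util_sum, irq_util, max_cap) are placeholders
and not the identifiers used in the actual patch:

/*
 * Illustration only: the rq signals are tracked over the time that
 * remains once interrupt time is removed, so they are scaled by
 * (max_cap - irq_util) / max_cap before the IRQ utilization is added.
 */
static unsigned long cpu_util_with_irq(unsigned long rq_util_sum,
				       unsigned long irq_util,
				       unsigned long max_cap)
{
	unsigned long util;

	/* If interrupts consume the whole capacity, the CPU is fully busy. */
	if (irq_util >= max_cap)
		return max_cap;

	/* (1 - irq util_avg / max capacity) * \Sum rq util_avg */
	util = rq_util_sum * (max_cap - irq_util) / max_cap;

	/* + irq util_avg, capped at the CPU capacity */
	util += irq_util;

	return util < max_cap ? util : max_cap;
}
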
>> A test with iperf on hikey (octo arm64) gives:
>> iperf -c server_address -r -t 5
>>
>>        w/o patch          w/ patch
>> Tx     276 Mbits/sec      304 Mbits/sec  (+10%)
>> Rx     299 Mbits/sec      328 Mbits/sec  (+9%)
>>
>> 8 iterations
>> stdev is lower than 1%
>> Only WFI idle state is enable (shallowest diel state)
>                                             ^^^^
> nit: s/diel/idle
>
> And, out of curiosity, what happens if you leave the idle states
> untouched ? Do you still see an improvement ? Or is it lost in the
> noise ?

The results are less stable because the C-state wake-up time impacts
performance and cpuidle doesn't always pick the right idle state in such
a case. Normally, an app should use the PM QoS cpu_dma_latency interface,
or a driver should use the per-device resume latency constraint; see the
sketch below.
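
As a rough sketch of the userspace side, assuming the standard
/dev/cpu_dma_latency behaviour (a 32-bit value in microseconds, held for
as long as the file descriptor stays open); the 20us figure below is
only an example:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t latency_us = 20;	/* example constraint, in microseconds */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/cpu_dma_latency");
		return 1;
	}

	if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us)) {
		perror("write");
		close(fd);
		return 1;
	}

	/* ... run the latency-sensitive workload here ... */
	pause();

	close(fd);	/* closing the fd releases the constraint */
	return 0;
}

On the driver side, the equivalent per-device constraint is the resume
latency QoS (DEV_PM_QOS_RESUME_LATENCY), which is also exposed to
userspace as the device's power/pm_qos_resume_latency_us attribute.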

>
> Thanks,
> Quentin
