On 19/06/2020 19:20, Qais Yousef wrote:
> This series attempts to address the report that uclamp logic could be
> expensive sometimes and shows a regression in netperf UDP_STREAM under
> certain conditions.
>
> The first patch is a fix for how struct uclamp_rq is initialized which is
> required by the 2nd patch which contains the real 'fix'.
>
> Worth noting that the root cause of the overhead is believed to be system
> specific or related to potential certain code/data layout issues, leading to
> worse I/D $ performance.
>
> Different systems exhibited different behaviors and the regression did
> disappear in certain kernel versions while attempting to reproduce it.
>
> More info can be found here:
>
> https://lore.kernel.org/lkml/20200616110824.dgkkbyapn3io6wik@e107158-lin/
>
> Having the static key seemed the best thing to do to ensure the effect of
> uclamp is minimized for kernels that compile it in but don't have a userspace
> that uses it, which will allow distros to distribute uclamp capable kernels by
> default without having to compromise on performance for some systems that
> could be affected.
My test data indicates that the static key w/o any uclamp users (3)
brings the performance number for the 'perf bench sched pipe' workload
back to the !CONFIG_UCLAMP_TASK level (1).
platform:
  Arm64 Hikey960 (only little CPUs [0-3]), no CPUidle,
  performance CPUfreq governor

workload:
  perf stat -n -r 20 -- perf bench sched pipe -T -l 100000
(A) *** Performance results ***
(1) tip/sched/core
    # CONFIG_UCLAMP_TASK is not set
    *1.39285* +- 0.00191 seconds time elapsed ( +- 0.14% )

(2) tip/sched/core
    CONFIG_UCLAMP_TASK=y
    *1.42877* +- 0.00181 seconds time elapsed ( +- 0.13% )

(3) tip/sched/core + opt_skip_uclamp_v2
    CONFIG_UCLAMP_TASK=y
    *1.38833* +- 0.00291 seconds time elapsed ( +- 0.21% )

(4) tip/sched/core + opt_skip_uclamp_v2
    CONFIG_UCLAMP_TASK=y
    echo 512 > /proc/sys/kernel/sched_util_clamp_min (enable uclamp)
    *1.42062* +- 0.00238 seconds time elapsed ( +- 0.17% )
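For easier comparison, the relative deltas between the runs above can be
computed like this (a quick sketch; mean values only, and since the run-to-run
noise is ~0.2%, sub-0.3% deltas are within the noise):

```python
# Relative deltas between the elapsed-time results (A)(1)-(4) above.

def pct(a, base):
    """Percentage change of a relative to base."""
    return (a - base) / base * 100.0

elapsed = {
    "no_uclamp":      1.39285,  # (1) !CONFIG_UCLAMP_TASK
    "uclamp":         1.42877,  # (2) CONFIG_UCLAMP_TASK=y
    "static_key_off": 1.38833,  # (3) + opt_skip_uclamp_v2, no uclamp users
    "static_key_on":  1.42062,  # (4) + opt_skip_uclamp_v2, uclamp enabled
}

print(f"(2) vs (1): {pct(elapsed['uclamp'], elapsed['no_uclamp']):+.2f}%")
print(f"(3) vs (1): {pct(elapsed['static_key_off'], elapsed['no_uclamp']):+.2f}%")
print(f"(4) vs (2): {pct(elapsed['static_key_on'], elapsed['uclamp']):+.2f}%")
```

I.e. uclamp costs ~2.6% on this workload, and with the static key and no
uclamp users the number is back within the noise of (1).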
(B) *** Profiling on CPU0 and CPU1 ***
(CPU2 and CPU3 were additionally hotplugged out to get consistent hit numbers)
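The per-CPU tables below are ftrace function-profiler output; for reference,
the collection looks roughly like this (a sketch, assuming root access and
debugfs mounted at /sys/kernel/debug):

```shell
# Hotplug out CPU2/CPU3 so only CPU0/CPU1 run the workload
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu3/online

cd /sys/kernel/debug/tracing

# Profile only the two functions of interest
echo 'activate_task deactivate_task' > set_ftrace_filter

# Disabling and re-enabling the profiler resets its statistics
echo 0 > function_profile_enabled
echo 1 > function_profile_enabled
perf stat -n -r 20 -- perf bench sched pipe -T -l 100000
echo 0 > function_profile_enabled

# Per-CPU hit counts and average times (the tables below)
cat trace_stat/function0   # CPU0
cat trace_stat/function1   # CPU1
```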
(1)
CPU0: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1997346   2207642 us   *1.105* us   0.033 us
      activate_task     1997391   1840057 us   *0.921* us   0.054 us
CPU1: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1997455   2225960 us   1.114 us     0.034 us
      activate_task     1997410   1842603 us   0.922 us     0.052 us
(2)
CPU0: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1998538   2419719 us   *1.210* us   0.061 us
      activate_task     1997119   1960401 us   *0.981* us   0.034 us
CPU1: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1996597   2400760 us   1.202 us     0.059 us
      activate_task     1998016   1985013 us   0.993 us     0.028 us
(3)
CPU0: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1997525   2155416 us   *1.079* us   0.020 us
      activate_task     1997874   1899002 us   *0.950* us   0.044 us
CPU1: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1997935   2118648 us   1.060 us     0.017 us
      activate_task     1997586   1895162 us   0.948 us     0.044 us
(4)
CPU0: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1998246   2428121 us   *1.215* us   0.062 us
      activate_task     1998252   2132141 us   *1.067* us   0.020 us
CPU1: Function          Hit       Time         Avg          s^2
      --------          ---       ----         ---          ---
      deactivate_task   1996154   2414194 us   1.209 us     0.060 us
      activate_task     1996148   2140667 us   1.072 us     0.021 us
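Translating the CPU0 averages into relative per-call overhead (a quick sketch
using the numbers from the profiles above):

```python
# Per-call overhead of deactivate_task()/activate_task() on CPU0,
# relative to the !CONFIG_UCLAMP_TASK baseline (1), in percent.

def pct(a, base):
    """Percentage change of a relative to base."""
    return (a - base) / base * 100.0

# CPU0 average times in us, from profiles (1)-(4):
deactivate = dict(base=1.105, uclamp=1.210, key_off=1.079, key_on=1.215)
activate   = dict(base=0.921, uclamp=0.981, key_off=0.950, key_on=1.067)

for name, avg in (("deactivate_task", deactivate), ("activate_task", activate)):
    print(f"{name}: (2) vs (1): {pct(avg['uclamp'], avg['base']):+.1f}%, "
          f"(3) vs (1): {pct(avg['key_off'], avg['base']):+.1f}%, "
          f"(4) vs (1): {pct(avg['key_on'], avg['base']):+.1f}%")
```

So the ~10%/~7% per-call cost of uclamp in (2) disappears in (3) with the
static key and no uclamp users.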