Hi Qais,

On 6/24/20 6:26 PM, Qais Yousef wrote:
This series attempts to address the report that the uclamp logic could be
expensive in some cases and shows a regression in netperf UDP_STREAM under
certain conditions.

The first patch is a fix for how struct uclamp_rq is initialized, which is
required by the second patch that contains the real 'fix'.

Worth noting that the root cause of the overhead is believed to be system
specific or related to potential code/data layout issues, leading to worse
I/D $ (instruction/data cache) performance.

Different systems exhibited different behaviors, and the regression did
disappear in certain kernel versions while attempting to reproduce it.

More info can be found here:

https://lore.kernel.org/lkml/20200616110824.dgkkbyapn3io6wik@e107158-lin/

Having the static key seemed the best thing to do to ensure the effect of
uclamp is minimized for kernels that compile it in but don't have a
userspace that uses it. This will allow distros to distribute uclamp
capable kernels by default without having to compromise on performance for
the systems that could be affected.
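
As a rough illustration of the pattern described above, here is a minimal
sketch of a static-key-guarded fast path. It is an assumption-based sketch,
not the patch itself; the key name follows the "uclamp_used" naming
mentioned in the changelog below, and the helpers are simplified:

/*
 * Minimal sketch only -- not the actual patch. Assumes the usual
 * kernel/sched/sched.h context (struct rq, struct task_struct).
 */
DEFINE_STATIC_KEY_FALSE(sched_uclamp_used);

static inline void uclamp_rq_inc(struct rq *rq, struct task_struct *p)
{
        /*
         * Until userspace touches uclamp, the test below is patched into
         * a NOP and the enqueue/dequeue fast path pays almost nothing.
         */
        if (!static_branch_unlikely(&sched_uclamp_used))
                return;

        /* ... full per-clamp accounting only once the key is enabled ... */
}

static void uclamp_set_used(void)
{
        /* Called from any path where userspace can modify clamp values. */
        static_branch_enable(&sched_uclamp_used);
}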

Changes in v3:
        * Avoid double negatives and rename the static key to uclamp_used
        * Unconditionally enable the static key through any of the paths where
          the user can modify the default uclamp value.
        * Use a C99 named struct initializer for struct uclamp_rq, which is
          easier to read than the memset() (a small sketch follows this
          list).
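
As a side note for that last item, a tiny sketch of what the named
(designated) initializer style looks like; field and helper names here are
assumptions for illustration, not necessarily the exact patch contents:

/* Illustrative only; names are assumed, not taken from the patch. */
static void init_uclamp_rq(struct rq *rq)
{
        enum uclamp_id clamp_id;
        struct uclamp_rq *uc_rq = rq->uclamp;

        for_each_clamp_id(clamp_id) {
                /* The named initializer zeroes the remaining fields. */
                uc_rq[clamp_id] = (struct uclamp_rq) {
                        .value = uclamp_none(clamp_id)
                };
        }
}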

Changes in v2:
        * Add more info in the commit message about the result of perf diff to
          demonstrate that the activate/deactivate_task pressure is reduced in
          the fast path.

        * Fix sparse warning reported by the test robot.

        * Add an extra commit about using static_branch_likely() instead of
          static_branch_unlikely().
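
For context on that last item, a hedged illustration of the difference
between the two forms (do_uclamp_work() is a made-up placeholder): both
test the same key, and the likely/unlikely variant only hints which outcome
the compiler should keep as the inline, fall-through path.

/* Illustration only; do_uclamp_work() is hypothetical. */
static void uclamp_branch_example(struct rq *rq)
{
        /* Expected-true: the work stays on the inline fall-through path. */
        if (static_branch_likely(&sched_uclamp_used))
                do_uclamp_work(rq);

        /* Expected-false: the work is typically placed out of line. */
        if (static_branch_unlikely(&sched_uclamp_used))
                do_uclamp_work(rq);
}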

Thanks

--
Qais Yousef

Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
CC: Patrick Bellasi <[email protected]>
Cc: Chris Redpath <[email protected]>
Cc: Lukasz Luba <[email protected]>
Cc: [email protected]

Qais Yousef (2):
   sched/uclamp: Fix initialization of struct uclamp_rq
   sched/uclamp: Protect uclamp fast path code with static key

  kernel/sched/core.c | 75 +++++++++++++++++++++++++++++++++++++++------
  1 file changed, 66 insertions(+), 9 deletions(-)



The results for this v3 series from mmtests netperf-udp (30 runs for each
UDP size) are good.

                    v5.7-rc7-base-noucl  v5.7-rc7-ucl-tsk-nofix  v5.7-rc7-ucl-tsk-grp-fix_v3
Hmean  send-64        62.15 (  0.00%)      59.65 * -4.02%*         65.83 *  5.93%*
Hmean  send-128      122.88 (  0.00%)     119.37 * -2.85%*        133.20 *  8.40%*
Hmean  send-256      244.85 (  0.00%)     234.26 * -4.32%*        264.01 *  7.83%*
Hmean  send-1024     919.24 (  0.00%)     880.67 * -4.20%*       1005.54 *  9.39%*
Hmean  send-2048    1689.45 (  0.00%)    1647.54 * -2.48%*       1845.64 *  9.25%*
Hmean  send-3312    2542.36 (  0.00%)    2485.23 * -2.25%*       2729.11 *  7.35%*
Hmean  send-4096    2935.69 (  0.00%)    2861.09 * -2.54%*       3161.16 *  7.68%*
Hmean  send-8192    4800.35 (  0.00%)    4680.09 * -2.51%*       5090.38 *  6.04%*
Hmean  send-16384   7473.66 (  0.00%)    7349.60 * -1.66%*       7786.42 *  4.18%*
Hmean  recv-64        62.15 (  0.00%)      59.65 * -4.03%*         65.82 *  5.91%*
Hmean  recv-128      122.88 (  0.00%)     119.37 * -2.85%*        133.20 *  8.40%*
Hmean  recv-256      244.84 (  0.00%)     234.26 * -4.32%*        264.01 *  7.83%*
Hmean  recv-1024     919.24 (  0.00%)     880.67 * -4.20%*       1005.54 *  9.39%*
Hmean  recv-2048    1689.44 (  0.00%)    1647.54 * -2.48%*       1845.06 *  9.21%*
Hmean  recv-3312    2542.36 (  0.00%)    2485.23 * -2.25%*       2728.74 *  7.33%*
Hmean  recv-4096    2935.69 (  0.00%)    2861.09 * -2.54%*       3160.74 *  7.67%*
Hmean  recv-8192    4800.35 (  0.00%)    4678.15 * -2.55%*       5090.36 *  6.04%*
Hmean  recv-16384   7473.63 (  0.00%)    7349.52 * -1.66%*       7786.25 *  4.18%*

I am happy to re-run a v4 if there is one, but for now:

Tested-by: Lukasz Luba <[email protected]>

Regards,
Lukasz
