On Tue, May 12, 2020 at 02:41:03PM +0200, Peter Zijlstra wrote:
> This completes the ARM64 cap_user_time support.
> 
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> ---
>  arch/arm64/kernel/perf_event.c |   12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -1173,6 +1173,7 @@ void arch_perf_update_userpage(struct pe
>  
>       userpg->cap_user_time = 0;
>       userpg->cap_user_time_zero = 0;
> +     userpg->cap_user_time_short = 0;
>  
>       do {
>               rd = sched_clock_read_begin(&seq);
> @@ -1183,13 +1184,13 @@ void arch_perf_update_userpage(struct pe
>               userpg->time_mult = rd->mult;
>               userpg->time_shift = rd->shift;
>               userpg->time_zero = rd->epoch_ns;
> +             userpg->time_cycle = rd->epoch_cyc;

s/time_cycle/time_cycles/ here, so it matches the ABI field name; or
alternatively, consider renaming the ABI field itself to 'time_cycle'.
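
FWIW, a rough sketch of how userspace is expected to consume the new
fields, assuming the 'time_cycles'/'time_mask' spelling from the uapi
side of this series (the pc->lock seqcount retry loop is omitted for
brevity):

  #include <linux/perf_event.h>

  /*
   * Sketch only: convert a raw counter value to a sched_clock
   * timestamp using the self-monitoring fields, following the
   * formulas documented in include/uapi/linux/perf_event.h.
   */
  static __u64 cyc_to_ns(const struct perf_event_mmap_page *pc, __u64 cyc)
  {
          __u64 quot, rem;

          /* Extend a short (wrapping) counter around its epoch. */
          if (pc->cap_user_time_short)
                  cyc = pc->time_cycles +
                        ((cyc - pc->time_cycles) & pc->time_mask);

          /* Split the multiplication to avoid 64-bit overflow. */
          quot = cyc >> pc->time_shift;
          rem  = cyc & (((__u64)1 << pc->time_shift) - 1);

          return pc->time_zero + quot * pc->time_mult +
                 ((rem * pc->time_mult) >> pc->time_shift);
  }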

This patch set looks good to me after testing it on an Arm64 board.

Thanks,
Leo

> +             userpg->time_mask = rd->sched_clock_mask;
>  
>               /*
> -              * This isn't strictly correct, the ARM64 counter can be
> -              * 'short' and then we get funnies when it wraps. The correct
> -              * thing would be to extend the perf ABI with a cycle and mask
> -              * value, but because wrapping on ARM64 is very rare in
> -              * practise this 'works'.
> +              * Subtract the cycle base, such that software that
> +              * doesn't know about cap_user_time_short still 'works'
> +              * assuming no wraps.
>                */
>               userpg->time_zero -= (rd->epoch_cyc * rd->mult) >> rd->shift;
>  
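
For anyone wondering why this subtraction keeps legacy users working:
userspace that only knows cap_user_time_zero computes, roughly:

  /* Legacy reader, unaware of cap_user_time_short (sketch): */
  time = pc->time_zero + ((cyc * pc->time_mult) >> pc->time_shift);

and with time_zero pre-adjusted by (epoch_cyc * mult) >> shift above,
that works out to approximately (up to rounding in the shifts):

  epoch_ns + (((cyc - epoch_cyc) * mult) >> shift)

which is correct until the counter wraps; short-counter-aware tools
then use time_cycles/time_mask to handle the wrap.
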
> @@ -1214,4 +1215,5 @@ void arch_perf_update_userpage(struct pe
>        */
>       userpg->cap_user_time = 1;
>       userpg->cap_user_time_zero = 1;
> +     userpg->cap_user_time_short = 1;
>  }
> 
> 
