On Wed, 2017-04-05 at 23:36 -0700, Wanpeng Li wrote:

> This patch offsets the tick to avoid aligning all ticks, so that the 
> vtime sampling does not end up "in phase" with the jiffies 
> incrementing.
> 
> Reported-by: Luiz Capitulino <[email protected]>
> Suggested-by: Rik van Riel <[email protected]>
> Cc: Frederic Weisbecker <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Mike Galbraith <[email protected]>
> Cc: Luiz Capitulino <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>

Reviewed-by: Rik van Riel <[email protected]>

> +++ b/kernel/time/tick-sched.c
> @@ -1197,8 +1197,12 @@ void tick_setup_sched_timer(void)
>       /* Get the next period (per-CPU) */
>       hrtimer_set_expires(&ts->sched_timer, tick_init_jiffy_update());
>  
> -     /* Offset the tick to avert jiffies_lock contention. */
> -     if (sched_skew_tick) {
> +     /*
> +      * Offset the tick to avert jiffies_lock contention, and all ticks
> +      * alignment in order that the vtime sampling does not end up "in
> +      * phase" with the jiffies incrementing.
> +      */

I feel like part of the explanation is missing from this
comment, but I am not sure how to make it better without
making it way too long :)

> +     if (sched_skew_tick || tick_nohz_full_enabled()) {
>               u64 offset = ktime_to_ns(tick_period) >> 1;
>               do_div(offset, num_possible_cpus());
>               offset *= smp_processor_id();
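
For reference, here is a minimal user-space sketch (not the kernel code
itself) of how that arithmetic spreads the skew across CPUs. The tick
period, CPU count and CPU id are plain parameters here, with a 1000 Hz
tick (1 ms period) and 4 CPUs assumed purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical stand-in for the calculation in the hunk above:
     * half of a tick period is divided evenly among the possible CPUs,
     * and each CPU takes its own multiple of that slice as its skew. */
    static uint64_t tick_skew_ns(uint64_t tick_period_ns,
                                 unsigned int num_possible_cpus,
                                 unsigned int cpu_id)
    {
            uint64_t offset = tick_period_ns >> 1;  /* spread within half a period */

            offset /= num_possible_cpus;            /* per-CPU slice */
            return offset * cpu_id;                 /* this CPU's skew */
    }

    int main(void)
    {
            const uint64_t tick_period_ns = 1000000; /* assumed 1 ms tick (HZ=1000) */
            const unsigned int cpus = 4;             /* assumed CPU count */

            for (unsigned int cpu = 0; cpu < cpus; cpu++)
                    printf("cpu %u: skew %llu ns\n", cpu,
                           (unsigned long long)tick_skew_ns(tick_period_ns, cpus, cpu));
            return 0;
    }

With those assumptions the skews come out as 0, 125, 250 and 375 us, so
no two CPUs fire their tick at the same instant and none of them lines
up exactly with the jiffies update.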
