On Tue, Oct 8, 2019 at 10:32 AM Eric Dumazet <eduma...@google.com> wrote:
>
> Followup to commit dd2261ed45aa ("hrtimer: Protect lockless access
> to timer->base")
>
> lock_hrtimer_base() fetches timer->base without lock exclusion.
>
> The compiler is allowed to read timer->base twice (even if that would
> be dumb) and we could end up trying to lock migration_base and
> return &migration_base.
>
>   base = timer->base;
>   if (likely(base != &migration_base)) {
>
>        /* compiler reads timer->base again, and now (base == &migration_base) */
>
>        raw_spin_lock_irqsave(&base->cpu_base->lock, *flags);
>        if (likely(base == timer->base))
>             return base; /* == &migration_base ! */
>
> Similarly, the write sides should use WRITE_ONCE() to avoid
> store tearing.
>
> Signed-off-by: Eric Dumazet <eduma...@google.com>
> Cc: Julien Grall <julien.gr...@arm.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> ---
>  kernel/time/hrtimer.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 0d4dc241c0fb498036c91a571e65cb00f5d19ba6..65605530ee349c9682690c4fccb43aa9284d4287 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -164,7 +164,7 @@ struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
>         struct hrtimer_clock_base *base;
>
>         for (;;) {
> -               base = timer->base;
> +               base = READ_ONCE(timer->base);
>                 if (likely(base != &migration_base)) {
>                         raw_spin_lock_irqsave(&base->cpu_base->lock, *flags);
>                         if (likely(base == timer->base))
> @@ -244,7 +244,7 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
>                         return base;
>
>                 /* See the comment in lock_hrtimer_base() */
> -               timer->base = &migration_base;
> +               WRITE_ONCE(timer->base, &migration_base);
>                 raw_spin_unlock(&base->cpu_base->lock);
>                 raw_spin_lock(&new_base->cpu_base->lock);
>
> @@ -253,10 +253,10 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
>                         raw_spin_unlock(&new_base->cpu_base->lock);
>                         raw_spin_lock(&base->cpu_base->lock);
>                         new_cpu_base = this_cpu_base;
> -                       timer->base = base;
> +                       WRITE_ONCE(timer->base, base);
>                         goto again;
>                 }
> -               timer->base = new_base;
> +               WRITE_ONCE(timer->base, new_base);
>         } else {
>                 if (new_cpu_base != this_cpu_base &&
>                     hrtimer_check_target(timer, new_base)) {
> --
> 2.23.0.581.g78d2f28ef7-goog
>

Any news on this patch?

If more information is needed, let me know.

Maybe I need to point to:

commit b831275a3553c32091222ac619cfddd73a5553fb ("timers: Plug locking
race vs. timer migration")

Thanks
