On Tue, Apr 13 2021 at 21:36, Paul E. McKenney wrote:

Bah, hit send too quick.

> +     cpumask_clear(&cpus_ahead);
> +     cpumask_clear(&cpus_behind);
> +     preempt_disable();

Daft. 

> +     testcpu = smp_processor_id();
> +     pr_warn("Checking clocksource %s synchronization from CPU %d.\n", cs->name, testcpu);
> +     for_each_online_cpu(cpu) {
> +             if (cpu == testcpu)
> +                     continue;
> +             csnow_begin = cs->read(cs);
> +             smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
> +             csnow_end = cs->read(cs);

As this must run with interrupts enabled, bracketing the remote read with
two local reads is a pretty rough approximation, like measuring wind
speed with a wet thumb.

Wouldn't it be smarter to let the remote CPU do the watchdog dance and
take that result? I.e. split out more of the watchdog code so that the
nanoseconds delta against the watchdog can be obtained on that remote
CPU itself.

> +             delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
> +             if (delta < 0)
> +                     cpumask_set_cpu(cpu, &cpus_behind);
> +             delta = (csnow_end - csnow_mid) & cs->mask;
> +             if (delta < 0)
> +                     cpumask_set_cpu(cpu, &cpus_ahead);
> +             delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
> +             cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);

> +             if (firsttime || cs_nsec > cs_nsec_max)
> +                     cs_nsec_max = cs_nsec;
> +             if (firsttime || cs_nsec < cs_nsec_min)
> +                     cs_nsec_min = cs_nsec;
> +             firsttime = 0;

  int64_t cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;

and then the firsttime muck is not needed at all.

Thanks,

        tglx
