On Fri, Apr 02 2021 at 15:49, paulmck wrote:
> +static void clocksource_verify_percpu_wq(struct work_struct *unused)
> +{
> +	int cpu;
> +	struct clocksource *cs;
> +	int64_t cs_nsec;
> +	u64 csnow_begin;
> +	u64 csnow_end;
> +	u64 delta;
Please use reverse fir tree ordering and stick variables of the same
type together:

	u64 csnow_begin, csnow_end, delta;
	struct clocksource *cs;
	s64 cs_nsec;
	int cpu;

> +
> +	cs = smp_load_acquire(&clocksource_verify_work_cs); // pairs with release

Please don't use tail comments. They are a horrible distraction.

> +	if (WARN_ON_ONCE(!cs))
> +		return;
> +	pr_warn("Checking clocksource %s synchronization from CPU %d.\n",
> +		cs->name, smp_processor_id());
> +	cpumask_clear(&cpus_ahead);
> +	cpumask_clear(&cpus_behind);
> +	csnow_begin = cs->read(cs);

So this is invoked via work, and the actual clocksource change is done
via work too. Once the clocksource is no longer actively used for
timekeeping it can go away. What guarantees that this runs prior to the
clocksource change and that 'cs' stays valid throughout this function?

> +	queue_work(system_highpri_wq, &clocksource_verify_work);

This does not guarantee anything. So why does this need an extra work
function which is scheduled separately?

Thanks,

	tglx
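[ For illustration of the comment-style point above, a sketch of the
  same load with the comment on its own line. The wording of the
  comment and the location of the pairing smp_store_release() (in the
  caller that publishes clocksource_verify_work_cs) are assumptions,
  not taken from the patch: ]

	/* Pairs with the smp_store_release() publishing the clocksource. */
	cs = smp_load_acquire(&clocksource_verify_work_cs);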