On Mon, Apr 12, 2021 at 10:37:10PM +0200, Thomas Gleixner wrote:
> On Mon, Apr 12 2021 at 12:57, Paul E. McKenney wrote:
> > On Mon, Apr 12, 2021 at 08:54:03PM +0200, Thomas Gleixner wrote:
> >> > I will send a new series out later today, Pacific Time.
> >>
> >> Can you do me a favour and send it standalone and not as yet another
> >> reply to this existing thread maze. A trivial lore link to the previous
> >> version gives enough context.
> >
> > Will do!
> >
> > Of course, it turns out that lockdep also doesn't like waited-on
> > smp_call_function_single() invocations from timer handlers,
> > so I am currently looking at other options for dealing with that
> > potential use-after-free. I am starting to like the looks of "only set
> > CLOCK_SOURCE_VERIFY_PERCPU on statically allocated clocksource structures
> > and let KASAN enforce this restriction", but I have not quite given up
> > on making it more general.
>
> The simplest point is in the thread under the clocksource_mutex which
> prevents anything from vanishing under your feet.
And lockdep is -much- happier with the setup shown below, so thank you again!

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index f047c6cb056c..34dc38b6b923 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -519,6 +515,13 @@ static int __clocksource_watchdog_kthread(void)
 	unsigned long flags;
 	int select = 0;
 
+	/* Do any required per-CPU skew verification. */
+	list_for_each_entry(cs, &watchdog_list, wd_list) {
+		if ((cs->flags & (CLOCK_SOURCE_UNSTABLE | CLOCK_SOURCE_VERIFY_PERCPU)) ==
+		    (CLOCK_SOURCE_UNSTABLE | CLOCK_SOURCE_VERIFY_PERCPU))
+			clocksource_verify_percpu(cs);
+	}
+
 	spin_lock_irqsave(&watchdog_lock, flags);
 	list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
 		if (cs->flags & CLOCK_SOURCE_UNSTABLE) {