On Fri, Jan 09, 2026 at 04:02:02PM -0500, Steven Rostedt wrote:
> On Fri, 9 Jan 2026 15:21:19 -0500
> Mathieu Desnoyers <[email protected]> wrote:
>
> > * preempt disable/enable pair: 1.1 ns
> > * srcu-fast lock/unlock: 1.5 ns
> >
> > CONFIG_RCU_REF_SCALE_TEST=y
> > * migrate disable/enable pair: 3.0 ns
> > * calls to migrate disable/enable pair within noinline functions: 17.0 ns
> >
> > CONFIG_RCU_REF_SCALE_TEST=m
> > * migrate disable/enable pair: 22.0 ns
>
> OUCH! So migrate disable/enable has a much larger overhead when executed in
> a module than in the kernel? This means all spin_locks() in modules
> converted to mutexes in PREEMPT_RT are taking this hit!
Not so; the migrate_disable() for PREEMPT_RT is still in core code --
kernel/locking/spinlock_rt.c is very much not built as a module.
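
A minimal sketch of the structure (paraphrased from memory, not the exact
kernel source): on PREEMPT_RT a module's spin_lock() resolves to
rt_spin_lock(), which is built into the core kernel, so the
migrate_disable() call is a core-to-core call regardless of where the
lock's caller lives. Roughly:

	/* kernel/locking/spinlock_rt.c -- built-in, simplified sketch */
	void __sched rt_spin_lock(spinlock_t *lock)
	{
		spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
		rtlock_lock(&lock->lock);	/* acquire underlying rtmutex */
		rcu_read_lock();
		migrate_disable();		/* core-to-core call */
	}
	EXPORT_SYMBOL(rt_spin_lock);

So a module's spin_lock() pays one cross-object call into rt_spin_lock();
the 22.0 ns figure above is for module text calling
migrate_disable()/migrate_enable() directly (the refscale test built as a
module), which the RT lock path never does.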
