On 1/9/26 11:10 AM, Alexei Starovoitov wrote:
On Fri, Jan 9, 2026 at 6:45 AM Mathieu Desnoyers
<[email protected]> wrote:
On 2026-01-08 22:05, Steven Rostedt wrote:
From: "Paul E. McKenney" <[email protected]>
[...]

I disagree with many elements of the proposed approach.

On one hand, we have BPF wanting to hook into arbitrary tracepoints without
adding significant latency to PREEMPT_RT kernels.

On the other hand, we have high-speed tracers which execute very short
critical sections to serialize trace data into ring buffers.
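
As a hedged illustration of how short such a critical section typically is
(the demo_rb buffer and demo_trace_write() below are hypothetical names for
illustration only, not any real tracer's code):

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/string.h>

/* Hypothetical fixed-size per-CPU trace buffer, purely for illustration. */
struct demo_rb {
	unsigned int head;
	char data[4096];
};

static DEFINE_PER_CPU(struct demo_rb, demo_rb);

static void demo_trace_write(const void *payload, unsigned int len)
{
	struct demo_rb *rb;

	/*
	 * The whole critical section is a handful of instructions:
	 * pick the current CPU's buffer, reserve space, copy the payload.
	 */
	preempt_disable();
	rb = this_cpu_ptr(&demo_rb);
	if (rb->head + len <= sizeof(rb->data)) {
		memcpy(rb->data + rb->head, payload, len);
		rb->head += len;
	}
	preempt_enable();
}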

All of these users register with the tracepoint API.

We also have to consider that migrate_disable() is *not* cheap at all
compared to preempt_disable().
Looks like your complaint comes from a lack of engagement in kernel
development.
migrate_disable() _was_ not cheap.
Try benchmarking it now.
It's inlined; it adds only a fraction of extra overhead on top of preempt_disable().

The following are related patches to inline migrate_disable():

35561bab768977c9e05f1f1a9bc00134c85f3e28 arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c
88a90315a99a9120cd471bf681515cc77cd7cdb8 rcu: Replace preempt.h with sched.h in include/linux/rcupdate.h
378b7708194fff77c9020392067329931c3fcc04 sched: Make migrate_{en,dis}able() inline
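
To make the "benchmark it now" suggestion concrete, here is a minimal
sketch of a throwaway module that times tight loops of both pairs; the
iteration count and the md_bench_* names are arbitrary illustration, not
code from this thread or from any tree:

#include <linux/module.h>
#include <linux/ktime.h>
#include <linux/preempt.h>
#include <linux/sched.h>

static int __init md_bench_init(void)
{
	const int iters = 1000000;
	ktime_t t0, t1;
	int i;

	/* Time a tight loop of preempt_disable()/preempt_enable(). */
	t0 = ktime_get();
	for (i = 0; i < iters; i++) {
		preempt_disable();
		preempt_enable();
	}
	t1 = ktime_get();
	pr_info("preempt_disable/enable:  %lld ns for %d iterations\n",
		ktime_to_ns(ktime_sub(t1, t0)), iters);

	/* Same loop with migrate_disable()/migrate_enable(). */
	t0 = ktime_get();
	for (i = 0; i < iters; i++) {
		migrate_disable();
		migrate_enable();
	}
	t1 = ktime_get();
	pr_info("migrate_disable/enable: %lld ns for %d iterations\n",
		ktime_to_ns(ktime_sub(t1, t0)), iters);

	return 0;
}

static void __exit md_bench_exit(void)
{
}

module_init(md_bench_init);
module_exit(md_bench_exit);
MODULE_LICENSE("GPL");

Load it and compare the two pr_info() lines in dmesg.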

