On Thu, 8 Jan 2026 11:58:00 -0500 Steven Rostedt <[email protected]> wrote:
> On Thu, 8 Jan 2026 09:39:32 +0100
> Petr Tesarik <[email protected]> wrote:
> 
> > > > Or we simply change it to:
> > > >
> > > > static inline void
> > > 
> > > Actually, the above should be noinline, as it's in a slower path, and
> > > should not be adding logic into the cache of the fast path.
> > 
> > However, to be honest, I'm surprised this is considered slow path. My
> > use case is to record a few selected trace events with "trace-cmd
> > record", which spends most time polling trace_pipe_raw. Consequently,
> > there is almost always a pending waiter that requires a wakeup.
> > 
> > In short, irq_work_queue() is the hot path for me.
> > 
> > OTOH I don't mind making it noinline, because on recent Intel and AMD
> > systems, a function call (noinline) is often cheaper than an increase
> > in L1 cache footprint (caused by inlining). But I'm confused. I have
> > always thought most people use tracing the same way as I do.
> 
> The call to rb_wakeups() is a fast path, but the wakeup itself is a slow
> path. This is the case even when you have user space in a loop that is
> just waiting on data.
> 
> User space tool:
> 
> 	ring_buffer_wait() {
> 		wait_event_interruptible(.., rb_wait_cond(..));
> 	}
> 
> Writer:
> 
> 	rb_wakeups() {
> 		if (!full_hit())
> 			return;
> 	}
> 
> The full_hit() is the watermark check. If you look at the tracefs
> directory, you'll see a "buffer_percent" file, which is set to 50 by
> default. That means full_hit() will not return true until the ring
> buffer is around 50 percent full. This function is called thousands of
> times before the first wakeup happens.
> 
> Let's look at even a waiter that isn't using the buffer percent. This
> means it will be woken up on any event in the buffer.
> 
> 	rb_wakeups() {
> 		if (buffer->irq_work.waiters_pending) {
> 			buffer->irq_work.waiters_pending = false;
> 			/* irq_work_queue() supplies its own memory barriers */
> 			irq_work_queue(&buffer->irq_work.work);
> 
> So it clears the waiters_pending flag and wakes up the waiter. Now the
> waiter wakes up and starts reading the ring buffer. While the ring
> buffer has content, it will continue to read and doesn't block again
> until the ring buffer is empty. This means that thousands of events are
> being recorded with no waiters to wake up.
> 
> See why this is a slow path?

Thank you for the detailed explanation. So, yeah, most people use it
differently from me, generating trace events fast enough that the reader
does not consume the previous event before the next one arrives.

I have removed both "inline" and "noinline" in v2, leaving it at the
discretion of the compiler. If you believe it deserves a "noinline", feel
free to add it. FWIW, on x86-64 I didn't observe any measurable
difference in either latency or instruction cache footprint.

Petr T
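A minimal userspace sketch of the reader pattern discussed above (a
poll(2) loop on trace_pipe_raw, in the style of "trace-cmd record"). The
tracefs path, the choice of CPU 0, and the 4096-byte read size are
assumptions that may differ on a given system, and error handling is
stripped to the minimum:

	#include <fcntl.h>
	#include <poll.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path =
			"/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw";
		char page[4096];	/* assumes 4k sub-buffers */
		struct pollfd pfd;
		ssize_t n;

		pfd.fd = open(path, O_RDONLY | O_NONBLOCK);
		if (pfd.fd < 0) {
			perror(path);
			return 1;
		}
		pfd.events = POLLIN;

		for (;;) {
			/* Blocks until the writer's rb_wakeups() queues
			 * the irq_work that wakes this poll(). */
			if (poll(&pfd, 1, -1) < 0)
				break;
			/* Drain until empty; no further wakeups are
			 * needed while there is still data to read. */
			while ((n = read(pfd.fd, page, sizeof(page))) > 0)
				fwrite(page, 1, (size_t)n, stdout);
		}
		close(pfd.fd);
		return 0;
	}

With buffer_percent at its default of 50, the poll() above does not
return until the per-CPU buffer is roughly half full; and while the
inner read() loop is draining the buffer, the reader is not blocked, so
the writer's rb_wakeups() fails the waiters_pending test and stays on
the fast path.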

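The writer-side pattern quoted above (test a pending flag, clear it,
then hand off the expensive wakeup) can also be modeled in plain C11.
This is only an illustrative sketch with invented names (wakeups(),
wait_for_data()); a condition variable stands in for irq_work_queue(),
which in the kernel defers the actual wakeup out of the tracing context:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_bool waiters_pending;
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

	/* Writer side, called on every event: in the common case this
	 * is a single load and a not-taken branch, which is the whole
	 * cost of the fast path. */
	static void wakeups(void)
	{
		if (!atomic_load_explicit(&waiters_pending,
					  memory_order_acquire))
			return;			/* fast path */
		/* Slow path: clear the flag first so later events fall
		 * through the check above, then do the expensive wakeup
		 * (the kernel queues irq_work here instead). */
		atomic_store_explicit(&waiters_pending, false,
				      memory_order_release);
		pthread_mutex_lock(&lock);
		pthread_cond_broadcast(&cond);
		pthread_mutex_unlock(&lock);
	}

	/* Reader side: announce interest, then sleep until signaled. */
	static void wait_for_data(void)
	{
		pthread_mutex_lock(&lock);
		atomic_store(&waiters_pending, true);
		pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);
	}

Clearing the flag before the wakeup is what keeps the cost bounded: once
the reader is running, every subsequent event pays only the single
load-and-branch at the top of wakeups().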