From: Vineeth Pillai <[email protected]>

Replace trace_foo() with the new trace_call__foo() at sites that are
already guarded by trace_foo_enabled(), avoiding a redundant
static_branch_unlikely() re-evaluation inside the tracepoint.
trace_call__foo() calls the tracepoint callbacks directly without
evaluating the static branch again.
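As an illustration, the call-site pattern goes from two static-key
evaluations per hit to one. This is a simplified sketch with a
hypothetical tracepoint "foo"; the real trace_foo() is generated by the
tracepoint macros and does more than shown here:

	/* Before: trace_foo() re-checks the same static key internally. */
	if (trace_foo_enabled())
		trace_foo(arg);

	/*
	 * After: the _enabled() guard is the only static-key check;
	 * trace_call__foo() invokes the registered callbacks directly.
	 */
	if (trace_foo_enabled())
		trace_call__foo(arg);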
Original v2 series:
https://lore.kernel.org/linux-trace-kernel/[email protected]/

Parts of the original v2 series have already been merged in mainline.
This patch is being reposted as a follow-up cleanup for the remaining
unmerged pieces.

Suggested-by: Steven Rostedt <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Vineeth Pillai (Google) <[email protected]>
Assisted-by: Claude:claude-sonnet-4-6
---
 io_uring/io_uring.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index e612a66ee80e..1b657b714373 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -312,7 +312,7 @@ static __always_inline bool io_fill_cqe_req(struct io_ring_ctx *ctx,
 	}
 
 	if (trace_io_uring_complete_enabled())
-		trace_io_uring_complete(req->ctx, req, cqe);
+		trace_call__io_uring_complete(req->ctx, req, cqe);
 	return true;
 }
-- 
2.54.0
