On 5/15/26 09:01, Vineeth Pillai (Google) wrote:
From: Vineeth Pillai <[email protected]>
Replace trace_foo() with the new trace_call__foo() at call sites already
guarded by trace_foo_enabled(), avoiding a redundant
static_branch_unlikely() re-evaluation inside the tracepoint:
trace_call__foo() invokes the tracepoint callbacks directly without
consulting the static branch again.
Original v2 series:
https://lore.kernel.org/linux-trace-kernel/[email protected]/
Parts of the original v2 series have already been merged in mainline.
This patch is being reposted as a follow-up cleanup for the remaining
unmerged pieces.
Suggested-by: Steven Rostedt <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Vineeth Pillai (Google) <[email protected]>
Assisted-by: Claude:claude-sonnet-4-6
No concerns with this going through another tree.
Reviewed-by: Mario Limonciello <[email protected]>
---
drivers/cpufreq/amd-pstate.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 453084c67327..4722de25149b 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -368,7 +368,8 @@ static int amd_pstate_set_floor_perf(struct cpufreq_policy *policy, u8 perf)
out_trace:
if (trace_amd_pstate_cppc_req2_enabled())
- trace_amd_pstate_cppc_req2(cpudata->cpu, perf, changed, ret);
+ trace_call__amd_pstate_cppc_req2(cpudata->cpu, perf, changed,
+ ret);
return ret;
}