This hook allows architecture-specific code to be called right after perf_events' context switch but before the scheduler lock is released.
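For illustration, an architecture opts in by defining the macro in one of
its headers pulled in by kernel/sched/sched.h, overriding the default
no-op. A minimal sketch, assuming the __pqr_ctx_switch helper described
below (its actual definition is introduced elsewhere in this series, so
the exact form here is illustrative only):

	/* illustrative arch-side override, not the exact code in this series */
	void __pqr_ctx_switch(void);	/* assumed: writes the final value to PQR_ASSOC */

	#define finish_arch_pre_lock_switch() __pqr_ctx_switch()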
It will serve two uses in this patch series:

  1) Call CMT's cgroup context switch code, which updates the current
     RMID when no perf event is active (in continuous monitoring mode).

  2) Call __pqr_ctx_switch to perform the write of the final value to
     the slow PQR_ASSOC msr.

This hook is different from the one used by Intel CAT in the series
currently under review in LKML. The CAT series simply adds a call to
intel_rdt_sched_in in __switch_to (see "[PATCH v6 09/10] x86/intel_rdt:
Add scheduler hook"). This series proposes using
finish_arch_pre_lock_switch instead: for CMT, the integration with
perf_events requires the context switch of the intel rdt common code to
occur after perf's context switch and before the switch lock is
released, in order to perform (1) correctly.

Signed-off-by: David Carrillo-Cisneros <davi...@google.com>
---
 kernel/sched/core.c  | 1 +
 kernel/sched/sched.h | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 94732d1..2138ee6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2766,6 +2766,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	prev_state = prev->state;
 	vtime_task_switch(prev);
 	perf_event_task_sched_in(prev, current);
+	finish_arch_pre_lock_switch();
 	finish_lock_switch(rq, prev);
 	finish_arch_post_lock_switch();
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 055f935..0a0208e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1112,6 +1112,9 @@ static inline int task_on_rq_migrating(struct task_struct *p)
 #ifndef prepare_arch_switch
 # define prepare_arch_switch(next)	do { } while (0)
 #endif
+#ifndef finish_arch_pre_lock_switch
+# define finish_arch_pre_lock_switch()	do { } while (0)
+#endif
 #ifndef finish_arch_post_lock_switch
 # define finish_arch_post_lock_switch()	do { } while (0)
 #endif
-- 
2.8.0.rc3.226.g39d4020