On Thu, Nov 01, 2018 at 06:04:01PM +0800, Wei Wang wrote:
> Add x86_perf_mask_perf_counters to reserve counters from the host perf
> subsystem. The masked counters will not be assigned to any host perf
> events. This can be used by the hypervisor to reserve perf counters for
> a guest to use.
> 
> This function is currently supported on Intel CPUs only, but it is placed
> in the x86 perf core because counter assignment is implemented there, and
> we need to re-enable the pmu (defined in the x86 perf core) when a counter
> to be masked is already in use by the host.
> 
> Signed-off-by: Wei Wang <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Andi Kleen <[email protected]>
> Cc: Paolo Bonzini <[email protected]>
> ---
>  arch/x86/events/core.c            | 37 +++++++++++++++++++++++++++++++++++++
>  arch/x86/include/asm/perf_event.h |  1 +
>  2 files changed, 38 insertions(+)
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 106911b..e73135a 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -716,6 +716,7 @@ struct perf_sched {
>  static void perf_sched_init(struct perf_sched *sched, struct event_constraint **constraints,
>                           int num, int wmin, int wmax, int gpmax)
>  {
> +     struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>       int idx;
>  
>       memset(sched, 0, sizeof(*sched));
> @@ -723,6 +724,9 @@ static void perf_sched_init(struct perf_sched *sched, struct event_constraint **
>       sched->max_weight       = wmax;
>       sched->max_gp           = gpmax;
>       sched->constraints      = constraints;
> +#ifdef CONFIG_CPU_SUP_INTEL
> +     sched->state.used[0]    = cpuc->intel_ctrl_guest_mask;
> +#endif

NAK.  This completely undermines the whole purpose of event scheduling.

