Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-12-04 Thread Boris Ostrovsky

On 12/04/2014 03:51 AM, Jan Beulich wrote:

On 03.12.14 at 21:13,  wrote:

On 11/27/2014 03:57 AM, Jan Beulich wrote:

On 26.11.14 at 15:32,  wrote:

On 11/25/2014 08:49 AM, Jan Beulich wrote:

On 17.11.14 at 00:07,  wrote:

@@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
switch ( vendor )
{
case X86_VENDOR_AMD:
-if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( svm_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
break;

case X86_VENDOR_INTEL:

-if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( vmx_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
break;

So this turns off the vPMU globally upon failure of initializing
some random vCPU. Why is that? I see this was the case even
before your entire series, but shouldn't this be fixed _before_
enhancing the whole thing to support PV/PVH?

Yes, that's probably too strong. Do you want to fix this as an early
patch (before PV(H) support gets in)? I'd rather do it in the patch that
moves things into initcalls.

Yes, I think this should be fixed in a prereq patch, thus allowing it
to be easily backported if so desired.

I started to make this change and then I realized that the next patch
(12/21) is actually already taking care of this problem: most of the
*_vpmu_initialise() code will be executed as initcalls during host boot,
and if any of those fail then we do want to disable VPMU globally (those
failures would not be vCPU-specific).

Funny that you say that - it was actually while reviewing that next
patch that I made that observation: svm_vpmu_initialise() gets a
-ENOMEM return _added_ there, for example, which is contrary to
what I asked for above.



Oh, *that* initialization code (I was looking at vpmu_init()). Yes, VPMU 
should not be turned off there.


-boris
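A minimal sketch of the per-vCPU shape being discussed above - failure of
one vCPU's initialisation is reported to the caller instead of flipping the
global vpmu_mode. The switch layout follows the quoted hunk; the int return
and the warning are assumptions, not the actual prereq patch:

/* Sketch only: a per-vCPU failure stays per-vCPU.  vpmu_mode is left
 * alone here; only boot-time (initcall) failures should clear it. */
int vpmu_initialise(struct vcpu *v)
{
    uint8_t vendor = current_cpu_data.x86_vendor;
    int rc;

    switch ( vendor )
    {
    case X86_VENDOR_AMD:
        rc = svm_vpmu_initialise(v);
        break;

    case X86_VENDOR_INTEL:
        rc = vmx_vpmu_initialise(v);
        break;

    default:
        rc = -EINVAL;
        break;
    }

    if ( rc )
        printk(XENLOG_G_WARNING "VPMU: initialisation failed for d%dv%d\n",
               v->domain->domain_id, v->vcpu_id);

    return rc;   /* the caller decides what to do with this one vCPU */
}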




Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-12-04 Thread Jan Beulich
>>> On 03.12.14 at 21:13,  wrote:
> On 11/27/2014 03:57 AM, Jan Beulich wrote:
> On 26.11.14 at 15:32,  wrote:
>>> On 11/25/2014 08:49 AM, Jan Beulich wrote:
>>> On 17.11.14 at 00:07,  wrote:
> @@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
>switch ( vendor )
>{
>case X86_VENDOR_AMD:
> -if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
> -opt_vpmu_enabled = 0;
> +if ( svm_vpmu_initialise(v) != 0 )
> +vpmu_mode = XENPMU_MODE_OFF;
>break;
>
>case X86_VENDOR_INTEL:
> -if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
> -opt_vpmu_enabled = 0;
> +if ( vmx_vpmu_initialise(v) != 0 )
> +vpmu_mode = XENPMU_MODE_OFF;
>break;
 So this turns off the vPMU globally upon failure of initializing
 some random vCPU. Why is that? I see this was the case even
 before your entire series, but shouldn't this be fixed _before_
 enhancing the whole thing to support PV/PVH?
>>> Yes, that's probably too strong. Do you want to fix this as an early
>>> patch (before PV(H) support gets in)? I'd rather do it in the patch that
>>> moves things into initcalls.
>> Yes, I think this should be fixed in a prereq patch, thus allowing it
>> to be easily backported if so desired.
> 
> I started to make this change and then I realized that the next patch
> (12/21) is actually already taking care of this problem: most of the
> *_vpmu_initialise() code will be executed as initcalls during host boot,
> and if any of those fail then we do want to disable VPMU globally (those
> failures would not be vCPU-specific).

Funny that you say that - it was actually while reviewing that next
patch that I made that observation: svm_vpmu_initialise() gets a
-ENOMEM return _added_ there, for example, which is contrary to
what I asked for above.

Jan




Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-12-03 Thread Boris Ostrovsky

On 11/27/2014 03:57 AM, Jan Beulich wrote:

On 26.11.14 at 15:32,  wrote:

On 11/25/2014 08:49 AM, Jan Beulich wrote:

On 17.11.14 at 00:07,  wrote:

@@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
   switch ( vendor )
   {
   case X86_VENDOR_AMD:
-if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( svm_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
   break;
   
   case X86_VENDOR_INTEL:

-if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( vmx_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
   break;

So this turns off the vPMU globally upon failure of initializing
some random vCPU. Why is that? I see this was the case even
before your entire series, but shouldn't this be fixed _before_
enhancing the whole thing to support PV/PVH?

Yes, that's probably too strong. Do you want to fix this as an early
patch (before PV(H) support gets in)? I'd rather do it in the patch that
moves things into initcalls.

Yes, I think this should be fixed in a prereq patch, thus allowing it
to be easily backported if so desired.


I started to make this change and then I realized that the next patch
(12/21) is actually already taking care of this problem: most of the
*_vpmu_initialise() code will be executed as initcalls during host boot,
and if any of those fail then we do want to disable VPMU globally (those
failures would not be vCPU-specific).


-boris






Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-27 Thread Jan Beulich
>>> On 26.11.14 at 15:32,  wrote:

> On 11/25/2014 08:49 AM, Jan Beulich wrote:
> On 17.11.14 at 00:07,  wrote:
>>> @@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
>>>   switch ( vendor )
>>>   {
>>>   case X86_VENDOR_AMD:
>>> -if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>> -opt_vpmu_enabled = 0;
>>> +if ( svm_vpmu_initialise(v) != 0 )
>>> +vpmu_mode = XENPMU_MODE_OFF;
>>>   break;
>>>   
>>>   case X86_VENDOR_INTEL:
>>> -if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>> -opt_vpmu_enabled = 0;
>>> +if ( vmx_vpmu_initialise(v) != 0 )
>>> +vpmu_mode = XENPMU_MODE_OFF;
>>>   break;
>> So this turns off the vPMU globally upon failure of initializing
>> some random vCPU. Why is that? I see this was the case even
>> before your entire series, but shouldn't this be fixed _before_
>> enhancing the whole thing to support PV/PVH?
> 
> Yes, that's probably too strong. Do you want to fix this as an early
> patch (before PV(H) support gets in)? I'd rather do it in the patch that
> moves things into initcalls.

Yes, I think this should be fixed in a prereq patch, thus allowing it
to be easily backported if so desired.

Jan




Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-27 Thread Jan Beulich
>>> On 26.11.14 at 15:26,  wrote:

> On 11/25/2014 08:36 AM, Jan Beulich wrote:
>>
>>> +static long vpmu_sched_checkin(void *arg)
>>> +{
>>> +int cpu = cpumask_next(smp_processor_id(), &cpu_online_map);
>> unsigned int.
>>
>>> +int ret;
>>> +
>>> +/* Mode change shouldn't take more than a few (say, 5) seconds. */
>>> +if ( NOW() > vpmu_start_ctxt_switch + SECONDS(5) )
>>> +{
>>> +ret = -EBUSY;
>>> +goto fail;
>>> +}
>> So what does this guard against? Holding xenpmu_mode_lock for
>> 5 seconds is not a problem, plus I don't think you actually need a
>> lock at all. Just using a global, atomically updated flag ought to be
>> sufficient (the way you use the lock is really nothing else afaict).
> 
> I didn't put it here because of the lock. I just wanted to terminate
> this operation (mode change) if it takes an unreasonable amount of time.
> And I thought 5 seconds would be unreasonable.

But the question you need to ask yourself is _why_ it could be
taking that long. Since all you do is repeatedly schedule a
tasklet, it taking arbitrarily long can only be the effect of (a) a
huge system (extremely many CPUs) or (b) a hypervisor bug.
Neither of which is a reason for an arbitrary timeout like the one
you put in.

>>> +{
>>> +int ret;
>>> +
>>> +vpmu_start_ctxt_switch = NOW();
>>> +
>>> +ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
>>> +vpmu_sched_checkin, (void *)old_mode);
>>> +if ( ret )
>>> +vpmu_start_ctxt_switch = 0;
>>> +
>>> +return ret;
>>> +}
>> I don't think it is guaranteed (independent of implementation details
>> of continue_hypercall_on_cpu()) that this way you will have gone through
>> the context switch path once on each CPU if
>> cpumask_first(&cpu_online_map) == smp_processor_id() here. In
>> particular it looks like there may be a problem calling vcpu_sleep_sync()
>> in the tasklet handler when v == current (which would be the case
>> if you started on the "correct" CPU and the tasklet softirq got
>> handled before the scheduler one). I think you instead want to use
>> cpumask_cycle() here and above, and finish the whole operation
>> once you're back on the CPU you started on (the single-pCPU case
>> may need some extra consideration).
> 
> So your concern is only about the initiating CPU? I can do a
> force_save() on it, so I don't really need a context switch there. This
> will take care of the single-pCPU case too.

Good.

>> I realize that you simply follow what microcode_update() does, but
>> I think we should rather correct that one than copy the latent issue
>> it has elsewhere.
>>
>>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>> +{
>>> +int ret;
>>> +struct xen_pmu_params pmu_params;
>>> +
>>> +ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
>>> +if ( ret )
>>> +return ret;
>>> +
>>> +switch ( op )
>>> +{
>>> +case XENPMU_mode_set:
>>> +{
>>> +uint64_t old_mode;
>> Irrespective of the earlier comments I don't see why this isn't just
>> unsigned int (or typeof(vpmu_mode)).
> 
> This was because vpmu_force_context_switch() takes uint64_t as an
> argument. Now that it will become unsigned long this should too, no?
> Yes, the compiler will promote it if it is an int, but I thought this
> would look cleaner.

Imo "cleaner" is when the variable is typed according to the value(s)
it is to hold, not according to the parameter types of functions it's
going to be passed to. And with operations on 32-bit types
(statistically) producing slightly smaller code, one could also argue
for using the smaller of the maximum width data source and the
maximum width consumer.
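A sketch of this rule in practice - assumed context, not code from the
patch (in particular, pmu_params.val's use here is inferred from the
quoted handler):

/* Type the local by the values it holds: vpmu_mode only ever carries
 * the small XENPMU_MODE_* constants, so 32 bits are plenty even if a
 * callee declares a wider parameter - promotion at the call site is
 * free, and 32-bit operations tend to encode slightly smaller. */
static int vpmu_set_mode(uint64_t requested)
{
    unsigned int old_mode = vpmu_mode;

    if ( requested == old_mode )
        return 0;

    return vpmu_force_context_switch(old_mode);  /* widened as needed */
}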

Jan




Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-26 Thread Boris Ostrovsky


On 11/25/2014 08:49 AM, Jan Beulich wrote:

On 17.11.14 at 00:07,  wrote:

@@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
  switch ( vendor )
  {
  case X86_VENDOR_AMD:
-if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( svm_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
  break;
  
  case X86_VENDOR_INTEL:

-if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-opt_vpmu_enabled = 0;
+if ( vmx_vpmu_initialise(v) != 0 )
+vpmu_mode = XENPMU_MODE_OFF;
  break;

So this turns off the vPMU globally upon failure of initializing
some random vCPU. Why is that? I see this was the case even
before your entire series, but shouldn't this be fixed _before_
enhancing the whole thing to support PV/PVH?


Yes, that's probably too strong. Do you want to fix this as an early
patch (before PV(H) support gets in)? I'd rather do it in the patch that
moves things into initcalls.


-boris





Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-26 Thread Boris Ostrovsky


On 11/25/2014 08:36 AM, Jan Beulich wrote:



+static long vpmu_sched_checkin(void *arg)
+{
+int cpu = cpumask_next(smp_processor_id(), &cpu_online_map);

unsigned int.


+int ret;
+
+/* Mode change shouldn't take more than a few (say, 5) seconds. */
+if ( NOW() > vpmu_start_ctxt_switch + SECONDS(5) )
+{
+ret = -EBUSY;
+goto fail;
+}

So what does this guard against? Holding xenpmu_mode_lock for
5 seconds is not a problem, plus I don't think you actually need a
lock at all. Just using a global, atomically updated flag ought to be
sufficient (the way you use the lock is really nothing else afaict).


I didn't put it here because of the lock. I just wanted to terminate
this operation (mode change) if it takes an unreasonable amount of time.
And I thought 5 seconds would be unreasonable.

+{
+int ret;
+
+vpmu_start_ctxt_switch = NOW();
+
+ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
+vpmu_sched_checkin, (void *)old_mode);
+if ( ret )
+vpmu_start_ctxt_switch = 0;
+
+return ret;
+}

I don't think it is guaranteed (independent of implementation details
of continue_hypercall_on_cpu()) that this way you will have gone through
the context switch path once on each CPU if
cpumask_first(&cpu_online_map) == smp_processor_id() here. In
particular it looks like there may be a problem calling vcpu_sleep_sync()
in the tasklet handler when v == current (which would be the case
if you started on the "correct" CPU and the tasklet softirq got
handled before the scheduler one). I think you instead want to use
cpumask_cycle() here and above, and finish the whole operation
once you're back on the CPU you started on (the single-pCPU case
may need some extra consideration).


So your concern is only about the initiating CPU? I can do a
force_save() on it, so I don't really need a context switch there. This
will take care of the single-pCPU case too.

I realize that you simply follow what microcode_update() does, but
I think we should rather correct that one than copy the latent issue
it has elsewhere.


+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+int ret;
+struct xen_pmu_params pmu_params;
+
+ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
+if ( ret )
+return ret;
+
+switch ( op )
+{
+case XENPMU_mode_set:
+{
+uint64_t old_mode;

Irrespective of the earlier comments I don't see why this isn't just
unsigned int (or typeof(vpmu_mode)).


This was because vpmu_force_context_switch() takes uint64_t as an
argument. Now that it will become unsigned long this should too, no?
Yes, the compiler will promote it if it is an int, but I thought this
would look cleaner.



-boris



Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-25 Thread Jan Beulich
>>> On 17.11.14 at 00:07,  wrote:
> @@ -244,19 +256,19 @@ void vpmu_initialise(struct vcpu *v)
>  switch ( vendor )
>  {
>  case X86_VENDOR_AMD:
> -if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
> -opt_vpmu_enabled = 0;
> +if ( svm_vpmu_initialise(v) != 0 )
> +vpmu_mode = XENPMU_MODE_OFF;
>  break;
>  
>  case X86_VENDOR_INTEL:
> -if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
> -opt_vpmu_enabled = 0;
> +if ( vmx_vpmu_initialise(v) != 0 )
> +vpmu_mode = XENPMU_MODE_OFF;
>  break;

So this turns off the vPMU globally upon failure of initializing
some random vCPU. Why is that? I see this was the case even
before your entire series, but shouldn't this be fixed _before_
enhancing the whole thing to support PV/PVH?

Jan




Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-25 Thread Jan Beulich
>>> On 17.11.14 at 00:07,  wrote:
> @@ -278,3 +290,139 @@ void vpmu_dump(struct vcpu *v)
>  vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
>  }
>  
> +static s_time_t vpmu_start_ctxt_switch;

Blank line here please.

> +static long vpmu_sched_checkin(void *arg)
> +{
> +int cpu = cpumask_next(smp_processor_id(), &cpu_online_map);

unsigned int.

> +int ret;
> +
> +/* Mode change shouldn't take more than a few (say, 5) seconds. */
> +if ( NOW() > vpmu_start_ctxt_switch + SECONDS(5) )
> +{
> +ret = -EBUSY;
> +goto fail;
> +}

So what does this guard against? Holding xenpmu_mode_lock for
5 seconds is not a problem, plus I don't think you actually need a
lock at all. Just using a global, atomically updated flag ought to be
sufficient (the way you use the lock is really nothing else afaict).

> +
> +if ( cpu < nr_cpu_ids )
> +{
> +ret = continue_hypercall_on_cpu(cpu, vpmu_sched_checkin, arg);
> +if ( ret )
> +goto fail;
> +}
> +else
> +vpmu_start_ctxt_switch = 0;
> +
> +return 0;
> +
> + fail:
> +vpmu_mode = (uint64_t)arg;

Even if we don't support x86-32 anymore, please avoid giving bad
examples: Casts between pointers and integers should always use
(unsigned) long on the integer side.

> +vpmu_start_ctxt_switch = 0;
> +return ret;
> +}
> +
> +static int vpmu_force_context_switch(uint64_t old_mode)

Same comment as above regarding the type.
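The cast style being asked for, sketched against the quoted lines (the
fix itself is hypothetical, not part of the posted patch):

/* Round-trip the integer through unsigned long on both sides of the
 * void * argument instead of casting to/from uint64_t directly. */
ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
                                vpmu_sched_checkin,
                                (void *)(unsigned long)old_mode);
/* ... and on the failure path: */
vpmu_mode = (unsigned long)arg;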

> +{
> +int ret;
> +
> +vpmu_start_ctxt_switch = NOW();
> +
> +ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
> +vpmu_sched_checkin, (void *)old_mode);
> +if ( ret )
> +vpmu_start_ctxt_switch = 0;
> +
> +return ret;
> +}

I don't think it is guaranteed (independent of implementation details
of continue_hypercall_on_cpu()) that this way you will have gone through
the context switch path once on each CPU if
cpumask_first(&cpu_online_map) == smp_processor_id() here. In
particular it looks like there may be a problem calling vcpu_sleep_sync()
in the tasklet handler when v == current (which would be the case
if you started on the "correct" CPU and the tasklet softirq got
handled before the scheduler one). I think you instead want to use
cpumask_cycle() here and above, and finish the whole operation
once you're back on the CPU you started on (the single-pCPU case
may need some extra consideration).

I realize that you simply follow what microcode_update() does, but
I think we should rather correct that one than copy the latent issue
it has elsewhere.
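A sketch of the cpumask_cycle() approach, folding in the force_save()
idea for the initiating CPU from the follow-up discussion above;
cpumask_cycle() and continue_hypercall_on_cpu() are real Xen helpers,
the rest of the shape is assumed rather than the eventual patch:

/* Sketch: walk the online CPUs cyclically and finish once we are
 * back where we started, so every CPU is visited exactly once and
 * the single-pCPU case needs no special handling. */
static unsigned int vpmu_init_cpu;   /* CPU the operation started on */

static long vpmu_sched_checkin(void *arg)
{
    unsigned int cpu = cpumask_cycle(smp_processor_id(), &cpu_online_map);

    if ( cpu == vpmu_init_cpu )
        return 0;   /* full circle - force_save() the local vCPU here */

    return continue_hypercall_on_cpu(cpu, vpmu_sched_checkin, arg);
}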

> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
> +{
> +int ret;
> +struct xen_pmu_params pmu_params;
> +
> +ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
> +if ( ret )
> +return ret;
> +
> +switch ( op )
> +{
> +case XENPMU_mode_set:
> +{
> +uint64_t old_mode;

Irrespective of the earlier comments I don't see why this isn't just
unsigned int (or typeof(vpmu_mode)).

Jan




[Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags

2014-11-16 Thread Boris Ostrovsky
Add runtime interface for setting PMU mode and flags. Three main modes are
provided:
* XENPMU_MODE_OFF:  PMU is not virtualized
* XENPMU_MODE_SELF: Guests can access PMU MSRs and receive PMU interrupts.
* XENPMU_MODE_HV: Same as XENPMU_MODE_SELF for non-privileged guests; dom0
  can profile itself and the hypervisor.

Note that PMU modes are different from what can be provided at Xen's boot line
with the 'vpmu' argument. An 'off' (or '0') value is equivalent to XENPMU_MODE_OFF.
Any other value, on the other hand, will cause VPMU mode to be set to
XENPMU_MODE_SELF during boot.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.
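For illustration, a hypothetical dom0-side caller of the new hypercall.
The field and constant names follow the public header added by this
series as I read it, so treat them as assumptions; the wrapper itself is
not part of the patch:

/* Hypothetical toolstack-side sketch, not from the patch. */
struct xen_pmu_params p = {
    .version.maj = XENPMU_VER_MAJ,
    .version.min = XENPMU_VER_MIN,
    .val = XENPMU_MODE_SELF,        /* guests may profile themselves */
};
int rc = HYPERVISOR_xenpmu_op(XENPMU_mode_set, &p);
/* -EBUSY here would mean the switch raced with another mode change */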

Signed-off-by: Boris Ostrovsky 
Acked-by: Kevin Tian 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Dietmar Hahn 
Tested-by: Dietmar Hahn 
---
 tools/flask/policy/policy/modules/xen/xen.te |   3 +
 xen/arch/x86/domain.c|   6 +-
 xen/arch/x86/hvm/svm/vpmu.c  |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c|  10 +-
 xen/arch/x86/hvm/vpmu.c  | 164 +--
 xen/arch/x86/x86_64/compat/entry.S   |   4 +
 xen/arch/x86/x86_64/entry.S  |   4 +
 xen/include/asm-x86/hvm/vpmu.h   |  27 +++--
 xen/include/public/pmu.h |  45 
 xen/include/public/xen.h |   1 +
 xen/include/xen/hypercall.h  |   4 +
 xen/include/xlat.lst |   1 +
 xen/include/xsm/dummy.h  |  15 +++
 xen/include/xsm/xsm.h|   6 +
 xen/xsm/dummy.c  |   1 +
 xen/xsm/flask/hooks.c|  18 +++
 xen/xsm/flask/policy/access_vectors  |   2 +
 17 files changed, 288 insertions(+), 27 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index d214470..ae7bf3c 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -68,6 +68,9 @@ allow dom0_t xen_t:xen2 {
 resource_op
 psr_cmt_op
 };
+allow dom0_t xen_t:xen2 {
+pmu_ctrl
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ae0a344..da5bdf4 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1544,7 +1544,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 if ( is_hvm_vcpu(prev) )
 {
 if (prev != next)
-vpmu_save(prev);
+vpmu_switch_from(prev, next);
 
 if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
 pt_save_timer(prev);
@@ -1587,9 +1587,9 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
!is_hardware_domain(next->domain));
 }
 
-if (is_hvm_vcpu(next) && (prev != next) )
+if ( is_hvm_vcpu(prev) && (prev != next) )
 /* Must be done with interrupts enabled */
-vpmu_load(next);
+vpmu_switch_to(prev, next);
 
 context_saved(prev);
 
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 0d30b37..61d56d5 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -478,14 +478,14 @@ struct arch_vpmu_ops amd_vpmu_ops = {
 .arch_vpmu_dump = amd_vpmu_dump
 };
 
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int svm_vpmu_initialise(struct vcpu *v)
 {
 struct vpmu_struct *vpmu = vcpu_vpmu(v);
 uint8_t family = current_cpu_data.x86;
 int ret = 0;
 
 /* vpmu enabled? */
-if ( !vpmu_flags )
+if ( vpmu_mode == XENPMU_MODE_OFF )
 return 0;
 
 switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index a6cca38..23e4899 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -708,13 +708,13 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
 return 1;
 }
 
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+static int core2_vpmu_initialise(struct vcpu *v)
 {
 struct vpmu_struct *vpmu = vcpu_vpmu(v);
 u64 msr_content;
 static bool_t ds_warned;
 
-if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
 goto func_out;
 /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
 while ( boot_cpu_has(X86_FEATURE_DS) )
@@ -829,7 +829,7 @@ struct arch_vpmu_ops core2_no_vpmu_ops = {
 .do_cpuid = core2_no_vpmu_do_cpuid,
 };
 
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int vmx_vpmu_initialise(struct vcpu *v)
 {
 struct vpmu_struct *vpmu = vcpu_vpmu(v);
 uint8_t family = current_cpu_data.x86;
@@ -837,7 +837,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
 int ret = 0;