Changes in v19:
* Do not allow changing mode to/from OFF/ALL while guests are
running. This significantly simplifies the code by eliminating the
large number of corner cases that I had to deal with. Most of the
changes are in patch #5. This also makes patch #4 from the last
version unnecessary
* Defer NMI support (drop patch#14 from last version)
* Make patch#15 from last series be patch#1 (vpmu init cleanup)
* Other changes are listed per patch
Changes in v18:
* Return 1 (i.e. "handled") in vpmu_do_interrupt() if PMU_CACHED is
set. This is needed since we can get an interrupt while this flag is
set on AMD processors when multiple counters are in use (**) (AMD
processors don't mask LVTPC when a PMC interrupt happens and so there
is a window in vpmu_do_interrupt() until it sets the mask
bit). Patch #14
* Unload both current and last_vcpu (if different) vpmu and clear
this_cpu(last_vcpu) in vpmu_unload_all. Patch #5
* Make major version check for certain xenpmu_ops. Patch #5
* Make xenpmu_op()'s first argument unsigned. Patch #5
* Don't use format specifier for __stringify(). Patch #6
* Don't print generic error in vpmu_init(). Patch #6
* Don't test for VPMU existence in vpmu_initialise(). New patch #15
* Added vpmu_disabled flag to make sure VPMU doesn't get reenabled from
dom0 (for example when watchdog is active). Patch #5
* Updated tags on some patches to better reflect latest reviewed status
(**) While testing this I discovered that AMD VPMU is quite broken for
HVM: when multiple counters are in use, Linux dom0 often gets
unexpected NMIs. This may have something to do with what I mentioned
in the first bullet. However, this doesn't appear to be related to
this patch series (or earlier VPMU patches): I can reproduce it
all the way back to 4.1
Changes in v17:
* Disable VPMU when unknown CPU vendor is detected (patch #2)
* Remove unnecessary vendor tests in vendor-specific init routines (patch #14)
* Remember the first CPU that starts a mode change and use it to stop the
cycle (patch #13)
* If vpmu ops is not present, return 0 as the value for a VPMU MSR read
(as opposed to returning an error, as was the case in the previous
version of this patch) (patch #18)
* Slightly change vpmu_do_msr() logic as a result of this change (patch #20)
* stringify VPMU version (patch #14)
* Use 'CS > 1' to mark sample as PMU_SAMPLE_USER (patch #19)
Changes in v16:
* Many changes in VPMU mode patch (#13):
* Replaced arguments to some vpmu routines (vcpu -> vpmu). New patch (#12)
* Added vpmu_unload vpmu op to completely unload vpmu data (e.g. clear
MSR bitmaps). This routine may be called in context switch
(vpmu_switch_to()).
* Added vmx_write_guest_msr_vcpu() interface to write MSRs of non-current VCPU
* Use cpumask_cycle instead of cpumask_next
* Dropped timeout error
* Adjusted types of mode variables
* Don't allow oprofile to allocate its context on MSR access if a VPMU context
has already been allocated (which may happen when VPMU mode was set to off
while the guest was running)
* vpmu_initialise() no longer turns off VPMU globally on failure. New patch (#2)
* vpmu_do_msr() will return 1 (failure) if vpmu_ops are not set. This is done
to prevent PV guests that are not VPMU-enabled from wrongly assuming that
they have access to counters (Linux's check_hw_exists() will make this
assumption) (patch #18)
* Add cpl field to the shared structure that will be passed for HVM guests'
samples (instead of the PMU_SAMPLE_USER flag). Add PMU_SAMPLE_PV flag to mark
whose sample is passed up. (Patches #10, #19, #22)
Changes in v15:
* Rewrote vpmu_force_context_switch() to use continue_hypercall_on_cpu()
* Added vpmu_init initcall that will call vendor-specific init routines
* Added a lock to vpmu_struct to serialize pmu_init()/pmu_finish()
* Use SS instead of CS for DPL (instead of RPL)
* Don't take lock for XENPMU_mode_get
* Make vpmu_mode/features an unsigned int (from uint64_t)
* Adjusted pvh_hypercall64_table[] order
* Replaced address range check [XEN_VIRT_START..XEN_VIRT_END] with guest_mode()
* A few style cleanups
Changes in v14:
* Moved struct xen_pmu_regs to pmu.h
* Moved CHECK_pmu_* to an earlier patch (when structures are first introduced)
* Added PMU_SAMPLE_REAL flag to indicate whether the sample was taken in real
mode
* Simplified slightly setting rules for xenpmu_data flags
* Rewrote vpmu_force_context_switch() to again use continuations. (Returning
EAGAIN to the user would mean that VPMU mode may get into an inconsistent
state across processors, and dealing with that is more complicated than I'd
like.)
* Fixed msraddr_to_bitpos() and converted it into an inline
* Replaced address range check in vpmu_do_interrupt() with guest_mode()
* No error returns from __initcall
* Rebased on top of recent VPMU changes
* Various cleanups
Changes in v13:
* Rearranged data in xenpf_symdata to eliminate a hole (no change in
structure size)
* Removed unnecessary zeroing of last character in name string during
symbol re