On 2020/5/19 19:03, Peter Zijlstra wrote:
> On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
>> @@ -6698,6 +6698,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>
>> 	if (vcpu_to_pmu(vcpu)->version)
>> 		atomic_switch_perf_msrs(vmx);
>> +
>> 	atomic_switch_umwait_control_msr(vmx);
On 2020/5/19 19:01, Peter Zijlstra wrote:
> On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
>> +	struct perf_event_attr attr = {
>> +		.type = PERF_TYPE_RAW,
>> +		.size = sizeof(attr),
>> +		.pinned = true,
>> +		.exclude_host = true,
>> +		.config = INTEL_FIXED_VLBR_EVENT,
On 2020/5/19 19:00, Peter Zijlstra wrote:
> On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
>> +static inline bool event_is_oncpu(struct perf_event *event)
>> +{
>> +	return event && event->oncpu != -1;
>> +}
>> +/*
>> + * It's safe to access LBR msrs from guest when they have not
>> + * been passthrough since the host would help res
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
> @@ -6698,6 +6698,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
> if (vcpu_to_pmu(vcpu)->version)
> atomic_switch_perf_msrs(vmx);
> +
> atomic_switch_umwait_control_msr(vmx);
>
> if (enable_
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
> + struct perf_event_attr attr = {
> + .type = PERF_TYPE_RAW,
> + .size = sizeof(attr),
> + .pinned = true,
> + .exclude_host = true,
> + .config = INTEL_FIXED_VLBR_EVENT,
> +
On Thu, May 14, 2020 at 04:30:51PM +0800, Like Xu wrote:
> +static inline bool event_is_oncpu(struct perf_event *event)
> +{
> + return event && event->oncpu != -1;
> +}
> +/*
> + * It's safe to access LBR msrs from guest when they have not
> + * been passthrough since the host would help res
VMX transitions are much more frequent than vcpu switches, and saving/
restoring tens of LBR MSRs (e.g. 32 LBR record entries) on every
transition would add too much overhead to the transition itself, which is
unnecessary. So the guest LBR records are only saved/restored on vcpu
context switch via