On 7/9/2025 11:53 PM, Sean Christopherson wrote:
> On Mon, May 26, 2025, Sandipan Das wrote:
>>> @@ -212,6 +212,18 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
>>>  	bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
>>>  }
>>>  
>>> +static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
>>> +{
>>> +	struct vcpu_svm *svm = to_svm(vcpu);
>>> +
>>> +	__amd_pmu_refresh(vcpu);
>>> +
>>> +	if (kvm_rdpmc_in_guest(vcpu))
>>> +		svm_clr_intercept(svm, INTERCEPT_RDPMC);
>>> +	else
>>> +		svm_set_intercept(svm, INTERCEPT_RDPMC);
>>> +}
>>> +
>>> +
>> After putting kprobes on kvm_pmu_rdpmc(), I noticed that RDPMC instructions
>> were getting intercepted for the secondary vCPUs. This happens because when
>> secondary vCPUs come up, kvm_vcpu_reset() gets called after guest CPUID has
>> been updated. While RDPMC interception is initially disabled in the
>> kvm_pmu_refresh() path, it gets re-enabled in the kvm_vcpu_reset() path as
>> svm_vcpu_reset() calls init_vmcb(). We should consider adding the following
>> change to avoid that.
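Roughly, the change being suggested there is to make the reset path derive
RDPMC interception from the PMU state as well, the same way the new
amd_pmu_refresh() above does. A minimal sketch, assuming the hunk would land
in init_vmcb() (placement and exact form are my guesses, not the original diff):

	/* init_vmcb(), sketch only: honor the PMU-derived state on reset
	 * instead of unconditionally re-enabling RDPMC interception. */
	if (kvm_rdpmc_in_guest(vcpu))
		svm_clr_intercept(svm, INTERCEPT_RDPMC);
	else
		svm_set_intercept(svm, INTERCEPT_RDPMC);
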
> Revisiting this code after the MSR interception rework, I think we should go
> for a more complete, big-hammer solution. Rather than manipulate intercepts
> during kvm_pmu_refresh(), do the updates as part of the "common" recalc
> intercepts flow. And then to trigger recalc on PERF_CAPABILITIES writes, turn
> KVM_REQ_MSR_FILTER_CHANGED into a generic KVM_REQ_RECALC_INTERCEPTS.
>
> That way there's one path for calculating dynamic intercepts, which should
> make it much more difficult for us to screw up things like reacting to MSR
> filter changes. And providing a single path avoids needing to have a series
> of back-and-forth calls between common x86 code, PMU code, and vendor code.
Sounds good to me.
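If I'm reading the proposal right, the end state would look something like the
sketch below: anything that can change interception (PERF_CAPABILITIES writes,
MSR filter updates, CPUID changes) just raises the request, and a single vendor
hook recomputes all dynamic intercepts at the next entry.
KVM_REQ_RECALC_INTERCEPTS is from your description; the hook and helper names
are my guesses:

	/* Common x86: KVM_REQ_MSR_FILTER_CHANGED becomes the generic request
	 * (the request number here is purely illustrative). */
	#define KVM_REQ_RECALC_INTERCEPTS	KVM_ARCH_REQ(29)

	/* e.g. on a PERF_CAPABILITIES write or an MSR filter update: */
	kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);

	/* vcpu_enter_guest() funnels everything into one vendor hook: */
	if (kvm_check_request(KVM_REQ_RECALC_INTERCEPTS, vcpu))
		kvm_x86_call(recalc_intercepts)(vcpu);

	/* SVM: RDPMC interception joins the MSR intercepts in the common
	 * recalc path instead of being toggled from amd_pmu_refresh(). */
	static void svm_recalc_intercepts(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		/* Assumed helper name from the MSR interception rework. */
		svm_recalc_msr_intercepts(vcpu);

		if (kvm_rdpmc_in_guest(vcpu))
			svm_clr_intercept(svm, INTERCEPT_RDPMC);
		else
			svm_set_intercept(svm, INTERCEPT_RDPMC);
	}

That would presumably also cover the vCPU reset case above, since init_vmcb()
would no longer need to touch INTERCEPT_RDPMC at all.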
BTW, Sean, could you share your plans for the mediated vPMU v5 patch set? Thanks.