> I like it. And AFAICT it largely Just Works, because the calls from
> svm_set_nested_state() will always be routed to gPAT since the calls are
> already guarded with is_guest_mode() + nested_npt_enabled().
>
> Side topic, either as a prep patch (to duplicate code) or as a follow-up
> patch (to move the PAT handling in x86.c to vmx.c), the "common" handling
> of PAT should be fully forked between VMX and SVM. As of this patch, it's
> not just misleading, it's actively dangerous since calling
> kvm_get_msr_common() for SVM would get the wrong value.
+1 on both points.
> FWIW, this is what I ended up with when hacking on top of your patches to
> see how this played out.
>
> ---
> @@ -2838,16 +2876,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		msr_info->data = svm->msr_decfg;
>  		break;
>  	case MSR_IA32_CR_PAT:
> -		/*
> -		 * When nested NPT is enabled, L2 has a separate PAT from
> -		 * L1.  Guest accesses to IA32_PAT while running L2 target
> -		 * L2's gPAT; host-initiated accesses always target L1's
> -		 * hPAT for backward and forward KVM_GET_MSRS compatibility
> -		 * with older kernels.
> -		 */
> -		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
> -		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
> -		    nested_npt_enabled(svm))
> +		if (svm_is_access_to_gpat(vcpu, msr_info->host_initiated))
>  			msr_info->data = svm->nested.save.g_pat;
>  		else
>  			msr_info->data = vcpu->arch.pat;
I'd go a step further here and add svm_get_pat(), then this just becomes:

	msr_info->data = svm_get_pat(vcpu, msr_info->host_initiated);

It's more consistent with svm_set_msr(), and completely abstracts the L1
vs. L2 PAT logic with the helpers.
> @@ -2937,19 +2966,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_IA32_CR_PAT:
>  		if (!kvm_pat_valid(data))
>  			return 1;
> -		/*
> -		 * When nested NPT is enabled, L2 has a separate PAT from
> -		 * L1.  Guest accesses to IA32_PAT while running L2 target
> -		 * L2's gPAT; host-initiated accesses always target L1's
> -		 * hPAT for backward and forward KVM_SET_MSRS compatibility
> -		 * with older kernels.
> -		 */
> -		WARN_ON_ONCE(msr->host_initiated && vcpu->wants_to_run);
> -		if (!msr->host_initiated && is_guest_mode(vcpu) &&
> -		    nested_npt_enabled(svm))
> -			svm_set_gpat(svm, data);
> -		else
> -			svm_set_hpat(svm, data);
> +
> +		svm_set_pat(vcpu, data, msr->host_initiated);
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr->host_initiated &&