On 21/06/19 14:48, Liran Alon wrote:
> 
> 
>> On 21 Jun 2019, at 15:45, Paolo Bonzini <pbonz...@redhat.com> wrote:
>> 
>> On 21/06/19 14:29, Liran Alon wrote:
>>>> +    max_nested_state_len = kvm_max_nested_state_length();
>>>> +    if (max_nested_state_len > 0) {
>>>> +        assert(max_nested_state_len >= offsetof(struct kvm_nested_state, data));
>>>> +        env->nested_state = g_malloc0(max_nested_state_len);
>>>> +
>>>> +        env->nested_state->size = max_nested_state_len;
>>>> +
>>>> +        if (IS_INTEL_CPU(env)) {
>>> 
>>> I think it's better to change this to: "if (cpu_has_vmx(env)) {"
>>> 
>>>> +            struct kvm_vmx_nested_state_hdr *vmx_hdr =
>>>> +                &env->nested_state->hdr.vmx;
>>>> +
>>>> +            env->nested_state->format = KVM_STATE_NESTED_FORMAT_VMX;
>>>> +            vmx_hdr->vmxon_pa = -1ull;
>>>> +            vmx_hdr->vmcs12_pa = -1ull;
>>>> +        }
>>>> +    }
>>> 
>>> I think we should add here:
>>> } else if (cpu_has_svm(env)) {
>>>     env->nested_state->format = KVM_STATE_NESTED_FORMAT_SVM;
>>> }
>> 
>> Or even force max_nested_state_len to 0 for AMD hosts, so that
>> kvm_get/put_nested_state are dropped completely.
>> 
>> Paolo
>> 
> 
> On AMD hosts, KVM returns 0 for KVM_CAP_NESTED_STATE because
> kvm-amd.ko has kvm_x86_ops->get_nested_state set to NULL.
> See kvm_vm_ioctl_check_extension().
Yes, now it does; my idea was to force that behavior in QEMU until we
know what SVM support actually looks like.  In principle I don't like
the idea of crossing fingers, even though there's an actual possibility
that it could work.  But it couldn't be worse than what we have now, so
maybe it *is* actually a good idea, just with a comment that explains
the rationale.

Paolo

> I just thought it would be nicer to add what I proposed above, so that
> when the kernel adds support for nested state on AMD hosts, QEMU would
> maybe just work.
> (Because maybe all the state required for AMD nSVM is just flags in
> env->nested_state->flags.)
> 
> -Liran
> 