On Fri, Jun 05, 2020 at 01:59:05PM +0200, Vitaly Kuznetsov wrote:
> Introduce vmx_handle_memory_failure() as an interim solution.

Heh, "interim".  I'll take the over on that :-D.

> Note, nested_vmx_get_vmptr() now has three possible outcomes: OK, PF, or
> KVM_EXIT_INTERNAL_ERROR, and callers need to know whether a userspace exit
> is needed (for KVM_EXIT_INTERNAL_ERROR) in case of failure. We don't seem
> to have a good enum describing this tristate, so just add an "int *ret"
> parameter to the nested_vmx_get_vmptr() interface to pass the information.
> 
> Reported-by: [email protected]
> Suggested-by: Sean Christopherson <[email protected]>
> Signed-off-by: Vitaly Kuznetsov <[email protected]>
> ---

...
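For anyone reading along, the new contract means callers consume the
tristate roughly like this (the handler below is an illustrative sketch,
not a hunk from this patch; only nested_vmx_get_vmptr()'s interface is
taken from the change):

	static int handle_vmfoo_sketch(struct kvm_vcpu *vcpu)
	{
		gpa_t vmptr;
		int ret;

		if (nested_vmx_get_vmptr(vcpu, &vmptr, &ret))
			/*
			 * Failure: ret is 1 to resume the guest (e.g. after
			 * injecting #PF), 0 to exit to userspace after
			 * KVM_EXIT_INTERNAL_ERROR has been filled in.
			 */
			return ret;

		/* Success: continue emulating with vmptr. */
		return nested_vmx_succeed(vcpu);
	}
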

> +/*
> + * Handles kvm_read/write_guest_virt*() result and either injects #PF or
> + * returns KVM_EXIT_INTERNAL_ERROR for cases not currently handled by KVM.
> + * Return value indicates whether exit to userspace is needed.
> + */
> +int vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
> +                           struct x86_exception *e)
> +{
> +     if (r == X86EMUL_PROPAGATE_FAULT) {
> +             kvm_inject_emulated_page_fault(vcpu, e);
> +             return 1;
> +     }
> +
> +     /*
> +      * In case kvm_read/write_guest_virt*() failed with X86EMUL_IO_NEEDED
> +      * while handling a VMX instruction, KVM could've handled the request

A nit similar to your observation on the shortlog: this isn't limited to VMX
instructions.

> +      * correctly by exiting to userspace and performing I/O, but there
> +      * doesn't seem to be a real use-case behind such requests; just return
> +      * KVM_EXIT_INTERNAL_ERROR for now.
> +      */
> +     vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +     vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
> +     vcpu->run->internal.ndata = 0;
> +
> +     return 0;
> +}
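
For reference, the intended call pattern elsewhere in the patch is along
the lines of handle_invept() (a sketch, not the exact hunk; the
get_vmx_mem_address() plumbing that fills gva is elided):

	struct {
		u64 eptp, gpa;
	} operand;
	struct x86_exception e;
	gva_t gva;
	int r;

	/* ... get_vmx_mem_address() computes gva for the operand ... */

	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
	if (r != X86EMUL_CONTINUE)
		return vmx_handle_memory_failure(vcpu, r, &e);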
