2017-07-11 14:24-0400, Bandan Das:
> Bandan Das <b...@redhat.com> writes:
> > If there's a triple fault, I think it's a good idea to inject it
> > back. Basically, there's no need to take care of damage control
> > that L1 is intentionally doing.
> >
> >>> +                 goto fail;
> >>> +         kvm_mmu_unload(vcpu);
> >>> +         vmcs12->ept_pointer = address;
> >>> +         kvm_mmu_reload(vcpu);
> >>
> >> I was thinking about something like this:
> >>
> >> kvm_mmu_unload(vcpu);
> >> old = vmcs12->ept_pointer;
> >> vmcs12->ept_pointer = address;
> >> if (kvm_mmu_reload(vcpu)) {
> >>    /* pointer invalid, restore previous state */
> >>    kvm_clear_request(KVM_REQ_TRIPLE_FAULT, vcpu);
> >>    vmcs12->ept_pointer = old;
> >>    kvm_mmu_reload(vcpu);
> >>    goto fail;
> >> }
> >>
> >> Then you can inherit the checks from mmu_check_root().
> 
> Actually, thinking about this a bit more, I agree with you. Any fault
> with a vmfunc operation should end with a vmfunc vmexit, so this
> is a good thing to have. Thank you for this idea! :)

SDM says

  IF tent_EPTP is not a valid EPTP value (would cause VM entry to fail
  if in EPTP) THEN VMexit;

and there is no other mention of a VM exit, so I think the VM exit happens
only when the new EPTP violates one of these validity conditions (a rough
check is sketched after the list):

  — The EPT memory type (bits 2:0) must be a value supported by the
    processor as indicated in the IA32_VMX_EPT_VPID_CAP MSR (see
    Appendix A.10).
  — Bits 5:3 (1 less than the EPT page-walk length) must be 3, indicating
    an EPT page-walk length of 4; see Section 28.2.2.
  — Bit 6 (enable bit for accessed and dirty flags for EPT) must be 0 if
    bit 21 of the IA32_VMX_EPT_VPID_CAP MSR (see Appendix A.10) is read
    as 0, indicating that the processor does not support accessed and
    dirty flags for EPT.
  — Reserved bits 11:7 and 63:N (where N is the processor’s
    physical-address width) must all be 0.
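
For reference, those checks translate into roughly the function below.  This
is only a sketch: the function name, the ept_vpid_cap/maxphyaddr parameters
and the open-coded masks are illustrative (a real patch would use the existing
VMX_EPT_*/MSR definitions and the guest's reported capabilities); the bit
positions follow the SDM text quoted above.

/*
 * Illustrative validity check for an EPTP handed to the EPTP-switching
 * VM function.  'ept_vpid_cap' stands for the IA32_VMX_EPT_VPID_CAP value
 * exposed to L1, 'maxphyaddr' for the physical-address width N.
 */
static bool nested_eptp_valid(u64 eptp, u64 ept_vpid_cap, int maxphyaddr)
{
	u64 memtype = eptp & 0x7;	/* bits 2:0 */

	/* EPT memory type must be reported as supported (UC: cap bit 8, WB: cap bit 14). */
	if (memtype == 0 && !(ept_vpid_cap & (1ULL << 8)))
		return false;
	if (memtype == 6 && !(ept_vpid_cap & (1ULL << 14)))
		return false;
	if (memtype != 0 && memtype != 6)
		return false;

	/* Bits 5:3 must encode an EPT page-walk length of 4, i.e. the value 3. */
	if (((eptp >> 3) & 0x7) != 3)
		return false;

	/* Bit 6 (A/D enable) is allowed only if IA32_VMX_EPT_VPID_CAP bit 21 is set. */
	if ((eptp & (1ULL << 6)) && !(ept_vpid_cap & (1ULL << 21)))
		return false;

	/* Reserved bits 11:7 and 63:N must all be zero. */
	if (eptp & GENMASK_ULL(11, 7))
		return false;
	if (eptp >> maxphyaddr)
		return false;

	return true;
}

Since the SDM defines validity as "would cause VM entry to fail if in EPTP",
such a helper could eventually be shared with the nested VM-entry checks
rather than being open-coded in the VMFUNC handler.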

And it looks like we need parts of nested_ept_init_mmu_context() to
properly handle VMX_EPT_AD_ENABLE_BIT.
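
Concretely, something along these lines in the switching path, as a sketch
only: it just re-runs the existing nested_ept_init_mmu_context() after
updating the pointer, on the assumption that its accessed/dirty handling can
be re-applied outside of nested VM entry (which is the part that needs
checking):

	vmcs12->ept_pointer = address;
	/*
	 * Rebuild the nested EPT MMU context so that a change of
	 * VMX_EPT_AD_ENABLE_BIT in the new EPTP is picked up, instead
	 * of keeping the old accessed/dirty setting in the shadow walk.
	 */
	nested_ept_init_mmu_context(vcpu);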

The KVM_REQ_TRIPLE_FAULT can then be handled by the kvm_mmu_reload() call in
vcpu_run() if we just invalidate the MMU in the VMFUNC handler.
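
Roughly like this for the rest of the handler: validate first with a check
like the one sketched after the SDM list, switch as above, and only
invalidate.  Again just a sketch: 'fail' is the patch's existing label that
reflects the failure back to L1 as a VMFUNC exit, the ept_vpid_cap/maxphyaddr
arguments stand for the capabilities and address width exposed to L1, and the
kvm_skip_emulated_instruction() return assumes the handler's usual completion
path.

	/* Reject the new EPTP up front, so no rollback is needed afterwards. */
	if (!nested_eptp_valid(address, ept_vpid_cap, maxphyaddr))
		goto fail;

	/* ... switch ept_pointer and rebuild the MMU context as above ... */

	/*
	 * Only invalidate here; the kvm_mmu_reload() call on the next entry
	 * rebuilds the root and raises KVM_REQ_TRIPLE_FAULT itself if the
	 * new root turns out to be bad.
	 */
	kvm_mmu_unload(vcpu);
	return kvm_skip_emulated_instruction(vcpu);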
