On Mon, Sep 23, 2024, Jack Allister wrote:
> On Mon, 2024-09-23 at 10:04 -0700, Sean Christopherson wrote:
> > 
> > On Mon, Sep 23, 2024, Ivan Orlov wrote:
> > > Currently, KVM may return a variety of internal errors to the VMM
> > > when accessing MMIO, and some of them could be handled gracefully
> > > at the KVM level instead. Moreover, some of the MMIO-related errors
> > > are handled differently on VMX than on SVM, which is an
> > > inconsistency that should be fixed. This patch series introduces
> > > KVM-level handling for the following situations:
> > > 
> > > 1) Guest accesses MMIO during event delivery: triple fault instead
> > > of an internal error on VMX or an infinite loop on SVM
> > > 
> > > 2) Guest fetches an instruction from MMIO: inject #UD and resume
> > > guest execution without an internal error
> > 
> > No.  This is not architectural behavior.  It's not even remotely close to
> > architectural behavior.  KVM's behavior isn't great, but making up _guest
> > visible_ behavior is not going to happen.
> 
> Is this a no to the whole series, or just to the cover letter?

The whole series.

> For patch 1, we have observed that if a guest has incorrectly set its
> IDT base to point inside an MMIO region, it will result in a triple
> fault (bare metal Cascade Lake Intel).

The triple fault occurs because the MMIO read returns garbage, e.g. because it
gets back master abort semantics.
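
To make that concrete, here is a small illustrative sketch (userspace only,
not KVM code; the all-ones pattern is just an assumption about what an
aborted read returns) of what the CPU ends up decoding when the IDT base
points at an MMIO region that reads back as garbage:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Layout of a 16-byte long-mode IDT gate descriptor. */
  struct idt_gate64 {
          uint16_t offset_lo;
          uint16_t selector;
          uint8_t  ist;        /* bits 0-2, rest reserved */
          uint8_t  type_attr;  /* type, DPL, P */
          uint16_t offset_mid;
          uint32_t offset_hi;
          uint32_t reserved;
  } __attribute__((packed));

  int main(void)
  {
          uint8_t raw[16];
          struct idt_gate64 gate;
          uint64_t target;

          /* Assume the aborted MMIO read came back as all-ones. */
          memset(raw, 0xff, sizeof(raw));
          memcpy(&gate, raw, sizeof(gate));

          target = ((uint64_t)gate.offset_hi << 32) |
                   ((uint64_t)gate.offset_mid << 16) | gate.offset_lo;

          printf("selector=%#x target=%#llx present=%d\n",
                 gate.selector, (unsigned long long)target,
                 !!(gate.type_attr & 0x80));
          return 0;
  }

The "gate" decodes as present with a junk selector and target, so delivering
the original event faults, delivering that fault consults the same unreadable
IDT, and the CPU typically escalates to a double and then triple fault on its
own, matching the bare-metal observation.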

> Yes, a sane operating system is not really going to set its IDT or GDT
> base to point into an MMIO region, but we've seen occurrences.

Sure, but that doesn't make it architecturally correct to synthesize arbitrary
faults.

> Normally when other external things have gone horribly wrong.
> 
> Ivan can clarify as to what's been seen on AMD platforms regarding the
> infinite loop for patch one.

So it sounds like what you really want is to not put the vCPU into an infinite
loop.  Have you tried kvm/next or kvm-x86/next, which have fixes for infinite
loops on TDP faults?  Specifically, these commits:

  98a69b96caca3e07aff57ca91fd7cc3a3853871a KVM: x86/mmu: WARN on MMIO cache hit when emulating write-protected gfn
  d859b16161c81ee929b7b02a85227b8e3250bc97 KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list
  6b3dcabc10911711eba15816d808e2a18f130406 KVM: x86/mmu: Subsume kvm_mmu_unprotect_page() into the and_retry() version
  2876624e1adcd9a3a3ffa8c4fe3bf8dbba969d95 KVM: x86: Rename reexecute_instruction()=>kvm_unprotect_and_retry_on_failure()
  4df685664bed04794ad72b58d8af1fa4fcc60261 KVM: x86: Update retry protection fields when forcing retry on emulation failure
  dabc4ff70c35756bc107bc5d035d0f0746396a9a KVM: x86: Apply retry protection to "unprotect on failure" path
  19ab2c8be070160be70a88027b3b93106fef7b89 KVM: x86: Check EMULTYPE_WRITE_PF_TO_SP before unprotecting gfn
  620525739521376a65a690df899e1596d56791f8 KVM: x86: Remove manual pfn lookup when retrying #PF after failed emulation
  b299c273c06f005976cdc1b9e9299d492527607e KVM: x86/mmu: Move event re-injection unprotect+retry into common path
  29e495bdf847ac6ad0e0d03e5db39a3ed9f12858 KVM: x86/mmu: Always walk guest PTEs with WRITE access when unprotecting
  b7e948898e772ac900950c0dac4ca90e905cd0c0 KVM: x86/mmu: Don't try to unprotect an INVALID_GPA
  2df354e37c1398a85bb43cbbf1f913eb3f91d035 KVM: x86: Fold retry_instruction() into x86_emulate_instruction()
  41e6e367d576ce1801dc5c2b106e14cde35e3c80 KVM: x86: Move EMULTYPE_ALLOW_RETRY_PF to x86_emulate_instruction()
  dfaae8447c53819749cf3ba10ce24d3c609752e3 KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs
  01dd4d319207c4cfd51a1c9a1812909e944d8c86 KVM: x86/mmu: Apply retry protection to "fast nTDP unprotect" path
  9c19129e535bfff85bdfcb5a804e19e5aae935b2 KVM: x86: Store gpa as gpa_t, not unsigned long, when unprotecting for retry
  019f3f84a40c88b68ca4d455306b92c20733e784 KVM: x86: Get RIP from vCPU state when storing it to last_retry_eip
  c1edcc41c3603c65f34000ae031a20971f4e56f9 KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped
  2fb2b7877b3a4cac4de070ef92437b38f13559b0 KVM: x86/mmu: Skip emulation on page fault iff 1+ SPs were unprotected
  989a84c93f592e6b288fb3b96d2eeec827d75bef KVM: x86/mmu: Trigger unprotect logic only on write-protection page faults
  4ececec19a0914873634ad69bbaca5557c33e855 KVM: x86/mmu: Replace PFERR_NESTED_GUEST_PAGE with a more descriptive helper

> This was also tested on bare metal hardware. Injection of the #UD within
> patch 2 may be debatable but I believe Ivan has some more data from
> experiments backing this up.

Heh, it's not debatable.  Fetching from MMIO is perfectly legal.  Again, any #UD
you see on bare metal is all but guaranteed to be due to fetching garbage.
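
For reference, when these cases do reach userspace they arrive as
KVM_EXIT_INTERNAL_ERROR exits, not as guest-visible faults.  A minimal
sketch of the VMM side (assuming a standard KVM_RUN loop; only uapi fields
from <linux/kvm.h> are used, and the function name is made up for
illustration):

  #include <linux/kvm.h>
  #include <stdio.h>

  /*
   * Illustrative only: how a VMM might report the exits discussed above.
   * "run" is the vCPU's mmap'ed struct kvm_run, inspected after KVM_RUN
   * returns.
   */
  static void report_internal_error(const struct kvm_run *run)
  {
          if (run->exit_reason != KVM_EXIT_INTERNAL_ERROR)
                  return;

          switch (run->internal.suberror) {
          case KVM_INTERNAL_ERROR_DELIVERY_EV:
                  /* Event delivery hit MMIO (the patch 1 case, on VMX). */
                  fprintf(stderr, "event delivery error, ndata=%u\n",
                          run->internal.ndata);
                  break;
          case KVM_INTERNAL_ERROR_EMULATION:
                  /* Emulation failed, e.g. on an instruction fetch from
                   * MMIO (the patch 2 case). */
                  fprintf(stderr, "emulation failure, ndata=%u\n",
                          run->internal.ndata);
                  break;
          default:
                  fprintf(stderr, "internal error, suberror=%u\n",
                          run->internal.suberror);
                  break;
          }
  }

Everything here is existing uapi; nothing from the proposed series is assumed.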
