Ping: [PATCH v3] x86/HVM: more consistently set I/O completion

2020-09-09 Thread Jan Beulich
On 07.09.2020 11:58, Paul Durrant wrote:
> With the comment addition...
> 
> Reviewed-by: Paul Durrant 

With Paul's R-b I'm intending to time out waiting for a VMX
maintainer ack early next week.

Jan



RE: Ping: [PATCH v3] x86/HVM: more consistently set I/O completion

2020-09-04 Thread Paul Durrant
> -----Original Message-----
> From: Jan Beulich 
> Sent: 04 September 2020 09:16
> To: Paul Durrant 
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper ; Wei Liu ;
> Roger Pau Monné ; Jun Nakajima ; Kevin Tian ;
> George Dunlap 
> Subject: Ping: [PATCH v3] x86/HVM: more consistently set I/O completion
> 
> On 27.08.2020 09:09, Jan Beulich wrote:
> > Doing this just in hvm_emulate_one_insn() is not enough.
> > hvm_ud_intercept() and hvm_emulate_one_vm_event() can get invoked for
> > insns requiring one or more continuations, and at least in principle
> > hvm_emulate_one_mmio() could, too. Without proper setting of the field,
> > handle_hvm_io_completion() will do nothing completion-wise, and in
> > particular the missing re-invocation of the insn emulation paths will
> > lead to emulation caching not getting disabled in due course, causing
> > the ASSERT() in {svm,vmx}_vmenter_helper() to trigger.
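> >
> > As a rough sketch of the consumer side (simplified from
> > handle_hvm_io_completion() in xen/arch/x86/hvm/ioreq.c; illustrative
> > only, not the exact code):
> >
> >     switch ( vio->io_completion )
> >     {
> >     case HVMIO_no_completion:
> >         break;            /* nothing is re-run - caching stays enabled */
> >
> >     case HVMIO_mmio_completion:
> >         return handle_mmio();     /* re-enters the insn emulation path */
> >
> >     case HVMIO_pio_completion:
> >         return handle_pio(vio->io_req.addr, vio->io_req.size,
> >                           vio->io_req.dir);
> >     }
> >
> > Hence leaving the field at HVMIO_no_completion means no emulation path
> > is re-entered at all.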
> >
> > Reported-by: Don Slutz 
> >
> > Similar considerations go for the clearing of vio->mmio_access, which
> > gets moved as well.
> >
> > Additionally, all updating of vio->mmio_* is now driven by the new
> > completion value, rather than by hvm_ioreq_needs_completion()'s return
> > value. This is because the completion chosen, not the simple boolean
> > return value, controls which path will be taken when handling the
> > completion. In particular, PIO completion doesn't involve going through
> > the insn emulator, and hence emulator state ought to get cleared early
> > (or it won't get cleared at all).
> >
> > The new logic, besides allowing callers to override the continuation
> > type to be set (needed for VMX real mode emulation), will also avoid
> > setting an MMIO completion when a simpler PIO one will do. The
> > optimization is only a side effect, though; the behavior is strictly
> > needed at least for hvm_ud_intercept(), as only memory accesses can
> > successfully complete through handle_mmio(). Care of course needs to be
> > taken to correctly deal with "mixed" insns (doing both MMIO and PIO at
> > the same time, i.e. INS/OUTS). For this, hvmemul_validate() now latches
> > whether the insn being emulated is a memory access, as this information
> > is no longer easily available at the point where we want to consume it.
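> >
> > For illustration, hedged sketches of the intended caller-side use (the
> > interface and parameter are per this patch; exact call sites may
> > differ):
> >
> >     /* Common case: let _hvm_emulate_one() pick MMIO vs. PIO itself. */
> >     rc = hvm_emulate_one(&hvmemul_ctxt, HVMIO_no_completion);
> >
> >     /* VMX real mode: force the real-mode continuation instead. */
> >     rc = hvm_emulate_one(&hvmemul_ctxt, HVMIO_realmode_completion);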
> >
> > Note that the presence of non-NULL .validate fields in the two ops
> > structures in hvm_emulate_one_mmio() was really necessary even before
> > the changes here: Without this, passing non-NULL as middle argument to
> > hvm_emulate_init_once() is meaningless.
> >
> > The restrictions on when the #UD intercept actually gets enabled are why
> > it was decided that this is not a security issue:
> > - the "hvm_fep" option to enable its use is a debugging option only,
> > - the cross-vendor case is considered experimental, even if
> >   unfortunately SUPPORT.md doesn't have an explicit statement about
> >   this.
> > The other two affected functions are
> > - hvm_emulate_one_vm_event(), used for introspection,
> > - hvm_emulate_one_mmio(), used for Dom0 only,
> > neither of which qualifies this as needing an XSA.
> >
> > Signed-off-by: Jan Beulich 
> > Tested-by: Don Slutz 
> 
> Paul (in particular)?
> 

Sorry... been on my TODO list since you posted it. Will try and take a look 
today.

  Paul




Ping: [PATCH v3] x86/HVM: more consistently set I/O completion

2020-09-04 Thread Jan Beulich
On 27.08.2020 09:09, Jan Beulich wrote:
> Doing this just in hvm_emulate_one_insn() is not enough.
> hvm_ud_intercept() and hvm_emulate_one_vm_event() can get invoked for
> insns requiring one or more continuations, and at least in principle
> hvm_emulate_one_mmio() could, too. Without proper setting of the field,
> handle_hvm_io_completion() will do nothing completion-wise, and in
> particular the missing re-invocation of the insn emulation paths will
> lead to emulation caching not getting disabled in due course, causing
> the ASSERT() in {svm,vmx}_vmenter_helper() to trigger.
> 
> Reported-by: Don Slutz 
> 
> Similar considerations go for the clearing of vio->mmio_access, which
> gets moved as well.
> 
> Additionally, all updating of vio->mmio_* is now driven by the new
> completion value, rather than by hvm_ioreq_needs_completion()'s return
> value. This is because the completion chosen, not the simple boolean
> return value, controls which path will be taken when handling the
> completion. In particular, PIO completion doesn't involve going through
> the insn emulator, and hence emulator state ought to get cleared early
> (or it won't get cleared at all).
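> 
> In effect (a simplified sketch; field names assumed to match
> xen/arch/x86/hvm/emulate.c at the time, and the tail of the actual hunk
> is cut off below), the state updates become:
> 
>     switch ( vio->io_completion = completion )
>     {
>     case HVMIO_no_completion:
>     case HVMIO_pio_completion:
>         /* No emulator re-entry: drop insn/MMIO state right away. */
>         vio->mmio_cache_count = 0;
>         vio->mmio_insn_bytes = 0;
>         vio->mmio_access = (struct npfec){};
>         hvmemul_cache_disable(curr);
>         break;
> 
>     case HVMIO_mmio_completion:
>     case HVMIO_realmode_completion:
>         /* Emulator will be re-entered: latch the insn bytes instead. */
>         vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
>         memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf,
>                vio->mmio_insn_bytes);
>         break;
> 
>     default:
>         ASSERT_UNREACHABLE();
>     }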
> 
> The new logic, besides allowing callers to override the continuation
> type to be set (needed for VMX real mode emulation), will also avoid
> setting an MMIO completion when a simpler PIO one will do. The
> optimization is only a side effect, though; the behavior is strictly
> needed at least for hvm_ud_intercept(), as only memory accesses can
> successfully complete through handle_mmio(). Care of course needs to be
> taken to correctly deal with "mixed" insns (doing both MMIO and PIO at
> the same time, i.e. INS/OUTS). For this, hvmemul_validate() now latches
> whether the insn being emulated is a memory access, as this information
> is no longer easily available at the point where we want to consume it.
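> 
> A worked example of the "mixed" case (comments only; values follow the
> selection logic visible in the hunk further down):
> 
>     /*
>      * OUTS: reads guest memory, writes an I/O port.
>      *   vio->io_req.type == IOREQ_TYPE_PIO     (pending request is PIO)
>      *   hvmemul_ctxt->is_mem_access == true    (insn also touches memory)
>      * => HVMIO_mmio_completion is chosen, so the full insn emulation
>      *    path is re-entered and the memory half is replayed correctly;
>      *    a plain OUT would instead get HVMIO_pio_completion.
>      */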
> 
> Note that the presence of non-NULL .validate fields in the two ops
> structures in hvm_emulate_one_mmio() was really necessary even before
> the changes here: Without this, passing non-NULL as middle argument to
> hvm_emulate_init_once() is meaningless.
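> 
> Put differently (a comment-only sketch of the dependency; simplified):
> 
>     /*
>      * hvm_emulate_init_once(hvmemul_ctxt, validate, regs) merely stores
>      * 'validate' into hvmemul_ctxt->validate, whose sole consumer is
>      * hvmemul_validate() - reached only via a non-NULL ops->validate.
>      * A non-NULL middle argument without such an ops field would thus
>      * never be looked at.
>      */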
> 
> The restrictions on when the #UD intercept actually gets enabled are why
> it was decided that this is not a security issue:
> - the "hvm_fep" option to enable its use is a debugging option only,
> - the cross-vendor case is considered experimental, even if
>   unfortunately SUPPORT.md doesn't have an explicit statement about
>   this.
> The other two affected functions are
> - hvm_emulate_one_vm_event(), used for introspection,
> - hvm_emulate_one_mmio(), used for Dom0 only,
> neither of which qualifies this as needing an XSA.
> 
> Signed-off-by: Jan Beulich 
> Tested-by: Don Slutz 

Paul (in particular)?

Jan

> ---
> v3: Add comment ahead of _hvm_emulate_one(). Add parentheses in a
> conditional expr. Justify why this does not need an XSA.
> v2: Make updating of vio->mmio_* fields fully driven by the new
> completion value.
> ---
> I further think that the entire tail of _hvm_emulate_one() (everything
> past the code changed/added there by this patch) wants skipping in case
> a completion is needed, at the very least for the mmio and realmode
> cases, where we know we'll come back here.
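> If pursued, that might look roughly like the following (hypothetical
> follow-up, not part of this patch):
> 
>     if ( completion != HVMIO_no_completion )
>         return rc;    /* we'll come back here anyway; skip the tail */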
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1683,9 +1683,11 @@ static int hvmemul_validate(
>      const struct x86_emulate_state *state,
>      struct x86_emulate_ctxt *ctxt)
>  {
> -    const struct hvm_emulate_ctxt *hvmemul_ctxt =
> +    struct hvm_emulate_ctxt *hvmemul_ctxt =
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>  
> +    hvmemul_ctxt->is_mem_access = x86_insn_is_mem_access(state, ctxt);
> +
>      return !hvmemul_ctxt->validate || hvmemul_ctxt->validate(state, ctxt)
>             ? X86EMUL_OKAY : X86EMUL_UNHANDLEABLE;
>  }
> @@ -2610,8 +2612,13 @@ static const struct x86_emulate_ops hvm_
>      .vmfunc        = hvmemul_vmfunc,
>  };
>  
> +/*
> + * Note that passing HVMIO_no_completion into this function serves as a
> + * kind of (but not full) "auto select completion" indicator.
> + */
>  static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
> -                            const struct x86_emulate_ops *ops)
> +                            const struct x86_emulate_ops *ops,
> +                            enum hvm_io_completion completion)
>  {
>      const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs;
>      struct vcpu *curr = current;
> @@ -2642,16 +2649,31 @@ static int _hvm_emulate_one(struct hvm_e
>          rc = X86EMUL_RETRY;
>  
>      if ( !hvm_ioreq_needs_completion(&vio->io_req) )
> +        completion = HVMIO_no_completion;
> +    else if ( completion == HVMIO_no_completion )
> +        completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
> +                      hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion
> +                                                   : HVMIO_pio_completion;
> +
> +    switch ( vio->io_completion =