On 22/11/17 12:39, Jan Beulich wrote:
> See the code comment being added for why we need this.
>
> This is placed here to balance two concerns: preventing similar
> future issues (the risk of which would grow if it were put further
> down the call stack, e.g. in vmx_vcpu_destroy()) against limiting
> the performance impact (otherwise it could also go into
> rcu_do_batch(), paralleling the use in do_tasklet_work()).
>
> Reported-by: Igor Druzhinin <igor.druzhi...@citrix.com>
> Signed-off-by: Jan Beulich <jbeul...@suse.com>

Acked-by: Andrew Cooper <andrew.coop...@citrix.com>

> ---
> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
>
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>      struct vcpu *v;
>      int i;
>  
> +    /*
> +     * Flush all state for the vCPU previously having run on the current CPU.
> +     * This is in particular relevant for x86 HVM ones on VMX, so that this
> +     * flushing of state won't happen from the TLB flush IPI handler behind
> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
> +     */
> +    sync_local_execstate();
> +
>      for ( i = d->max_vcpus - 1; i >= 0; i-- )
>      {
>          if ( (v = d->vcpu[i]) == NULL )


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
