On 14/02/2019 14:19, Jan Beulich wrote:
>>>> On 14.02.19 at 13:49, <paul.durr...@citrix.com> wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3964,26 +3964,28 @@ static void hvm_s3_resume(struct domain *d)
>>      }
>>  }
>>  
>> -static int hvmop_flush_tlb_all(void)
>> +bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
>> +                        void *ctxt)
>>  {
>> +    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
>> +    cpumask_t *mask = &this_cpu(flush_cpumask);
>>      struct domain *d = current->domain;
>>      struct vcpu *v;
>>  
>> -    if ( !is_hvm_domain(d) )
>> -        return -EINVAL;
>> -
>>      /* Avoid deadlock if more than one vcpu tries this at the same time. */
>>      if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
>> -        return -ERESTART;
>> +        return false;
>>  
>>      /* Pause all other vcpus. */
>>      for_each_vcpu ( d, v )
>> -        if ( v != current )
>> +        if ( v != current && flush_vcpu(ctxt, v) )
>>              vcpu_pause_nosync(v);
>>  
>> +    cpumask_clear(mask);
> 
> I'd prefer if this was pulled further down as well, in particular outside the
> locked region. With this, which is easy enough to do while committing,
> Reviewed-by: Jan Beulich <jbeul...@suse.com>
> 
> Cc-ing Jürgen in the hopes for his R-a-b.

Release-acked-by: Juergen Gross <jgr...@suse.com>


Juergen
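[Editor's note: for readers following the thread, the reordering Jan asks for — moving the `cpumask_clear()` further down, outside the locked region — can be sketched with the toy model below. This is an illustrative, self-contained sketch, not Xen code: `toy_cpumask_t`, `spin_trylock_sim()` and friends are hypothetical stand-ins, and the real `hvm_flush_vcpu_tlb()` holds `hypercall_deadlock_mutex` across more work than shown here.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for Xen primitives (names are illustrative). */
typedef uint64_t toy_cpumask_t;        /* toy mask: one bit per (v)CPU     */
static bool deadlock_mutex_held;

static bool spin_trylock_sim(void)     /* models spin_trylock()            */
{
    if ( deadlock_mutex_held )
        return false;
    deadlock_mutex_held = true;
    return true;
}

static void spin_unlock_sim(void)      /* models spin_unlock()             */
{
    deadlock_mutex_held = false;
}

/*
 * Sketch of the suggested ordering: only the pause phase needs the
 * trylock-guarded region; the per-CPU flush mask is cleared and built
 * afterwards, just before it is actually consumed.
 */
static bool flush_vcpu_tlbs(unsigned int nr_vcpus)
{
    toy_cpumask_t mask;
    unsigned int v;

    /* Avoid deadlock if more than one caller tries this at once. */
    if ( !spin_trylock_sim() )
        return false;

    /* Locked region: pause the other vCPUs (elided in this toy model). */

    spin_unlock_sim();                 /* the real code holds the lock
                                        * longer; this only shows where
                                        * the clear can move to          */

    /* Outside the locked region: now clear and populate the mask. */
    mask = 0;                          /* corresponds to cpumask_clear()  */
    for ( v = 0; v < nr_vcpus; v++ )
        mask |= (toy_cpumask_t)1 << v; /* corresponds to setting the bit
                                        * for each paused vCPU's pCPU     */

    printf("flushing mask %#llx\n", (unsigned long long)mask);
    return true;
}
```

The point of the reordering is purely about scope: clearing the mask is not part of the mutually-exclusive work, so it need not sit inside the trylock-protected section.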

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel