On Fri, 2016-05-20 at 16:53 +0800, Feng Wu wrote:
> We need to make sure the blocking vcpu is not in any per-cpu blocking
> list when the associated domain is going to be destroyed.
> 
> Signed-off-by: Feng Wu <feng...@intel.com>
> ---
> 
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -248,6 +248,36 @@ void vmx_pi_hooks_deassign(struct domain *d)
>      d->arch.hvm_domain.vmx.pi_switch_to = NULL;
>  }
>  
> +static void vmx_pi_blocking_list_cleanup(struct domain *d)
> +{
> +    unsigned int cpu;
> +
> +    for_each_online_cpu ( cpu )
> +    {
> +        struct vcpu *v;
> +        unsigned long flags;
> +        struct arch_vmx_struct *vmx, *tmp;
> +        spinlock_t *lock = &per_cpu(vmx_pi_blocking, cpu).lock;
> +        struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
> +
> +        spin_lock_irqsave(lock, flags);
> +
> +        list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
> +        {
> +            v = container_of(vmx, struct vcpu, arch.hvm_vmx);
> +
> +            if (v->domain == d)
> +            {
> +                list_del(&vmx->pi_blocking.list);
> +                ASSERT(vmx->pi_blocking.lock == lock);
> +                vmx->pi_blocking.lock = NULL;
> +            }
> +        }
> +
> +        spin_unlock_irqrestore(lock, flags);
> +    }
>
So, I'm probably missing something very basic, but I don't see
why we need this loop... can't we arrange for checking

 list_empty(&v->arch.hvm_vmx.pi_blocking.list)

?

:-O
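
Just to make my question concrete, this is the kind of per-vcpu cleanup
I have in mind (only a sketch: the hook name is made up, and it assumes
the entry is initialised with INIT_LIST_HEAD at vcpu setup and removed
with list_del_init() elsewhere, so that list_empty() is meaningful):

static void vmx_pi_blocking_cleanup(struct vcpu *v)
{
    unsigned long flags;
    spinlock_t *lock = v->arch.hvm_vmx.pi_blocking.lock;

    /* The vcpu is not (and has never been) on a blocking list. */
    if ( !lock )
        return;

    spin_lock_irqsave(lock, flags);

    /* Only meaningful if removals elsewhere use list_del_init(). */
    if ( !list_empty(&v->arch.hvm_vmx.pi_blocking.list) )
        list_del_init(&v->arch.hvm_vmx.pi_blocking.list);

    v->arch.hvm_vmx.pi_blocking.lock = NULL;

    spin_unlock_irqrestore(lock, flags);
}

Called from the vcpu (or domain) destruction path, that would avoid
walking every per-cpu list. Of course, I may well be missing a race
with pi_blocking.lock being updated concurrently.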

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
