On 24/06/15 06:18, Feng Wu wrote:
> @@ -1848,6 +1869,33 @@ static struct hvm_function_table __initdata vmx_function_table = {
>      .enable_msr_exit_interception = vmx_enable_msr_exit_interception,
>  };
>  
> +/*
> + * Handle VT-d posted-interrupts when the vCPU is blocked.
> + */
> +static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> +{
> +    struct arch_vmx_struct *vmx;
> +    unsigned int cpu = smp_processor_id();
> +
> +    spin_lock(&per_cpu(pi_blocked_vcpu_lock, cpu));
> +
> +    /*
> +     * FIXME: The length of the list depends on how many
> +     * vCPUs are currently blocked on this specific pCPU.
> +     * This may hurt the interrupt latency if the list
> +     * grows too long.
> +     */
> +    list_for_each_entry(vmx, &per_cpu(pi_blocked_vcpu, cpu),
> +                        pi_blocked_vcpu_list)
> +        if ( vmx->pi_desc.on )
> +            tasklet_schedule(&vmx->pi_vcpu_wakeup_tasklet);

There is a logical bug here.  If two NVs are delivered to this pcpu,
we will kick the first vcpu twice.

On finding desc.on, a kick should be scheduled, then the vcpu removed
from this list.  With desc.on set, we know for certain that another NV
will not arrive for it until it has been scheduled again and the
interrupt posted.

~Andrew

> +
> +    spin_unlock(&per_cpu(pi_blocked_vcpu_lock, cpu));
> +
> +    ack_APIC_irq();
> +    this_cpu(irq_count)++;
> +}
> +
>  const struct hvm_function_table * __init start_vmx(void)
>  {
>      set_in_cr4(X86_CR4_VMXE);
>

