> +/* Handle VT-d posted-interrupt when VCPU is blocked. */
> +static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
> +{
> +    struct arch_vmx_struct *vmx, *tmp;
> +    spinlock_t *lock = &per_cpu(vmx_pi_blocking, smp_processor_id()).lock;
> +    struct list_head *blocked_vcpus =
> +             &per_cpu(vmx_pi_blocking, smp_processor_id()).list;
> +
> +    ack_APIC_irq();
> +    this_cpu(irq_count)++;
> +
> +    spin_lock(lock);
> +
> +    /*
> +     * XXX: The length of the list depends on how many vCPUs are
> +     * currently blocked on this specific pCPU.  This may hurt the
> +     * interrupt latency if the list grows to too many entries.
> +     */
> +    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
> +    {


My recollection of the 'most horrible' case of this being really bad is when
the scheduler puts vCPU0 and vCPU1 of the same guest on the same pCPU (as an
example) and they round-robin all the time.

<handwaving>
Would it perhaps be possible to have an anti-affinity flag to deter the
scheduler from this? That is, for whichever 'struct vcpu' has the
'anti-affinity' flag set, the scheduler will try as much as it can _not_ to
schedule that 'struct vcpu' on a pCPU whose previous 'struct vcpu' also had
this flag set.

And then try to schedule 'normal' guests there instead.

[I am ignoring the toolstack plumbing for this and so on]

My naive thinking is that while it may result in a lot of guest vCPUs
moving around (as the prev 'struct vcpu' would keep this new vCPU from
running on a pCPU that already has this type of guest), it would 'spread
out' the guests with the 'anti-affinity' flag across all the pCPUs.

It would suck for over-subscriptions but <handwaving>.

And maybe this enforcement need not be so strict. Perhaps it could allow
one 'prev' 'struct vcpu' which has this flag enabled, but not more than
two? (A toy sketch of what I mean follows below.)

</handwaving>
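
To make the handwaving above a bit more concrete, here is a toy, standalone
C sketch of the placement rule I have in mind. None of the types or helpers
below (vcpu_stub, pcpu_stub, pick_pcpu, the anti_affinity flag) are real Xen
interfaces; they are made-up stand-ins just to illustrate "prefer the pCPU
with the fewest flagged vCPUs, with a small soft cap", including the relaxed
"allow one, but not more" variant:

/*
 * Hypothetical, simplified sketch -- NOT the Xen scheduler.  All types
 * and helpers here are stand-ins invented for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PCPUS 4

struct vcpu_stub {
    int id;
    bool anti_affinity;      /* the proposed flag */
};

struct pcpu_stub {
    int id;
    int nr_anti_affinity;    /* flagged vCPUs currently placed here */
};

static struct pcpu_stub pcpus[NR_PCPUS];

/*
 * Pick a pCPU for @v.  Flagged vCPUs prefer the pCPU with the fewest
 * flagged vCPUs already on it; @max_flagged is the soft cap expressing
 * the "allow one 'prev', but not more than two" relaxation.
 */
static struct pcpu_stub *pick_pcpu(const struct vcpu_stub *v, int max_flagged)
{
    struct pcpu_stub *best = &pcpus[0];
    int i;

    if ( !v->anti_affinity )
        return best;              /* 'normal' guests: no constraint here */

    for ( i = 1; i < NR_PCPUS; i++ )
        if ( pcpus[i].nr_anti_affinity < best->nr_anti_affinity )
            best = &pcpus[i];

    if ( best->nr_anti_affinity >= max_flagged )
        printf("vCPU%d: every pCPU is at the cap, co-locating anyway\n",
               v->id);

    return best;
}

int main(void)
{
    struct vcpu_stub v0 = { .id = 0, .anti_affinity = true };
    struct vcpu_stub v1 = { .id = 1, .anti_affinity = true };
    struct pcpu_stub *p;
    int i;

    for ( i = 0; i < NR_PCPUS; i++ )
        pcpus[i].id = i;

    p = pick_pcpu(&v0, 1);
    p->nr_anti_affinity++;
    printf("vCPU0 -> pCPU%d\n", p->id);

    p = pick_pcpu(&v1, 1);
    p->nr_anti_affinity++;
    printf("vCPU1 -> pCPU%d\n", p->id);   /* lands on a different pCPU */

    return 0;
}

Compiled standalone, the two flagged vCPUs end up on different pCPUs;
whether the cap should be 1 or 2, and how any of this would interact with
the real scheduler's load balancing, is exactly the part I am handwaving
about.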

/me goes off to the pub.
