On Wed, Sep 30, 2020 at 12:57:31PM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger....@citrix.com>
> > Sent: 30 September 2020 11:41
> > To: xen-devel@lists.xenproject.org
> > Cc: Roger Pau Monne <roger....@citrix.com>; Jan Beulich <jbeul...@suse.com>;
> > Andrew Cooper <andrew.coop...@citrix.com>; Wei Liu <w...@xen.org>;
> > Paul Durrant <p...@xen.org>
> > Subject: [PATCH v2 04/11] x86/vmsi: use the newly introduced EOI callbacks
> > 
> > Remove the unconditional call to hvm_dpci_msi_eoi in vlapic_handle_EOI
> > and instead use the newly introduced EOI callback mechanism in order
> > to register a callback for MSI vectors injected from passed through
> > devices.
> > 
> > Signed-off-by: Roger Pau Monné <roger....@citrix.com>
> > ---
> >  xen/arch/x86/hvm/vlapic.c    |  2 --
> >  xen/arch/x86/hvm/vmsi.c      | 36 ++++++++++++++++++++++--------------
> >  xen/drivers/passthrough/io.c |  2 +-
> >  xen/include/asm-x86/hvm/io.h |  2 +-
> >  4 files changed, 24 insertions(+), 18 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> > index 38c62a02e6..8a18b33428 100644
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -496,8 +496,6 @@ void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >          vioapic_update_EOI(vector);
> > 
> > -    hvm_dpci_msi_eoi(vector);
> > -
> >      spin_lock_irqsave(&vlapic->callback_lock, flags);
> >      callback = vlapic->callbacks[index].callback;
> >      vlapic->callbacks[index].callback = NULL;
> > diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> > index 7ca19353ab..e192c4c6da 100644
> > --- a/xen/arch/x86/hvm/vmsi.c
> > +++ b/xen/arch/x86/hvm/vmsi.c
> > @@ -44,11 +44,9 @@
> >  #include <asm/event.h>
> >  #include <asm/io_apic.h>
> > 
> > -static void vmsi_inj_irq(
> > -    struct vlapic *target,
> > -    uint8_t vector,
> > -    uint8_t trig_mode,
> > -    uint8_t delivery_mode)
> > +static void vmsi_inj_irq(struct vlapic *target, uint8_t vector,
> > +                         uint8_t trig_mode, uint8_t delivery_mode,
> > +                         vlapic_eoi_callback_t *callback, void *data)
> >  {
> >      HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "vmsi_inj_irq: vec %02x trig %d dm %d\n",
> >                  vector, trig_mode, delivery_mode);
> > @@ -57,17 +55,17 @@ static void vmsi_inj_irq(
> >      {
> >      case dest_Fixed:
> >      case dest_LowestPrio:
> > -        vlapic_set_irq(target, vector, trig_mode);
> > +        vlapic_set_irq_callback(target, vector, trig_mode, callback, data);
> >          break;
> >      default:
> >          BUG();
> >      }
> >  }
> > 
> > -int vmsi_deliver(
> > -    struct domain *d, int vector,
> > -    uint8_t dest, uint8_t dest_mode,
> > -    uint8_t delivery_mode, uint8_t trig_mode)
> > +static int vmsi_deliver_callback(struct domain *d, int vector, uint8_t dest,
> > +                                 uint8_t dest_mode, uint8_t delivery_mode,
> > +                                 uint8_t trig_mode,
> > +                                 vlapic_eoi_callback_t *callback, void *data)
> >  {
> >      struct vlapic *target;
> >      struct vcpu *v;
> > @@ -78,7 +76,8 @@ int vmsi_deliver(
> >          target = vlapic_lowest_prio(d, NULL, 0, dest, dest_mode);
> >          if ( target != NULL )
> >          {
> > -            vmsi_inj_irq(target, vector, trig_mode, delivery_mode);
> > +            vmsi_inj_irq(target, vector, trig_mode, delivery_mode, callback,
> > +                         data);
> >              break;
> >          }
> >          HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "null MSI round robin: vector=%02x\n",
> > @@ -89,8 +88,8 @@ int vmsi_deliver(
> >          for_each_vcpu ( d, v )
> >              if ( vlapic_match_dest(vcpu_vlapic(v), NULL,
> >                                     0, dest, dest_mode) )
> > -                vmsi_inj_irq(vcpu_vlapic(v), vector,
> > -                             trig_mode, delivery_mode);
> > +                vmsi_inj_irq(vcpu_vlapic(v), vector, trig_mode, delivery_mode,
> > +                             callback, data);
> >          break;
> > 
> >      default:
> > @@ -103,6 +102,14 @@ int vmsi_deliver(
> >      return 0;
> >  }
> > 
> > +
> 
> Stray blank line
> 
> > +int vmsi_deliver(struct domain *d, int vector, uint8_t dest, uint8_t dest_mode,
> > +                 uint8_t delivery_mode, uint8_t trig_mode)
> > +{
> > +    return vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode,
> > +                                 trig_mode, NULL, NULL);
> > +}
> > +
> >  void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
> >  {
> >      uint32_t flags = pirq_dpci->gmsi.gflags;
> > @@ -119,7 +126,8 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
> > 
> >      ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
> > 
> > -    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
> > +    vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode, trig_mode,
> > +                          hvm_dpci_msi_eoi, NULL);
> 
> I think there are more efficiencies possible here. E.g. if
> is_hardware_domain(d) is true then hvm_dpci_msi_eoi() will do nothing, so
> no point in setting up the callback in that case.
No, I don't think that's true: hvm_dpci_msi_eoi will still execute the call
to pt_pirq_iterate and _hvm_dpci_msi_eoi for the hardware domain, because
MSI vectors need to be acked on the physical lapic even in the hardware
domain case. This was fixed by commit 7b653a245ff.

Thanks, Roger.