On 2020-11-23 06:54, Shenming Lu wrote:
> From: Zenghui Yu <yuzeng...@huawei.com>

> When setting the forwarding path of a VLPI, it is more consistent to

I'm not sure it is more consistent. It is a *new* behaviour, because it
only matters for migration, which has so far been unsupported.

> also transfer the pending state from irq->pending_latch to the VPT
> (especially for migration, where the pending states of VLPIs are first
> restored into KVM's vgic). And we currently send "INT+VSYNC" to make a
> VLPI pending.

> Signed-off-by: Zenghui Yu <yuzeng...@huawei.com>
> Signed-off-by: Shenming Lu <lushenm...@huawei.com>
> ---
>  arch/arm64/kvm/vgic/vgic-v4.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)

> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> index b5fa73c9fd35..cc3ab9cea182 100644
> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> @@ -418,6 +418,18 @@ int kvm_vgic_v4_set_forwarding(struct kvm *kvm, int virq,
>         irq->host_irq        = virq;
>         atomic_inc(&map.vpe->vlpi_count);
>
> +       /* Transfer pending state */
> +       ret = irq_set_irqchip_state(irq->host_irq,
> +                                   IRQCHIP_STATE_PENDING,
> +                                   irq->pending_latch);
> +       WARN_RATELIMIT(ret, "IRQ %d", irq->host_irq);
> +
> +       /*
> +        * Let it be pruned from ap_list later and don't bother
> +        * the List Register.
> +        */
> +       irq->pending_latch = false;

It occurs to me that calling into irq_set_irqchip_state() for a large
number of interrupts can take a significant amount of time. It is also
odd that you dump the VPT with the VPE unmapped, but rely on the VPE
being mapped for the opposite operation.

Shouldn't these be symmetric, all performed while the VPE is unmapped?
It would also save a lot of ITS traffic.
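
To make the suggestion concrete, here is a rough sketch (untested, and
purely illustrative; the helper is hypothetical) of a direct VPT update.
It assumes the vPE is unmapped, so the redistributor does not own the
table, and it omits the cache maintenance a real implementation would
need before mapping the vPE back:

  #include <linux/irqchip/arm-gic-v4.h>
  #include <linux/mm.h>

  /*
   * Hypothetical helper: mark @intid pending by setting its bit
   * directly in the vPE's virtual pending table (one bit per INTID,
   * same layout as a physical LPI pending table). Only safe while
   * the vPE is unmapped; the caller would also have to clean the
   * affected line to PoC so the GIC observes the update once the
   * vPE is mapped again.
   */
  static void vpt_set_pending(struct its_vpe *vpe, u32 intid)
  {
  	u8 *vpt = page_address(vpe->vpt_page);

  	vpt[intid / 8] |= 1U << (intid % 8);
  }

Batching the restore that way would save one INT+VSYNC round-trip per
interrupt and keep the save and restore paths symmetric.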

> +
>  out:
>         mutex_unlock(&its->its_lock);
>         return ret;

Thanks,

        M.
--
Jazz is not dead. It just smells funny...
