Hi Will,

No worries, I was busy until a couple days ago anyway.

On 25/03/2021 04.57, Will Deacon wrote:
+#define pr_fmt(fmt) "%s: " fmt, __func__

General nit: but I suspect many of the prints in here probably want to be
using the *_ratelimited variants.

You're right, everything complaining about spurious/unsupported stuff is probably safer rate-limited. I also made them all pr_err_ratelimited, since nothing should be trying to use that hardware before we get around to supporting it in this driver.

+static void __exception_irq_entry aic_handle_irq(struct pt_regs *regs)
+{
+       struct aic_irq_chip *ic = aic_irqc;
+       u32 event, type, irq;
+
+       do {
+               /*
+                * We cannot use a relaxed read here, as DMA needs to be
+                * ordered with respect to the IRQ firing.
+                */

I think this could be a bit clearer: the readl() doesn't order any DMA
accesses, but instead means that subsequent reads by the CPU (which may be
from a buffer which was DMA'd to) are ordered after the read of the MMIO
register.

Good point, reworded it for v4:

/*
 * We cannot use a relaxed read here, as reads from DMA buffers
 * need to be ordered after the IRQ fires.
 */


+               event = readl(ic->base + AIC_EVENT);
+               type = FIELD_GET(AIC_EVENT_TYPE, event);
+               irq = FIELD_GET(AIC_EVENT_NUM, event);
+
+               if (type == AIC_EVENT_TYPE_HW)
+                       handle_domain_irq(aic_irqc->hw_domain, irq, regs);
+               else if (type == AIC_EVENT_TYPE_IPI && irq == 1)
+                       aic_handle_ipi(regs);
+               else if (event != 0)
+                       pr_err("Unknown IRQ event %d, %d\n", type, irq);
+       } while (event);
+
+       /*
+        * vGIC maintenance interrupts end up here too, so we need to check
+        * for them separately. Just report and disable vGIC for now, until
+        * we implement this properly.
+        */
+       if ((read_sysreg_s(SYS_ICH_HCR_EL2) & ICH_HCR_EN) &&
+               read_sysreg_s(SYS_ICH_MISR_EL2) != 0) {
+               pr_err("vGIC IRQ fired, disabling.\n");
+               sysreg_clear_set_s(SYS_ICH_HCR_EL2, ICH_HCR_EN, 0);
+       }

What prevents all these system register accesses being speculated up before
the handler?

Nothing, but that's not a problem, is it? If the condition is met, it means the vGIC IRQ *is* firing and needs clearing. We don't particularly care if this happens before, after, or during the rest of the IRQ handling.

I changed the message to this, because we actually should never hit this path with correctly-working KVM code (it takes care of it before this handler runs):

pr_err_ratelimited("vGIC IRQ fired and not handled by KVM, disabling.\n");

+static struct irq_chip aic_chip = {
+       .name = "AIC",
+       .irq_mask = aic_irq_mask,
+       .irq_unmask = aic_irq_unmask,

I know these are driven by the higher-level irq chip code, but I'm a bit
confused as to what provides ordering if, e.g. something ends up calling:

        aic_chip.irq_mask(d);
        ...
        aic_chip.irq_unmask(d);

I can't see any ISBs in here and they're writing to two different registers,
so can we end up with the IRQ masked after this sequence?

Wait, aren't MMIO writes to the same peripheral using the Device-nGnRnE memory type always ordered with respect to each other? I thought the _relaxed versions were only trouble when mixed with memory/DMA buffers, and that MMIO for any given peripheral always takes effect in program order.

+static void aic_ipi_mask(struct irq_data *d)
+{
+       u32 irq_bit = BIT(irqd_to_hwirq(d));
+       int this_cpu = smp_processor_id();
+
+       /* No specific ordering requirements needed here. */
+       atomic_andnot(irq_bit, &aic_vipi_enable[this_cpu]);
+}

Why not use a per-cpu variable here instead of an array of atomics? The pcpu
API has things for atomic updates (e.g. or, and, xchg).

One CPU still needs to be able to mutate the flags of another CPU to fire an IPI; AIUI the per-cpu ops are *not* atomic for concurrent access by multiple CPUs, and in fact there is no API for that, only for "this CPU".

+static void aic_ipi_unmask(struct irq_data *d)
+{
+       struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);
+       u32 irq_bit = BIT(irqd_to_hwirq(d));
+       int this_cpu = smp_processor_id();
+
+       /*
+        * This must complete before the atomic_read_acquire() below to avoid
+        * racing aic_ipi_send_mask(). Use a dummy fetch op with release
+        * semantics for this. This is arch-specific: ARMv8 B2.3.3 specifies
+        * that writes with Release semantics are Barrier-ordered-before reads
+        * with Acquire semantics, even though the Linux arch-independent
+        * definition of these atomic ops does not.
+        */

I think a more idiomatic (and portable) way to do this would be to use
the relaxed accessors, but with smp_mb__after_atomic() between them. Do you
have a good reason for _not_ doing it like that?

Not particularly, other than symmetry with the case below.

+               /*
+                * This sequence is the mirror of the one in aic_ipi_unmask();
+                * see the comment there. Additionally, release semantics
+                * ensure that the vIPI flag set is ordered after any shared
+                * memory accesses that precede it. This therefore also pairs
+                * with the atomic_fetch_andnot in aic_handle_ipi().
+                */
+               pending = atomic_fetch_or_release(irq_bit, &aic_vipi_flag[cpu]);

We do need the return data here, and the release semantics (or another barrier before it). But the read below can be made relaxed and a barrier used instead, and then the same pattern applies above, except with a plain atomic_or().

+               if (!(pending & irq_bit) &&
+                   (atomic_read_acquire(&aic_vipi_enable[cpu]) & irq_bit))
+                       send |= AIC_IPI_SEND_CPU(cpu);
+       }

[...]

+       /*
+        * Clear the IPIs we are about to handle. This pairs with the
+        * atomic_fetch_or_release() in aic_ipi_send_mask(), and needs to be
+        * ordered after the aic_ic_write() above (to avoid dropping vIPIs) and
+        * before IPI handling code (to avoid races handling vIPIs before they
+        * are signaled). The former is taken care of by the release semantics
+        * of the write portion, while the latter is taken care of by the
+        * acquire semantics of the read portion.
+        */
+       firing = atomic_fetch_andnot(enabled, &aic_vipi_flag[this_cpu]) &
+                enabled;

Does this also need to be ordered after the Ack? For example, if we have
something like:

CPU 0                                           CPU 1
                                                <some other IPI>
aic_ipi_send_mask()
                                                atomic_fetch_andnot(flag)
        atomic_fetch_or_release(flag)
        aic_ic_write(AIC_IPI_SEND)
                                                aic_ic_write(AIC_IPI_ACK)

sorry if it's a stupid question, I'm just not sure about the cases in which
the hardware will pend things for you.

It is ordered, right? As the comment says, it "needs to be ordered after the aic_ic_write() above". atomic_fetch_andnot() is *supposed* to be fully ordered and that should include against the writel_relaxed() on AIC_IPI_FLAG. On ARM it turns out it's not quite fully ordered, but the acquire semantics of the read half are sufficient for this case, as they guarantee the flags are always read after the FIQ has been ACKed.

Cheers,
--
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub
