On Mon, Mar 12, 2007 at 12:05:46PM -0700, Mark Huth wrote:

> Since the interrupts are enabled as the NAPI-callback exits, and the
> interrupts are disabled in the isr after the callback is scheduled, this
> fully avoids the potential race conditions, and requires no locking. If
I've benchmarked both approaches and they appear to be much of a
muchness for performance in my tests, so I've updated my patch to use
this approach instead.  Thanks for the suggestion, it is simpler.

> If you would like me to prepare formal patches based on any of these
> concepts, let me know.

That's OK - unless there are any further suggestions I'll submit the
patch below after further testing.  The improvements to the trace and
the correctness of the interrupt handler seem useful.

Subject: natsemi: Fix NAPI for interrupt sharing

The interrupt status register for the natsemi chips is clear on read
and was read unconditionally from both the interrupt handler and the
NAPI poll routine, meaning that if the interrupt service routine was
called (for example, due to a shared interrupt) while a NAPI poll was
scheduled, interrupts could be missed.  This patch fixes that by
ensuring that the interrupt status register is only read by the
interrupt handler when interrupts are enabled from the chip.  It also
reverts a workaround for this problem from the netpoll hook and
improves the trace for interrupt events.

Thanks to Sergei Shtylyov <[EMAIL PROTECTED]> for spotting the issue,
Mark Huth <[EMAIL PROTECTED]> for a simpler method and Simon Blake
<[EMAIL PROTECTED]> for testing resources.

Signed-off-by: Mark Brown <[EMAIL PROTECTED]>

Index: linux-2.6/drivers/net/natsemi.c
===================================================================
--- linux-2.6.orig/drivers/net/natsemi.c	2007-03-11 02:32:43.000000000 +0000
+++ linux-2.6/drivers/net/natsemi.c	2007-03-13 00:12:29.000000000 +0000
@@ -2119,10 +2119,12 @@
 	struct netdev_private *np = netdev_priv(dev);
 	void __iomem * ioaddr = ns_ioaddr(dev);
 
-	if (np->hands_off)
+	/* Reading IntrStatus automatically acknowledges so don't do
+	 * that while interrupts are disabled, (for example, while a
+	 * poll is scheduled). */
+	if (np->hands_off || !readl(ioaddr + IntrEnable))
 		return IRQ_NONE;
 
-	/* Reading automatically acknowledges. */
 	np->intr_status = readl(ioaddr + IntrStatus);
 
 	if (netif_msg_intr(np))
@@ -2131,17 +2133,23 @@
 			dev->name, np->intr_status,
 			readl(ioaddr + IntrMask));
 
-	if (!np->intr_status)
-		return IRQ_NONE;
-
-	prefetch(&np->rx_skbuff[np->cur_rx % RX_RING_SIZE]);
+	if (np->intr_status) {
+		prefetch(&np->rx_skbuff[np->cur_rx % RX_RING_SIZE]);
 
-	if (netif_rx_schedule_prep(dev)) {
 		/* Disable interrupts and register for poll */
-		natsemi_irq_disable(dev);
-		__netif_rx_schedule(dev);
+		if (netif_rx_schedule_prep(dev)) {
+			natsemi_irq_disable(dev);
+			__netif_rx_schedule(dev);
+		} else
+			printk(KERN_WARNING
+			       "%s: Ignoring interrupt, status %#08x, mask %#08x.\n",
+			       dev->name, np->intr_status,
+			       readl(ioaddr + IntrMask));
+
+		return IRQ_HANDLED;
 	}
-	return IRQ_HANDLED;
+
+	return IRQ_NONE;
 }
 
 /* This is the NAPI poll routine.  As well as the standard RX handling
@@ -2156,6 +2164,12 @@
 	int work_done = 0;
 
 	do {
+		if (netif_msg_intr(np))
+			printk(KERN_DEBUG
+			       "%s: Poll, status %#08x, mask %#08x.\n",
+			       dev->name, np->intr_status,
+			       readl(ioaddr + IntrMask));
+
 		if (np->intr_status &
 		    (IntrTxDone | IntrTxIntr | IntrTxIdle | IntrTxErr)) {
 			spin_lock(&np->lock);
@@ -2399,19 +2413,8 @@
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void natsemi_poll_controller(struct net_device *dev)
 {
-	struct netdev_private *np = netdev_priv(dev);
-
 	disable_irq(dev->irq);
-
-	/*
-	 * A real interrupt might have already reached us at this point
-	 * but NAPI might still haven't called us back. As the interrupt
-	 * status register is cleared by reading, we should prevent an
-	 * interrupt loss in this case...
-	 */
-	if (!np->intr_status)
-		intr_handler(dev->irq, dev);
-
+	intr_handler(dev->irq, dev);
 	enable_irq(dev->irq);
 }
 #endif

-- 
"You grabbed my hand and we fell into it, like a daydream - or a fever."