From: Krishna Kumar <[EMAIL PROTECTED]>
Date: Wed, 14 Nov 2007 11:34:12 +0530

> Hi Peter,
>
> Peter wrote on 11/13/2007 11:14:50 PM:
>
> > @@ -134,7 +134,7 @@ static inline int qdisc_restart(struct net_device *dev)
> >  {
> >  	struct Qdisc *q = dev->qdisc;
> >  	struct sk_buff *skb;
> > -	int ret;
> > +	int ret = NETDEV_TX_BUSY;
> >
> >  	/* Dequeue packet */
> >  	if (unlikely((skb = dev_dequeue_skb(dev, q)) == NULL))
> >
> > @@ -145,7 +145,8 @@ static inline int qdisc_restart(struct net_device *dev)
> >  	spin_unlock(&dev->queue_lock);
> >
> >  	HARD_TX_LOCK(dev, smp_processor_id());
> > -	ret = dev_hard_start_xmit(skb, dev);
> > +	if (!netif_subqueue_stopped(dev, skb))
> > +		ret = dev_hard_start_xmit(skb, dev);
> >  	HARD_TX_UNLOCK(dev);
>
> You could optimize this by taking HARD_TX_LOCK after the check. I
> assume that netif_stop_subqueue (from another CPU) would always be
> called from the driver's xmit routine, and that cannot happen while
> we hold the __LINK_STATE_QDISC_RUNNING bit. Does that sound correct?

I don't think this is a critical optimization at this time, but it is
certainly something to do along with the surgery we'll undoubtedly be
doing here in the future :-)