From: jamal <[EMAIL PROTECTED]>
Date: Mon, 08 Oct 2007 16:48:50 -0400

> On Mon, 2007-08-10 at 12:46 -0700, Waskiewicz Jr, Peter P wrote:
> > I still have concerns about how this will work with Tx multiqueue.
> > The way the batching code looks right now, you will probably send a
> > batch of skbs from multiple bands from PRIO or RR to the driver. For
> > non-Tx-multiqueue drivers, this is fine. For Tx multiqueue drivers,
> > it isn't, since the Tx ring is selected by the value of
> > skb->queue_mapping (set by the qdisc in {prio|rr}_classify()). If the
> > whole batch comes in with different queue_mappings, this could prove
> > to be an interesting issue.
>
> True, that needs some resolution. Here's a hand-waving thought:
> assuming all packets of a specific mapping end up in the same qdisc
> queue, it seems feasible to ask the qdisc scheduler to give us enough
> packages (I've seen people use that term to refer to packets) for each
> hardware ring's available space. With the patches I posted, I do that
> via dev->xmit_win, which assumes only one view of the driver;
> essentially a single ring.
> If that is doable, then it is up to the driver to say
> "I have space for 5 in ring[0], 10 in ring[1], 0 in ring[2]", based on
> whatever scheduling scheme the driver implements; dev->blist can stay
> the same. It's a hand-wave, so there may be issues, and there could be
> better ways to handle this.

Add xmit_win to struct net_device_subqueue, problem solved.