On Tue, 2006-12-26 at 23:06 +0100, Arjan van de Ven wrote:

> it is; that's why irqbalance tries really hard (with a few very rare
> exceptions) to keep networking irqs to the same cpu all the time...
> 

The problem with irqbalance, when I last used it, is that it doesn't
take CPU utilization into consideration.
With NAPI, if I see only a few interrupts it likely implies I have a
huge network load (and therefore CPU use), and I would be much happier
if you didn't start moving more interrupt load onto that already-loaded
CPU....
So if you start considering CPU load sampled over a period of time, you
could make some progress.
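
As a rough illustration of the kind of sampling I mean, aggregate CPU
utilization over a window can be derived from two reads of /proc/stat.
This is only a sketch (field layout per proc(5)); a balancer would of
course want per-CPU rather than aggregate figures:

```shell
#!/bin/sh
# Read the aggregate "cpu" line twice, one second apart.
# /proc/stat layout: cpu user nice system idle iowait irq softirq ...
read -r _ u1 n1 s1 i1 rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 rest < /proc/stat

# Busy time = delta of user+nice+system jiffies; total = busy + idle.
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
[ "$total" -gt 0 ] || total=1   # guard against a zero-length window

pct=$(( 100 * busy / total ))
echo "cpu utilization over the window: ${pct}%"
```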

> but if your use case is kernel level packet processing of < MTU packets
> then I can see why you would at some point would run out of cpu
> power ... 

Of course, otherwise there would not be much value in "balancing" ..

Note: < MTU sized packets are not unusual for firewall/router middle
boxen, and there's plenty of those out there. The same goes these days
for VoIP endpoints (RTP and SIP), which may process such packets in
user space (and handle thousands of such flows).
Additional note: the average packet size on the internet today (and for
many years now) is way below your standard ethernet MTU of 1500 bytes.
 
> esp on multicore where you share the cache between cores you
> probably can do a little better for that very specific use case.

Indeed - that's why I proposed to tie the IRQs statically. Modern
machines have much larger caches, so a static config is less of a
nuisance.
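
For reference, tying an IRQ to a CPU statically goes through the
smp_affinity bitmask in procfs. A minimal sketch - the IRQ number 24
and the CPU choice are made-up examples here; check /proc/interrupts
for the NIC's actual IRQ on your box:

```shell
# Hypothetical example: pin IRQ 24 (assumed to be the NIC's IRQ, per
# /proc/interrupts) to CPU2. smp_affinity is a hex bitmask of allowed
# CPUs, so bit 2 set => 0x4. Needs root, and irqbalance should be
# stopped first or it may move the IRQ again.
echo 4 > /proc/irq/24/smp_affinity

# Verify the kernel accepted the mask.
cat /proc/irq/24/smp_affinity
```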

cheers,
jamal
