> The issue with cutting interrupts below 1 per packet is that most network
> controllers use ring structures and you have to guarantee that the ring is
> not filled or you lose data. You also typically have multiple high speed
If you are overloaded you want to lose packets. It's good for you as an
end host, less ideal for a router, but still fine.
> forces. If you force interrupt delays to allow for multiple events per
> interrupt then you introduce latencies into the data stream (which end users
> don't like even if they are very small). So you are robbing Peter to pay Paul.
Indeed. Gigabit ethernet and many 100Mbit ethernet cards do this, however,
and it is a big win. The exchange rate between Peter and Paul is rather
favourable for most applications.
The biggest win is cutting down transmit-complete interrupt overhead. That
is basically free given enough buffer memory and adds almost no latency,
if any at all.
> The best strategy (so far) is to process data as it arrives and hope that
> our buddies at Intel can produce faster processors as data rates increase.
Faster memory busses and faster APIC message busses. The CPU, it seems,
isn't generally the bottleneck right now.
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to [EMAIL PROTECTED]