At 07:56 PM 2/28/00 +0000, Alan Cox wrote:
>> The issue with cutting interrupts below 1 per packet is that most network
>> controllers use ring structures and you have to guarantee that the ring is
>> not filled or you lose data. You also typically have multiple high speed
>
>If you are overloaded you want to lose packets. It's good for you as an
>end host, less hot as a router but still fine.
As a host you take half the interrupts, so it is less critical. The problem
with your "point" is that the definition of "overload" changes if you don't
process interrupts. By disabling the per-packet interrupt you are effectively
creating an overload yourself, because the data is not processed as it
arrives: you discard data that didn't need to be discarded in order to save
CPU cycles that may well not have needed saving.
Real overloads take care of themselves in the form of ring overruns, and in
most cases you can tune the ring sizes as well. Causing overloads by failing
to process the data properly is not a "win".
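To make the ring-overrun point concrete, here is a minimal C sketch of the
receive side. Every name in it (rx_desc, DESC_OWNED_BY_HW, deliver_packet,
the ring size) is invented for illustration and does not come from any real
driver:

#include <stdint.h>

#define RX_RING_SIZE 256   /* tunable, as noted above */

struct rx_desc {
    volatile uint32_t status;  /* DESC_OWNED_BY_HW until the NIC fills it */
    volatile uint32_t length;
    uint8_t          *buffer;
};

#define DESC_OWNED_BY_HW 0x80000000u

static struct rx_desc rx_ring[RX_RING_SIZE];
static unsigned int   rx_next;   /* next descriptor to service */

extern void deliver_packet(uint8_t *buf, uint32_t len); /* stack entry, assumed */

/* RX interrupt handler.  With interrupt mitigation one interrupt may
 * cover many frames, so drain every completed descriptor in one pass. */
static void rx_service(void)
{
    while (!(rx_ring[rx_next].status & DESC_OWNED_BY_HW)) {
        deliver_packet(rx_ring[rx_next].buffer, rx_ring[rx_next].length);

        /* Return the slot to the NIC promptly: if the host falls
         * behind, the hardware finds no free slot and drops the
         * frame -- the ring overrun described above. */
        rx_ring[rx_next].status = DESC_OWNED_BY_HW;
        rx_next = (rx_next + 1) % RX_RING_SIZE;
    }
}

Enlarging RX_RING_SIZE only buys time; what decides whether frames are lost
is how fast rx_service() runs relative to the arrival rate.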
>Indeed. Gigabit Ethernet and many 100Mbit Ethernet cards do this however, and
>it is a big win. The exchange rate between Peter and Paul is rather
>favourable for most applications.
>The biggest win is cutting down transmit complete interrupt overhead. That
>is basically free given enough buffer memory and adds almost no latency if
>any at all.
its "free" if you can keep your transmit ring full without them. Perhaps on
ethernet mediums where you have forced gaps this "philosophy" is more
plausible, but on a serial medium with continuous single or double flag
separation you are losing bandwidth if you introduce gaps due to delays in
filling the buffers. Several frames can be transmited faster than you can
get an interrupt and fill the buffer at very high speed, so you need to
keep a few available at all times to fill the bandwidth.
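As a rough illustration of that transmit-side trade-off (again with made-up
names, not any real driver's API), a path that batches completion interrupts
could look like this; the point is that TX_IRQ_BATCH has to be small enough
that the refill interrupt lands before the hardware drains the ring:

#include <stdint.h>
#include <stdbool.h>

#define TX_RING_SIZE 64
#define TX_IRQ_BATCH  8   /* one completion interrupt per 8 frames */

struct tx_desc {
    volatile uint32_t status;  /* DESC_OWNED_BY_HW while queued */
    uint8_t          *buffer;
    uint32_t          length;
    uint32_t          flags;   /* DESC_IRQ_ON_DONE requests an interrupt */
};

#define DESC_OWNED_BY_HW 0x80000000u
#define DESC_IRQ_ON_DONE 0x00000001u

static struct tx_desc tx_ring[TX_RING_SIZE];
static unsigned int   tx_head, tx_tail;  /* producer / consumer counters */

/* Queue one frame, requesting a completion interrupt only on every
 * TX_IRQ_BATCH-th descriptor instead of on all of them. */
static bool tx_queue_frame(uint8_t *buf, uint32_t len)
{
    struct tx_desc *d;

    if (tx_head - tx_tail == TX_RING_SIZE)
        return false;                    /* ring full; caller retries */

    d = &tx_ring[tx_head % TX_RING_SIZE];
    d->buffer = buf;
    d->length = len;
    d->flags  = (tx_head % TX_IRQ_BATCH == 0) ? DESC_IRQ_ON_DONE : 0;
    d->status = DESC_OWNED_BY_HW;        /* hand to hardware last */
    tx_head++;
    return true;
}

/* Completion interrupt: reclaim every finished descriptor at once. */
static void tx_complete_irq(void)
{
    while (tx_tail != tx_head &&
           !(tx_ring[tx_tail % TX_RING_SIZE].status & DESC_OWNED_BY_HW))
        tx_tail++;
}

On Ethernet the forced gaps give you slack to refill late; on a
back-to-back-flag serial link, if the ring empties before tx_complete_irq()
runs and tops it up, the idle flags are bandwidth you never get back.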
dennis