At 07:50 AM 2/28/00 -0800, Stephen Satchell wrote:
>At 07:31 AM 2/28/00 -0500, jamal wrote:
>>The packet rate is the best measure. But the real McCoy is to
>>see how many interrupts/sec your system can withstand.
>>Traditionally, people map a single packet arrival to one interrupt.
>>Looking at the interrupts per second (as with vmstat -i) is definitely
>>one way to measure things. At high packet rates it becomes too
>>burdensome on the system. One solution is to have multiple packets
>>give you one interrupt. This is, of course, at the expense of
>>increased per-node processing latency.
>
>Again, I beg to differ.  Having worked on the design of high-speed
>communications systems, and in particular having dealt with real-time
>issues, I can tell you that limiting the interrupt count can be very,
>very important.  On an early microcomputer system, we had to deal with
>exactly the sort of issue you are describing.
>
>The "network" was a 13-wire parallel-bus system (not unlike IEEE-488) that 
>could transfer several megabytes per second among several machines, using 

[snip]
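To make the tradeoff concrete, the multiple-packets-per-interrupt
scheme jamal describes looks roughly like the loop below. This is only
a sketch against an invented descriptor layout (rx_desc, owned_by_nic
and deliver_packet are made-up names, not any real controller's
interface):

#include <stddef.h>

#define RING_SIZE 256          /* descriptors in the rx ring */

struct rx_desc {
    volatile int owned_by_nic; /* 1 while the NIC owns the slot */
    unsigned char *buf;        /* received packet data */
    size_t len;                /* bytes received */
};

struct rx_ring {
    struct rx_desc desc[RING_SIZE];
    unsigned int next;         /* next descriptor to service */
};

extern void deliver_packet(unsigned char *buf, size_t len);

/* Called once per hardware interrupt.  Instead of handling a single
 * packet, drain every descriptor the NIC has completed, so N packet
 * arrivals cost one interrupt instead of N.
 */
void rx_interrupt(struct rx_ring *ring)
{
    while (!ring->desc[ring->next].owned_by_nic) {
        struct rx_desc *d = &ring->desc[ring->next];

        deliver_packet(d->buf, d->len);

        d->owned_by_nic = 1;   /* hand the slot back to the NIC */
        ring->next = (ring->next + 1) % RING_SIZE;
    }
    /* acknowledge the interrupt to the controller here */
}
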

The issue with cutting interrupts below one per packet is that most
network controllers use ring structures, and you have to guarantee that
the ring never fills or you lose data. You also typically have multiple
high-speed devices on the bus that need to be serviced, so you really
need to get the data out of the ring quickly, since interrupts may be
delayed by outside forces. If you hold off interrupts to allow multiple
events per interrupt, you introduce latency into the data stream (which
end users don't like, even when it is small). So you are robbing Peter
to pay Paul.
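Some quick numbers show how tight this gets (assuming minimum-size
frames at 100 Mbit/s wire rate): a 64-byte frame plus preamble and
inter-frame gap occupies 84 bytes on the wire, which works out to about
148,800 frames per second, or one frame every 6.72 microseconds. A
256-entry ring fills completely in about 1.7 milliseconds, and that is
your hard deadline for servicing the device. Meanwhile, if you batch,
say, 32 packets per interrupt, the first packet of each batch can sit
in the ring for up to 32 * 6.72 = ~215 microseconds before software
even sees it.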

The best strategy (so far) is to process data as it arrives and hope that
our buddies at Intel can produce faster processors as data rates increase.

dennis
