At 07:31 AM 2/28/00 -0500, jamal wrote:
>The packet rate is the best measure. But the real McCoy is to
>see how many interrupts/sec your system can withstand.
>Traditionally people map a single packet arrival to an interrupt. Looking at
>the interrupts per sec (as in using vmstat -i) is definitely one way to
>measure things. At high packet rates it becomes too burdensome on the
>system. One solution is to have multiple packets giving you one
>interrupt. This is of course at the expense of increasing your per-node
>processing latency.
Again, I beg to differ. Having worked on the design of high-speed
communications systems, and in particular on their real-time issues, I
can tell you that limiting the interrupt count can be very, very
important. On an early microcomputer system, we had to deal with exactly
the sort of issue you are talking about.
The "network" was a 13-wire parallel-bus system (not unlike IEEE-488) that
could transfer several megabytes per second among several machines, using
an on-board buffer so we could overlap transfer with other activity. In
our first attempt, we would generate a machine interrupt at the end of each
transfer, which resulted in an interrupt blizzard. Our second attempt was
to hook the real-time interrupt, but the 100 ms tick was too long. The
final result was to put an NE555 timer with a potentiometer ("pot") on the
board, plus a flip-flop ("flop"). When the board decided it needed to
interrupt, it would set the flop. When the NE555 fired, it would trigger
the interrupt and reset the flop. The software protocol ensured the
on-board buffer was never overrun.
By fiddling with the pot, we were able to literally tune the boards to
provide the best response without the interrupt blizzard. As I recall, we
ended up with those pots set to generate a maximum of 350 interrupts per
second.
The interrupt routine, when fired, would process the multiple packets
sitting in the buffer. This processing routine would also be fired when
the driver was called to transmit data. Because of the size of the buffer,
we could have as many as ten inbound packets sitting in the input buffer
when the routine finally got around to emptying it.
Another real-time system running on a minicomputer would not withstand more
than 1000 interrupts per second, given the way we wrote our device driver
interrupt routines. Because of the restrictions, we used microcomputers
(first the SMS 9000 chip, then the Z80, and eventually the x86) on our
device boards to keep the main processor from being flooded. This also
made the systems scalable, and that's a good thing when your complete
system has a seven-digit price tag. :)
This sort of thing can be fun, if you have the appropriate amount of time
to play with it.
Satch
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to [EMAIL PROTECTED]