On Wed, 17 May 2000 [EMAIL PROTECTED] wrote:
> > Date: Mon, 15 May 2000 12:20:43 -0700
> >cards in this machine, a 100basetx 3com 3c905b and a 10baset 3com isa 3c509.
> >There is also a modem in the machine and it seems that when the modem is
> >receiving data as well as the 10baset network card, serial input overruns
>
> The 3c509 has a configuration parameter (not the driver, but the
> card itself): modem baud rate. Maybe it blocks interrupts or the bus
> while it is working, and its "time slot" is too long?
> The parameter can be set by 3c5x9cfg.exe (run it from DOS).
> (just today I tested a card with damaged setup so I remember)
The Linux driver does not use this parameter.
This EEPROM field was primarily for slow MS-DOS systems running the early
IPX protocol.
The early IPX protocol was a very primitive, no-window, request-reply scheme.
The performance was *very* dependent on the network latency.
Much like today's marketplace for video cards or CPU MHz, people made
large purchasing decisions based on a single performance number.
3Com (and others -- 3Com didn't invent this) started using two techniques
that significantly increased the IPX performance number. When just the
header of the packet arrived, the card raised an EarlyRx interrupt. The
driver copied over just the header to find the size and start decoding the
protocol. If the packet was accepted, the driver allocated a new buffer.
Then, while the packet data was still arriving, the driver started copying
the packet to the buffer. With a slow machine and the ISA bus the entire
packet might have arrived by the time the driver caught up with the incoming
packet.
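
Purely as an illustration (the FIFO access helpers, the frame-length and
accept/discard routines, and the function name below are invented for the
sketch, not the real 3c5x9 programming interface), an EarlyRx-style receive
path looks roughly like this:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <linux/string.h>

/* Hypothetical helpers standing in for the card's FIFO and status
 * registers -- placeholders, not the actual 3Com register interface. */
extern void copy_from_rx_fifo(struct net_device *dev, void *buf, int len);
extern int  peek_rx_frame_length(struct net_device *dev);
extern void discard_rx_frame(struct net_device *dev);
extern int  we_want_this_frame(const struct ethhdr *hdr);

static void sketch_early_rx(struct net_device *dev)
{
	struct ethhdr hdr;
	struct sk_buff *skb;
	int len, body_len;

	/* Only the first bytes of the frame have reached the FIFO. */
	copy_from_rx_fifo(dev, &hdr, sizeof(hdr));
	len = peek_rx_frame_length(dev);

	if (!we_want_this_frame(&hdr)) {
		discard_rx_frame(dev);		/* reject without a buffer */
		return;
	}

	skb = dev_alloc_skb(len + 2);
	if (skb == NULL) {
		discard_rx_frame(dev);
		return;
	}
	skb_reserve(skb, 2);			/* align the IP header */
	memcpy(skb_put(skb, sizeof(hdr)), &hdr, sizeof(hdr));

	/* Copy the body while the tail of the packet is still arriving;
	 * on a slow ISA machine the copy never overtakes the wire. */
	body_len = len - (int)sizeof(hdr);
	copy_from_rx_fifo(dev, skb_put(skb, body_len), body_len);

	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
}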
On the transmit side there was a similar approach to reduce latency. The
3Com chip implemented a Tx FIFO threshold technique. The driver started
transferring bytes to the FIFO, and transmission began when the threshold
was reached. The ISA bus is significantly faster than 10Mbps Ethernet, so
normally the driver kept well ahead of the transmitter. But if some other
device raised an interrupt, the transmit routine might be interrupted too
early and the FIFO would underrun. A FIFO underrun resulted in a
corrupted packet, and a non-trivial recovery. So the 3Com driver both
blocked interrupts to minimize underruns, and dynamically increased the Tx
threshold when underruns did occur. The "modem baud rate" setting was used
to decide how often to unblock interrupts.
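
Again only as a sketch, with an invented private structure, invented command
codes, and no attempt at the real 3Com register scaling, the underrun
recovery boils down to something like:

#include <linux/netdevice.h>
#include <asm/io.h>

/* Invented values -- placeholders, not the real 3c5x9/3c59x interface. */
#define SKETCH_TX_UNDERRUN	0x10
#define SKETCH_CMD_TX_RESET	0x5800
#define SKETCH_CMD_SET_TX_START	0x9800

struct sketch_private {
	int tx_threshold;		/* bytes that must be in the FIFO
					   before transmission starts */
	struct net_device_stats stats;
};

static void sketch_handle_tx_status(struct net_device *dev,
				    struct sketch_private *np, int tx_status)
{
	if (tx_status & SKETCH_TX_UNDERRUN) {
		/* The FIFO ran dry mid-frame: the frame went out with a
		 * bad CRC and the transmitter needs to be reset. */
		np->stats.tx_fifo_errors++;

		/* Back off: demand more bytes in the FIFO before the next
		 * transmit starts, trading a little latency for safety. */
		if (np->tx_threshold < 1536)
			np->tx_threshold += 64;

		outw(SKETCH_CMD_TX_RESET, dev->base_addr);
		outw(SKETCH_CMD_SET_TX_START | np->tx_threshold,
		     dev->base_addr);
	}
}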
This is a very clever scheme. It was such a big win on the simple IPX
benchmark that 3Com named it "Parallel Tasking", a 'special sauce' marketing
term that is still around. You could even make a reasonable claim that this
performance edge gave 3Com the critical leverage that put them into the much
more profitable enterprise market that they are in today.
Given that I've described this driver structure in glowing terms, you might
ask why my Linux driver didn't use the same chip features. It's because
all of this was targeted to do well against benchmarks that were already
obsolete.
As I mentioned before, most old proprietary protocols for PCs were pretty
simplistic. We had the example of routable, reliable, connection-oriented,
windowed TCP/IP. But given an Ethernet-like LAN, and a desire to lock your
customer in to a single-vendor network, the obvious weekend hack is to write
a single-threaded, non-routable, non-windowed, request-response protocol.
Guess what most PC networking looked like in the '80s and early '90s? It
was a single-threaded, non-routable, non-windowed, request-response
proprietary protocol.
With a request-response protocol, the only performance parameter is latency.
The only way you get better performance is by decreasing the latency. With
a windowed protocol, e.g. TCP, latency is almost irrelevant, as long as the
window is large enough to hide it.
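
To put rough numbers on it (made-up figures, just to show the shape of the
effect):

    request-response, 1460 byte payload, 2 ms round trip:
        1460 bytes / 2 ms   ~= 0.7 MB/sec, dominated entirely by latency
    windowed, 64KB window, same 2 ms round trip:
        65536 bytes / 2 ms  ~= 33 MB/sec of offered data, so the 10Mbps
                               (~1.2 MB/sec) wire is the limit, not latency

Shaving a fraction of a millisecond off each exchange helps the first case
enormously and is invisible in the second.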
You might think IP control messages are latency sensitive, but they
usually fit within a minimum-size Ethernet frame and thus see no benefit
from the 3Com scheme. UDP datagrams might benefit, but the most common
UDP messages are either minimum-sized or 9KB and thus use windowed
transmission.
The 3Com scheme, as good as it sounds, has several additional drawbacks:

 - On a faster machine packets might take several interrupts to process,
   significantly increasing overhead.

 - On an MS-DOS old-IPX client there was nothing else to do while
   transmitting or waiting for a response, so the extra load occurred while
   the machine was otherwise idle. In any other environment there are
   likely a few other tasks expecting a slice of the CPU, such as preparing
   the next packet.

 - CRC errors from transmit underruns in normal operation are a Very Bad
   Thing. The CRC check isn't intended to be used as flow control, and
   using it that way reduces its effectiveness as an error check. A CRC
   error on the network usually indicates bad hardware, which should be
   replaced before an error slips through. But now CRC errors become
   commonplace.

 - Processing the EarlyRx header assumes the packet will turn out to be
   valid, yet the chance that the packet arrives with a CRC error is now
   much higher.
Donald Becker [EMAIL PROTECTED]
Scyld Computing Corporation
410 Severn Ave. Suite 210
Annapolis MD 21403