While using Netperf to benchmark the fast-ethernet network performance
of our Intel Linux cluster, we identified a disturbing behavior when
using TCP to transmit fragmented packets.  With a 2.0.32 kernel, data
transmission ground to a near halt when a fragmented packet was less
than 100% full.  Kernel 2.2.10 was also affected, but less seriously,
and only for fragmented packets containing 723 or fewer bytes of data.

I am aware of the Nagle algorithm, but, with the 2.0.32 kernel, we're
consistently seeing delays of 0.5 seconds.  With the 2.2.10 kernel there
are no half second delays, just consistent delays of 3 to 5 ms before
ACKs.  Running the tests with TCP_NODELAY eliminates the problem.
Using very small window sizes is also effective, but not a general
solution.

The upshot is that throughput drops to a single round-trip packet per
second with the 2.0.32 kernel, about 100 packets/sec with the 2.2.10
kernel.  I ran the same tests on two Solaris 2.4 machines, and they
did not exhibit this behavior.

I apologize if this question has already been addressed, and for
using the 2.0.32 rather than the latest 2.0 kernel.  Nonetheless, I'm
interested in understanding what's going on here.

[I also noticed that the MSS for 2.0.32 was 1460, but was 1448 with
2.2.10, which seemed strange]

Thanks,
        Chance

--
Chance Reschke
University of Washington Astronomy


