Hi!

> Hello, I am trying to find the reason for very, very poor network
> performance with sustained data transfers on Linux 2.4.1. I found
> a work-around, but don't think user-mode code should have to provide
> such work-arounds.
> 
>    In the following, with Linux 2.4.1, on a dedicated 100Base-T
>    link:
> 
>         s = socket connected to DISCARD (null-sink) server.
> 
>         while(len)
>         {
>             stat = write(s, buf, min(len, MTU));
>             /* Yes, I do check for an error */
>             buf += stat;
>             len -= stat;
>         }
> 
> Data length is 0x00010000 bytes.
> 
> MTU              Average trans rate   Fastest trans rate
> ----             -----------------    -----------------
> 65536            0.468 Mb/s           0.902 Mb/s
> 32768            0.684 Mb/s           0.813 Mb/s
> 16384            2.989 Mb/s           3.121 Mb/s
> 8192             5.211 Mb/s           6.160 Mb/s
> 4094             8.212 Mb/s           9.101 Mb/s
> 2048             8.561 Mb/s           9.280 Mb/s
> 1024             7.250 Mb/s           7.500 Mb/s
> 512              4.818 Mb/s           5.107 Mb/s
> 
> As you can see, there is a maximum data length that can be
> handled with reasonable speed from a socket. Trying to find
> out what that was, I discovered that the best MTU was 3924.
> I don't know why. It shows:

Looks like that's page_size - epsilon.

> MTU              Average trans rate   Fastest trans rate
> ----             -----------------    -----------------
> 3924             8.920 Mb/s           9.31 Mb/s

But even this is *not* reasonable speed for 100Mbit ethernet! A 100Mbit
link should be good for roughly 90+ Mbit/s of TCP payload, so 9 Mbit/s
is barely a tenth of that.
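
If the page-size theory is right, chopping the writes at getpagesize()
boundaries should behave about the same as the hand-tuned 3924. A minimal,
untested sketch of such a sender (the DISCARD port 9, the 64k length and
the fill pattern are assumptions lifted from the report above; timing left
out for brevity):

/* Untested sketch: push a 64k buffer to the DISCARD service in
 * page-sized chunks.  Host argument, port 9 and the 64k length are
 * assumptions taken from the report above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
	static char buf[0x10000];	/* 64k, as in the report */
	size_t chunk = getpagesize();	/* or hardcode the 3924 from above */
	struct sockaddr_in sin;
	char *p = buf;
	size_t len = sizeof(buf);
	int s;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <discard-server-ip>\n", argv[0]);
		return 1;
	}
	memset(buf, 0xaa, sizeof(buf));

	s = socket(AF_INET, SOCK_STREAM, 0);
	if (s < 0) {
		perror("socket");
		return 1;
	}
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(9);	/* DISCARD */
	sin.sin_addr.s_addr = inet_addr(argv[1]);
	if (connect(s, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
		perror("connect");
		return 1;
	}

	while (len) {
		ssize_t n = write(s, p, len < chunk ? len : chunk);
		if (n < 0) {
			perror("write");
			return 1;
		}
		p += n;
		len -= n;
	}
	close(s);
	return 0;
}

If 4096-byte chunks behave differently from 4094 or 3924, that would at
least narrow down where in the write path the data is being held back.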

> If the user's data length is higher than this, there is a 1/100th
> of a second wait between packets.  The larger the user's data length,
> the more the data gets chopped up into 1/100th of a second intervals.
> 
> It looks as though user data that can't fit into two Ethernet packets
> is queued until the next time-slice on a 100 Hz system. This severely
> hurts sustained data performance. The performance with a single
> 64k data buffer is abysmal. If it gets chopped up into 2048 byte
> blocks in user-space, it's reasonable.
> 
> Both machines are Dual Pentium 600 MHz machines with identical eepro100
> Ethernet boards. I substituted LANCE (Hewlett-Packard) and 3Com
> (3c59x) boards with essentially no change.

Strange. Do you have interrupts working okay? [I'm able to get 4Mbit with
an ne2000 hooked on the timer IRQ, so this is not a totally stupid question.]
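
Another thing worth ruling out on your side is the Nagle / send-buffer
interaction: does disabling Nagle or enlarging SO_SNDBUF change the
1/100s gaps at all? A quick sketch (the 128k value is just an arbitrary
test size, not a recommendation):

/* Sketch only: flip TCP_NODELAY and bump SO_SNDBUF on an already
 * connected socket, then rerun the test.  128k is an arbitrary value
 * picked for the experiment, nothing more. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static int tune_socket(int s)
{
	int one = 1;
	int sndbuf = 128 * 1024;

	if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
		perror("TCP_NODELAY");
		return -1;
	}
	if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0) {
		perror("SO_SNDBUF");
		return -1;
	}
	return 0;
}

If neither makes any difference, that at least rules out the usual
socket-option suspects.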

> Does this point out a problem? Or should user-mode code be required

Definitely a problem.

> to chop up data lengths to something more "reasonable" for the kernel?
> If so, how does the user know what "reasonable" is?
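
It shouldn't have to. If an application really wants to chop on its own,
about the only portable hint it can get is the socket's send buffer size;
something like this sketch (the divide-by-two safety margin is my own
guess, not anything the kernel promises):

/* Sketch: ask the kernel how big the socket's send buffer is and use
 * a fraction of it as the per-write chunk.  The halving is an
 * arbitrary safety margin, not a documented rule. */
#include <stdio.h>
#include <sys/socket.h>

static size_t reasonable_chunk(int s)
{
	int sndbuf = 0;
	socklen_t len = sizeof(sndbuf);

	if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0) {
		perror("SO_SNDBUF");
		return 4096;		/* fall back to roughly a page */
	}
	return (size_t) sndbuf / 2;
}

But that is still a workaround; the kernel should cope with a single 64k
write without stalling.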

-- 
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.

