On Tue, Mar 18, 2003 at 08:51:29PM +0100, Borje Josefsson wrote:
[snip scenario]
> 
> The hosts are connected directly (no LAN equipment inbetween) to high 
> capacity backbone routers (10 Gbit/sec backbone), and are approx 1000 
> km/625 miles(!) apart. Measuring RTT gives:
> RTTmax = 20.64 ms. Buffer size needed = 3.69 Mbytes, so I add 25% and set:
> 
> sysctl net.inet.tcp.sendspace=4836562 
> sysctl net.inet.tcp.recvspace=4836562
> 
> MTU=4470 all the way.
> 
> OS = FreeBSD 4-STABLE (as of today).
> 
> **** Now the problem:
> 
> The receiver works fine, but on the *sender* I run out of CPU (doesn't 
> matter if host a or host b is sender). Measuring bandwidth with ttcp gives:
> 
> ttcp-t: buflen=61440, nbuf=30517, align=16384/0, port=5001  tcp
> ttcp-t: 1874964480 bytes in 22.39 real seconds = 638.82 Mbit/sec +++
> ttcp-t: 30517 I/O calls, msec/call = 0.75, calls/sec = 1362.82
> ttcp-t: 0.0user 20.8sys 0:22real 93% 16i+382d 326maxrss 0+15pf 9+280csw
> 
> This is very repeatable (within a few %), and is the same regardless of 
> which direction I use.
> 
> During that period, the sender shows:
> 
> 0.0% user,  0.0% nice, 94.6% system,  5.4% interrupt,  0.0% idle

I had something vaguely similar happen while I was porting the FreeBSD
4.2 networking stack to LynxOS. The culprit turned out to be sbappend():
it does a linear pointer chase down the socket buffer's mbuf chain on
every write() or send(). With a high bandwidth-delay product, that chain
of unacknowledged data gets very long, and the sender ends up burning
its CPU just walking it.
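
To give a feel for what that looks like, here is a stripped-down sketch
(my own simplification, not the actual kernel source) of the append path:

#include <stddef.h>

/* Trimmed-down stand-ins for the real kernel structures. */
struct mbuf {
        struct mbuf *m_next;            /* next mbuf in the chain */
        /* data fields omitted */
};

struct sockbuf {
        struct mbuf *sb_mb;             /* head of the mbuf chain */
};

/*
 * O(n) append: chase m_next to the end of the chain before linking
 * in the new data.  With megabytes of unacknowledged data queued,
 * the chain is thousands of mbufs long on every write().
 */
static void
sbappend_sketch(struct sockbuf *sb, struct mbuf *m)
{
        struct mbuf *n = sb->sb_mb;

        if (n == NULL) {
                sb->sb_mb = m;
                return;
        }
        while (n->m_next != NULL)
                n = n->m_next;
        n->m_next = m;
}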

This topic came up on freebsd-net last July, and Luigi Rizzo provided
the following URL for a patch that caches the end of the mbuf chain, so
sbappend() stays O(1) instead of O(n).

http://docs.freebsd.org/cgi/getmsg.cgi?fetch=366972+0+archive/2001/freebsd-net/20010211.freebsd-net
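
The gist of the fix, again just my own rough sketch building on the
structures above (the field name sb_tail is mine, not necessarily what
the patch uses), is to keep a pointer to the tail of the chain:

/* Same idea as struct sockbuf above, plus a cached tail pointer. */
struct sockbuf_cached {
        struct mbuf *sb_mb;             /* head of the mbuf chain */
        struct mbuf *sb_tail;           /* cached last mbuf (hypothetical name) */
};

/*
 * O(1) append: link the new data onto the cached tail instead of
 * walking the whole chain, then advance the cache to the new end.
 */
static void
sbappend_cached_sketch(struct sockbuf_cached *sb, struct mbuf *m)
{
        if (sb->sb_mb == NULL)
                sb->sb_mb = m;
        else
                sb->sb_tail->m_next = m;

        while (m->m_next != NULL)       /* new data may itself be a chain */
                m = m->m_next;
        sb->sb_tail = m;
}

That makes each write() constant work no matter how much data is
already queued in the socket buffer.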

The subject of the July thread was 'the incredible shrinking socket', if
you want to hunt through the archives.

Hope this helps.

-- 
Ed Mooring ([EMAIL PROTECTED])
