Re: tcp bw in 2.6

2007-10-15 Thread Daniel Schaffrath

On 2007/10/02, at 18:47, Stephen Hemminger wrote:


On Tue, 2 Oct 2007 09:25:34 -0700
[EMAIL PROTECTED] (Larry McVoy) wrote:

If the server side is the source of the data, i.e., its transfer is a
write loop, then I get the bad behaviour.
...
So is this a bug or intentional?


For whatever it is worth, I believed that we used to get better
performance from the same hardware.  My guess is that it changed
somewhere between 2.6.15-1-k7 and 2.6.18-5-k7.


For the period from 2.6.15 to 2.6.18, the kernel by default enabled TCP
Appropriate Byte Counting. This caused bad performance on applications
that did small writes.

Stephen, maybe you can provide me with some specifics here?

Thanks a lot!!
Daniel



Re: tcp bw in 2.6

2007-10-15 Thread Stephen Hemminger
On Mon, 15 Oct 2007 14:40:25 +0200
Daniel Schaffrath [EMAIL PROTECTED] wrote:

 On 2007/10/02, at 18:47, Stephen Hemminger wrote:
 
  On Tue, 2 Oct 2007 09:25:34 -0700
  [EMAIL PROTECTED] (Larry McVoy) wrote:
 
  If the server side is the source of the data, i.e., its transfer is a
  write loop, then I get the bad behaviour.
  ...
  So is this a bug or intentional?
 
  For whatever it is worth, I believed that we used to get better
  performance from the same hardware.  My guess is that it changed
  somewhere between 2.6.15-1-k7 and 2.6.18-5-k7.
 
  For the period from 2.6.15 to 2.6.18, the kernel by default enabled TCP
  Appropriate Byte Counting. This caused bad performance on applications
  that did small writes.
 Stephen, maybe you can provide me with some specifics here?
 
 Thanks a lot!!
 Daniel
 

Read RFC 3465 for an explanation of TCP ABC.
What happens is that applications that do multiple small writes
will end up using up their congestion window. Typically these applications
are not streaming enough data to grow the congestion window, so they get
held after four writes until an ACK comes back.  The fix for the application
(which helps on all OSes and TCP versions) is to use a call
like writev() or sendmsg() to aggregate the small header blocks together
into a single send.
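
A minimal sketch of that kind of aggregation (illustrative only, not code from
this thread; the header/payload split, names and sizes are hypothetical):

#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Instead of write(fd, hdr, hdrlen) followed by write(fd, payload, paylen),
 * which TCP sees as two small sends, hand both pieces to the kernel in one
 * call so they can be coalesced into full-sized segments. */
static ssize_t send_record(int fd, const void *hdr, size_t hdrlen,
                           const void *payload, size_t paylen)
{
        struct iovec iov[2];

        iov[0].iov_base = (void *)hdr;
        iov[0].iov_len  = hdrlen;
        iov[1].iov_base = (void *)payload;
        iov[1].iov_len  = paylen;

        /* A short return value still needs the usual partial-write handling. */
        return writev(fd, iov, 2);
}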

-- 
Stephen Hemminger [EMAIL PROTECTED]


Re: tcp bw in 2.6

2007-10-03 Thread Bill Fink
Tangential aside:

On Tue, 02 Oct 2007, Rick Jones wrote:

 *) depending on the quantity of CPU around, and the type of test one is 
 running, results can be better/worse depending on the CPU to which you bind 
 the application.  Latency tends to be best when running on the same core as 
 takes interrupts from the NIC, bulk transfer can be better when running on a 
 different core, although generally better when a different core on the same 
 chip.  These days the throughput stuff is more easily seen on 10G, but the 
 netperf service demand changes are still visible on 1G.

Interesting.  I was going to say that I've generally had the opposite
experience when it comes to bulk data transfers, which is what I would
expect due to CPU caching effects, but that perhaps it's motherboard/NIC/
driver dependent.  But in testing I just did I discovered it's even
MTU dependent (most of my normal testing is always with 9000-byte
jumbo frames).
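
(For reference, the binding itself can be done with taskset from the shell or
from inside the test program; a minimal sketch using sched_setaffinity(2) on
Linux, purely illustrative and not part of nuttcp:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Pin the calling process to one CPU so the application either shares
 * or avoids the core that takes the NIC's interrupts. */
static void pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                perror("sched_setaffinity");
                exit(1);
        }
}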

With Myricom 10-GigE NICs, NIC interrupts on CPU 0 and nuttcp app
running on CPU 1 (both transmit and receive sides), and using 9000-byte
jumbo frames:

[EMAIL PROTECTED] ~]# nuttcp -w10m 192.168.88.16
10078.5000 MB /  10.02 sec = 8437.5396 Mbps 100 %TX 99 %RX

With Myricom 10-GigE NICs, and both NIC interrupts and nuttcp app
on CPU 0 (both transmit and receive sides), again using 9000-byte
jumbo frames:

[EMAIL PROTECTED] ~]# nuttcp -w10m 192.168.88.16
11817.8750 MB /  10.00 sec = 9909.7537 Mbps 100 %TX 74 %RX

Same tests repeated with standard 1500-byte Ethernet MTU:

With Myricom 10-GigE NICs, NIC interrupts on CPU 0 and nuttcp app
running on CPU 1 (both transmit and receive sides), and using
standard 1500-byte Ethernet MTU:

[EMAIL PROTECTED] ~]# nuttcp -M1460 -w10m 192.168.88.16
 5685.9375 MB /  10.00 sec = 4768.0951 Mbps 99 %TX 98 %RX

With Myricom 10-GigE NICs, and both NIC interrupts and nuttcp app
on CPU 0 (both transmit and receive sides), again using standard
1500-byte Ethernet MTU:

[EMAIL PROTECTED] ~]# nuttcp -M1460 -w10m 192.168.88.16
 4974.0625 MB /  10.03 sec = 4161.6015 Mbps 100 %TX 100 %RX

Now back to your regularly scheduled programming.  :-)

-Bill


Re: tcp bw in 2.6

2007-10-03 Thread David Miller
From: [EMAIL PROTECTED] (Larry McVoy)
Date: Tue, 2 Oct 2007 15:36:44 -0700

 On Tue, Oct 02, 2007 at 03:32:16PM -0700, David Miller wrote:
  I'm starting to have a theory about what the bad case might
  be.
  
  A strong sender going to an even stronger receiver which can
  pull out packets into the process as fast as they arrive.
  This might be part of what keeps the receive window from
  growing.
 
 I can back you up on that.  When I straced the receiving side that goes
 slowly, all the reads were short, like 1-2K.  The way that works, the 
 reads were a lot larger as I recall.

My issue turns out to be hardware specific too.

The two Broadcom 5714 onboard NICs on my Niagara t1000 give bad packet
receive performance for some reason, the other two which are Broadcom
5704's are perfectly fine.  I'll figure out what the problem is,
probably some misprogrammed register in either the chip or the bridge
it's behind.

The UDP stream test of netperf is great for isolating TCP/TSO vs.
hardware issues.  If you can't saturate the pipe or the cpu with
the UDP stream test, it's likely a hardware issue.

The cpu utilization and service demand numbers provided, on both
send and receive, are really useful for diagnosing problems like
this.

Rick deserves several beers for his work on this cool toy. :)


Re: tcp bw in 2.6

2007-10-03 Thread Larry McVoy
 A few notes to the discussion. I've seen one e1000 bug that ended up being
 a crappy AMD pre-opteron SMP chipset with a totally useless PCI bus
 implementation, which limited performance quite a bit, totally depending on
 what you plugged in and in which slot. 10e milk-and-bread-store 
 32/33 gige nics actually were better than server-class e1000's 
 in those, but weren't that great either.

That could well be my problem, this is a dual processor (not core) athlon
(not opteron) tyan motherboard if I recall correctly.

 Check your interrupt rates for the interface. You shouldn't be getting
 anywhere near 1 interrupt/packet. If you are, something is badly wrong :).

The acks (because I'm sending) are about 1.5 packets/interrupt.
When this box is receiving it's moving about 3x as much data
and has a _lower_ (absolute, not per packet) interrupt load.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-03 Thread Pekka Pietikainen
On Tue, Oct 02, 2007 at 02:21:32PM -0700, Larry McVoy wrote:
 More data, sky2 works fine (really really fine, like 79MB/sec) between
 Linux dylan.bitmover.com 2.6.18.1 #5 SMP Mon Oct 23 17:36:00 PDT 2006 i686
 Linux steele 2.6.20-16-generic #2 SMP Sun Sep 23 18:31:23 UTC 2007 x86_64
 
 So this is looking like an e1000 bug.  I'll try to upgrade the kernel on 
 the ia64 box and see what happens.
A few notes to the discussion. I've seen one e1000 bug that ended up being
a crappy AMD pre-opteron SMP chipset with a totally useless PCI bus
implementation, which limited performance quite a bit, totally depending on
what you plugged in and in which slot. 10e milk-and-bread-store 
32/33 gige nics actually were better than server-class e1000's 
in those, but weren't that great either.

One thing worth trying out is recv(..., MSG_TRUNC) on the receiver;
that tests the theoretical sender maximum performance much better (but memory
bandwidth vs. GigE is much higher these days than it was in 2001, so maybe
it's not that useful anymore).
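
A rough sketch of such a discarding receive loop (an illustration, not code
from this thread; Linux-specific, since MSG_TRUNC on a TCP socket throws the
data away instead of copying it out):

#include <sys/types.h>
#include <sys/socket.h>

/* Drain a connected TCP socket without copying payload to user space,
 * so the sender's limits show up without the receiver's memory
 * bandwidth getting in the way.  The buffer length still bounds how
 * much each call discards. */
static long long drain_socket(int fd)
{
        char buf[64 * 1024];
        long long total = 0;
        ssize_t n;

        while ((n = recv(fd, buf, sizeof(buf), MSG_TRUNC)) > 0)
                total += n;
        return total;
}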

Check your interrupt rates for the interface. You shouldn't be getting
anywhere near 1 interrupt/packet. If you are, something is badly wrong :).

Running getsockopt(...TCP_INFO) every few secs on the socket and printing
that out can be useful too. That gives you both sides' idea on what the
tcp windows etc. are.
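
And a minimal sketch of polling TCP_INFO (again illustrative; the field names
are from the Linux struct tcp_info in <netinet/tcp.h>):

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Dump a few congestion-control numbers for a connected TCP socket.
 * Calling this every few seconds from the transfer loop shows how the
 * congestion window and RTT evolve during the test. */
static void dump_tcp_info(int fd)
{
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
                printf("snd_cwnd=%u ssthresh=%u rtt=%uus rttvar=%uus retrans=%u\n",
                       ti.tcpi_snd_cwnd, ti.tcpi_snd_ssthresh,
                       ti.tcpi_rtt, ti.tcpi_rttvar, ti.tcpi_retrans);
}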

My favourite tool is a home-made thing called yantt, btw
(http://www.ee.oulu.fi/~pp/yantt.tgz).  It needs lots of cleanup love:
it mucks with the window sizes by default, since in the 2.4 days you really
had to do that to get any kind of performance, and the help text is wrong.
But it's pretty easy to hack to try out new ideas and use
sendfile/MSG_TRUNC/TCP_INFO etc.

Netperf is the kitchen sink of network benchmark tools. But trying out a few
tiny things with it is not fun at all; I tried and quickly decided to 
write my own tool for my master's thesis work ;-)

Oh. Don't measure CPU usage with top. Use a cyclesoaker (google for
cyclesoak, I included akpm's with yantt) :-)

And yes. TCP stacks do have bugs, especially when things get outside the
equipment most people have. Having a dedicated transatlantic 2.5Gbps
connection found a really fun one a long time ago ;)

-- 
Pekka Pietikainen


Re: tcp bw in 2.6

2007-10-03 Thread Pekka Pietikainen
On Wed, Oct 03, 2007 at 02:23:58PM -0700, Larry McVoy wrote:
  A few notes to the discussion. I've seen one e1000 bug that ended up being
  a crappy AMD pre-opteron SMP chipset with a totally useless PCI bus
  implementation, which limited performance quite a bit, totally depending on
  what you plugged in and in which slot. 10e milk-and-bread-store 
  32/33 gige nics actually were better than server-class e1000's 
  in those, but weren't that great either.
 
 That could well be my problem, this is a dual processor (not core) athlon
 (not opteron) tyan motherboard if I recall correctly.
If it's AMD760/768MPX, here's some relevant discussion:

http://lkml.org/lkml/2002/7/18/292  
http://www.ussg.iu.edu/hypermail/linux/kernel/0307.1/1109.html  
http://www.ussg.iu.edu/hypermail/linux/kernel/0307.1/1154.html  
http://www.ussg.iu.edu/hypermail/linux/kernel/0307.1/1212.html 
http://forums.2cpu.com/showthread.php?s=threadid=31211

 
  Check your interrupt rates for the interface. You shouldn't be getting
  anywhere near 1 interrupt/packet. If you are, something is badly wrong :).
 
 The acks (because I'm sending) are about 1.5 packets/interrupt.
 When this box is receiving it's moving about 3x as much data
 and has a _lower_ (absolute, not per packet) interrupt load.
Probably not a problem then, since those acks probably cover many 
sent packets. Current interrupt mitigation schemes are pretty 
dynamic, balancing between latency and bulk performance, so the acks
might be fine (thousands vs. tens of thousands/sec).

-- 
Pekka Pietikainen


Re: tcp bw in 2.6

2007-10-02 Thread Herbert Xu
Larry McVoy [EMAIL PROTECTED] wrote:

 One of my clients also has gigabit so I played around with just that
 one and it (itanium running hpux w/ broadcom gigabit) can push the load
 as well.  One weird thing is that it is dependent on the direction the
 data is flowing.  If the hp is sending then I get 46MB/sec, if linux is
 sending then I get 18MB/sec.  Weird.  Linux is debian, running 

First of all check the CPU load on both sides to see if either
of them is saturating.  If the CPU's fine then look at the tcpdump
output to see if both receivers are using the same window settings.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmVHI~} [EMAIL PROTECTED]
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: tcp bw in 2.6

2007-10-02 Thread John Heffner

Larry McVoy wrote:

A short summary is can someone please post a test program that sources
and sinks data at the wire speed?  because apparently I'm too old and
clueless to write such a thing.


Here's a simple reference tcp source/sink that I've used for years. 
For example, on a couple gigabit machines:


$ ./tcpsend -t10 dew
Sent 1240415312 bytes in 10.033101 seconds
Throughput: 123632294 B/s

  -John

/*
 * discard.c
 * A simple discard server.
 *
 * Copyright 2003 John Heffner.
 */

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/poll.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <sys/param.h>
#include <netinet/in.h>

#if 0
#define RATELIMIT
#define RATE            10      /* bytes/sec */
#define WAIT_TIME       (100/HZ-1)
#define READ_SIZE       (RATE/HZ)
#else
#define READ_SIZE       (1024*1024)
#endif

void child_handler(int sig)
{
        int status;

        wait(&status);
}

int main(int argc, char *argv[])
{
        int port = 9000;
        int lfd;
        struct sockaddr_in laddr;
        int newfd;
        struct sockaddr_in newaddr;
        int pid;
        socklen_t len = sizeof (newaddr);

        if (argc > 2) {
                fprintf(stderr, "usage: discard [port]\n");
                exit(1);
        }
        if (argc == 2) {
                if (sscanf(argv[1], "%d", &port) != 1 || port < 0 || port > 65535) {
                        fprintf(stderr, "discard: error: not a port number\n");
                        exit(1);
                }
        }

        if (signal(SIGCHLD, child_handler) == SIG_ERR) {
                perror("signal");
                exit(1);
        }

        memset(&laddr, 0, sizeof (laddr));
        laddr.sin_family = AF_INET;
        laddr.sin_port = htons(port);
        laddr.sin_addr.s_addr = INADDR_ANY;

        if ((lfd = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
                perror("socket");
                exit(1);
        }
        if (bind(lfd, (struct sockaddr *)&laddr, sizeof (laddr)) != 0) {
                perror("bind");
                exit(1);
        }
        if (listen(lfd, 5) != 0) {
                perror("listen");
                exit(1);
        }

        for (;;) {
                if ((newfd = accept(lfd, (struct sockaddr *)&newaddr, &len)) < 0) {
                        if (errno == EINTR)
                                continue;
                        perror("accept");
                        exit(1);
                }

                if ((pid = fork()) < 0) {
                        perror("fork");
                        exit(1);
                } else if (pid == 0) {
                        int n;
                        char buf[READ_SIZE];
                        int64_t data_rcvd = 0;
                        struct timeval stime, etime;
                        float time;

                        gettimeofday(&stime, NULL);
                        while ((n = read(newfd, buf, READ_SIZE)) > 0) {
                                data_rcvd += n;
#ifdef RATELIMIT
                                usleep(WAIT_TIME);
#endif
                        }
                        gettimeofday(&etime, NULL);
                        close(newfd);

                        time = (float)(1000000*(etime.tv_sec - stime.tv_sec) +
                                etime.tv_usec - stime.tv_usec) / 1000000.0;
                        printf("Received %lld bytes in %f seconds\n", (long long)data_rcvd, time);
                        printf("Throughput: %d B/s\n", (int)((float)data_rcvd / time));

                        exit(0);
                }

                close(newfd);
        }

        return 1;
}
/*
 * tcpsend.c
 * Send pseudo-random data through a TCP connection.
 *
 * Copyright 2003 John Heffner.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <signal.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/stat.h>
#ifdef __linux__
#include <sys/sendfile.h>
#endif

#define SNDSIZE (1024 * 10)
#define BUFSIZE (1024 * 1024)

#define max(a,b)        (a > b ? a : b)
#define min(a,b)        (a < b ? a : b)

int time_done = 0;
int interrupt_done = 0;

struct timeval starttime;

void int_handler(int sig)
{
        interrupt_done = 1;
}

void alarm_handler(int sig)
{
        time_done = 1;
}

static void usage_error(int err) {
        fprintf(stderr, "usage: tcpsend [-z] [-b max_bytes] [-t max_time] hostname [port]\n");
        exit(err);
}

static void cleanup_exit(int fd, char *filename, int status)
{
        if (fd > 0)
                close(fd);
        if (filename)
                unlink(filename);
        exit(status);
}

int main(int argc, char *argv[])
{
        char *hostname = "localhost";
        int port = 9000;
Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 06:52:54PM +0800, Herbert Xu wrote:
  One of my clients also has gigabit so I played around with just that
  one and it (itanium running hpux w/ broadcom gigabit) can push the load
  as well.  One weird thing is that it is dependent on the direction the
  data is flowing.  If the hp is sending then I get 46MB/sec, if linux is
  sending then I get 18MB/sec.  Weird.  Linux is debian, running 
 
 First of all check the CPU load on both sides to see if either
 of them is saturating.  If the CPU's fine then look at the tcpdump
 output to see if both receivers are using the same window settings.

tcpdump is a good idea, take a look at this.  The window starts out
at 46 and never opens up in my test case, but in the rsh case it 
starts out the same but does open up.  Ideas?

08:08:06.033305 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: S 2756874880:2756874880(0) win 32768 <mss 1460,wscale 0,nop>
08:08:06.05 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: S 3360532803:3360532803(0) ack 2756874881 win 5840 <mss 1460,nop,wscale 7>
08:08:06.047924 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 1 win 32768
08:08:06.048218 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 1:2921(2920) ack 1 win 46
08:08:06.048426 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 1461 win 32768
08:08:06.048446 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 2921:5841(2920) ack 1 win 46
08:08:06.048673 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 4381 win 32768
08:08:06.048684 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 5841:10221(4380) ack 1 win 46
08:08:06.049047 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 8761 win 32768
08:08:06.049057 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 10221:16061(5840) ack 1 win 46
08:08:06.049422 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 14601 win 32768
08:08:06.049429 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 16061:18981(2920) ack 1 win 46
08:08:06.049462 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 18981:20441(1460) ack 1 win 46
08:08:06.049484 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 20441:23361(2920) ack 1 win 46
08:08:06.049924 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 21901 win 32768
08:08:06.049943 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 23361:32121(8760) ack 1 win 46
08:08:06.050549 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 30661 win 32768
08:08:06.050559 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 32121:39421(7300) ack 1 win 46
08:08:06.050592 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 39421:40881(1460) ack 1 win 46
08:08:06.050614 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 40881:42341(1460) ack 1 win 46
08:08:06.051170 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 40881 win 32768
08:08:06.051188 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 42341:54021(11680) ack 1 win 46
08:08:06.051923 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 52561 win 32768
08:08:06.051932 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 54021:58401(4380) ack 1 win 46
08:08:06.051942 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 58401:67161(8760) ack 1 win 46
08:08:06.052671 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 65701 win 32768
08:08:06.052680 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 67161:74461(7300) ack 1 win 46
08:08:06.052719 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 74461:77381(2920) ack 1 win 46
08:08:06.052752 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 77381:81761(4380) ack 1 win 46
08:08:06.053549 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 80301 win 32768
08:08:06.053566 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 81761:97821(16060) ack 1 win 46
08:08:06.054423 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 96361 win 32768
08:08:06.054433 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 97821:113881(16060) ack 1 win 46
08:08:06.054476 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 113881:115341(1460) ack 1 win 46
08:08:06.055422 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 113881 win 32768
08:08:06.055438 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 115341:131401(16060) ack 1 win 46
08:08:06.056421 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 131401 win 32768
08:08:06.056432 IP work-cluster.bitmover.com.31235 >

Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
Interesting data point.  My test case is like this:

server
    bind
    listen
    while (newsock = accept...)
    transfer()

client
    connect
    transfer

If the server side is the source of the data, i.e., its transfer is a 
write loop, then I get the bad behaviour.  If I switch them so the data
flows in the other direction, then it works, I go from about 14K pkt/sec
to 43K pkt/sec.

Can anyone else reproduce this?  I can extract the test case from lmbench
so it is standalone but I suspect that any test case will do it.  I'll
try with the one that John sent.  Yup, s/read/write/ and s/write/read/
in his two files at the appropriate places and I get exactly the same
behaviour.

So is this a bug or intentional?
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 If the server side is the source of the data, i.e., its transfer is a 
 write loop, then I get the bad behaviour.  
 ...
 So is this a bug or intentional?

For whatever it is worth, I believed that we used to get better performance
from the same hardware.  My guess is that it changed somewhere between
2.6.15-1-k7 and 2.6.18-5-k7.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Linus Torvalds


On Tue, 2 Oct 2007, Larry McVoy wrote:

 Interesting data point.  My test case is like this:
 
 server
   bind
   listen
   while (newsock = accept...)
   transfer()
 
 client
   connect
   transfer
 
 If the server side is the source of the data, i.e., its transfer is a 
 write loop, then I get the bad behaviour.  If I switch them so the data
 flows in the other direction, then it works, I go from about 14K pkt/sec
 to 43K pkt/sec.

Sounds like accept() possibly initializes slightly different socket 
parameters than connect() does. 

On the other hand, different network cards will simply have different 
behaviour (some due to hardware, some due to driver differences), so I 
hope you also switched the processes around and/or used identically 
configured machines (and the port configuration on switches could matter, 
of course, so it's really best to switch the processes around, to make 
sure that the *only* difference is whether the socket was set up by 
accept() vs connect()).

 So is this a bug or intentional?

Sounds like a bug to me, modulo the above caveat of making sure that it's 
not some hw/driver/switch kind of difference.

Linus


Re: tcp bw in 2.6

2007-10-02 Thread Stephen Hemminger
On Tue, 2 Oct 2007 09:25:34 -0700
[EMAIL PROTECTED] (Larry McVoy) wrote:

  If the server side is the source of the data, i.e., its transfer is a 
  write loop, then I get the bad behaviour.  
  ...
  So is this a bug or intentional?
 
 For whatever it is worth, I believed that we used to get better performance
 from the same hardware.  My guess is that it changed somewhere between
 2.6.15-1-k7 and 2.6.18-5-k7.

For the period from 2.6.15 to 2.6.18, the kernel by default enabled TCP
Appropriate Byte Counting. This caused bad performance on applications that
did small writes.

-- 
Stephen Hemminger [EMAIL PROTECTED]


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
Isn't this something so straightforward that you would have tests for it?
This is the basic FTP server loop; doesn't someone have a big machine with
10gig cards to test that sending/recving data doesn't regress?

 Sounds like a bug to me, modulo the above caveat of making sure that it's 
 not some hw/driver/switch kind of difference.

Pretty unlikely given that we've changed the switch, the card works fine
in the other direction, and I'm 95% sure that we used to get better perf
before we switched to a more recent kernel.

I'll try and find some other gig ether cards and try them.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 09:47:26AM -0700, Stephen Hemminger wrote:
 On Tue, 2 Oct 2007 09:25:34 -0700
 [EMAIL PROTECTED] (Larry McVoy) wrote:
 
   If the server side is the source of the data, i.e., its transfer is a 
   write loop, then I get the bad behaviour.  
   ...
   So is this a bug or intentional?
  
  For whatever it is worth, I believed that we used to get better performance
  from the same hardware.  My guess is that it changed somewhere between
  2.6.15-1-k7 and 2.6.18-5-k7.
 
 For the period from 2.6.15 to 2.6.18, the kernel by default enabled TCP
 Appropriate Byte Counting. This caused bad performance on applications that
 did small writes.

It's doing 1MB writes.

Is there a sockopt to turn that off?  Or /proc or something?
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Ben Greear

Larry McVoy wrote:

Interesting data point.  My test case is like this:

server
bind
listen
while (newsock = accept...)
transfer()

client
connect
transfer

If the server side is the source of the data, i.e., its transfer is a 
write loop, then I get the bad behaviour.  If I switch them so the data
flows in the other direction, then it works, I go from about 14K pkt/sec
to 43K pkt/sec.

Can anyone else reproduce this?  I can extract the test case from lmbench
so it is standalone but I suspect that any test case will do it.  I'll
try with the one that John sent.  Yup, s/read/write/ and s/write/read/
in his two files at the appropriate places and I get exactly the same
behaviour.

So is this a bug or intentional?
  
I have a more complex configuration & application, but I don't see this 
problem in my testing.  Using e1000 nics and modern hardware I can set up a connection
between two machines and run 800+Mbps in both directions, or near line speed
in one direction if the other direction is mostly silent.

I am purposefully setting the socket send/rx buffers, as well as
twiddling with the tcp and netdev related tunables.  If you want, I can
email these tweaks to you.
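
For reference, the buffer-size part of that usually looks something like the
sketch below (an illustration, not Ben's actual tweaks).  Note that setting
SO_SNDBUF/SO_RCVBUF by hand turns off the kernel's buffer autotuning for that
socket, and SO_RCVBUF needs to be set before connect()/listen() if it is
supposed to influence the advertised window scale:

#include <stdio.h>
#include <sys/socket.h>

/* Request explicit socket buffer sizes instead of relying on autotuning.
 * Linux doubles the requested value internally and clamps it to
 * net.core.wmem_max / net.core.rmem_max. */
static void set_bufs(int fd, int bytes)
{
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) != 0)
                perror("SO_SNDBUF");
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
                perror("SO_RCVBUF");
}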


NICs and busses have a huge impact on performance, so make sure those 
are good.


Thanks,
Ben


--
Ben Greear [EMAIL PROTECTED] 
Candela Technologies Inc  http://www.candelatech.com





Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 I have a more complex configuration & application, but I don't see this 
 problem in my testing.  Using e1000 nics and modern hardware 

I'm using a similar setup, what kernel are you using?

 I am purposefully setting the socket send/rx buffers, as well as 
 twiddling with the tcp and netdev related tunables.  

Ben sent those to me, see below, they didn't make any difference.
I tried diddling the socket send/recv buffers to 10MB, that didn't
help.  The defaults didn't help.  1MB didn't help and 64K didn't
help.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Stephen Hemminger
On Tue, 2 Oct 2007 09:49:52 -0700
[EMAIL PROTECTED] (Larry McVoy) wrote:

 On Tue, Oct 02, 2007 at 09:47:26AM -0700, Stephen Hemminger wrote:
  On Tue, 2 Oct 2007 09:25:34 -0700
  [EMAIL PROTECTED] (Larry McVoy) wrote:
  
If the server side is the source of the data, i.e., its transfer is a 
write loop, then I get the bad behaviour.  
...
So is this a bug or intentional?
   
   For whatever it is worth, I believed that we used to get better 
   performance
   from the same hardware.  My guess is that it changed somewhere between
   2.6.15-1-k7 and 2.6.18-5-k7.
  
  For the period from 2.6.15 to 2.6.18, the kernel by default enabled TCP
  Appropriate Byte Counting. This caused bad performance on applications that
  did small writes.
 
 It's doing 1MB writes.
 
 Is there a sockopt to turn that off?  Or /proc or something?

sysctl -w net.ipv4.tcp_abc=0

-- 
Stephen Hemminger [EMAIL PROTECTED]


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

Larry McVoy wrote:

A short summary is can someone please post a test program that sources
and sinks data at the wire speed?  because apparently I'm too old and
clueless to write such a thing.


http://www.netperf.org/svn/netperf2/trunk/

:)

WRT the different speeds in each direction talking with HP-UX, perhaps there is 
an interaction between the Linux TCP stack (TSO perhaps) and HP-UX's ACK 
avoidance heuristics. If that is the case, tweaking tcp_deferred_ack_max with 
ndd on the HP-UX system might yield different results.


I don't recall if the igelan (broadcom) driver in HP-UX attempts to auto-tune 
the interrupt throttling.  I do believe the iether (intel) driver in HP-UX does. 
 That can be altered via lanadmin -X mumble... commands.


Later (although later than a 2.6.18 kernel IIRC) e1000 drivers do try to 
auto-tune the interrupt throttling and one can see oscillations when an e1000 
driver is talking to an e1000 driver.  I think that can only be changed via the 
InterruptThrottleRate e1000 module parameter in that era of kernel - not sure if 
the Intel folks have that available via ethtool on contemporary kernels now or not.


WRT the small program making a setsockopt(SO_*BUF) call going slower than the 
rsh, does rsh make the setsockopt() call, or does it bend itself to the will of 
the linux stack's autotuning?  What happens if your small program does not make 
 setsockopt(SO_*BUF) calls?


Other misc observations of variable value:

*) depending on the quantity of CPU around, and the type of test one is running, 
results can be better/worse depending on the CPU to which you bind the 
application.  Latency tends to be best when running on the same core as takes 
interrupts from the NIC, bulk transfer can be better when running on a different 
core, although generally better when a different core on the same chip.  These 
days the throughput stuff is more easily seen on 10G, but the netperf service 
demand changes are still visible on 1G.


*) agreement with the observation that the small recv calls suggest that the 
application is staying-up with the network.  I doubt that SO_BUF settings would 
change that, but perhaps setting watermarks might (wild ass guess).  The 
watermarks will do nothing on HP-UX though (IIRC).


rick jones


Re: tcp bw in 2.6

2007-10-02 Thread Ben Greear

Larry McVoy wrote:
I have a more complex configuration & application, but I don't see this 
problem in my testing.  Using e1000 nics and modern hardware 



I'm using a similar setup, what kernel are you using?
  

I'm currently on 2.6.20, and have also tried 10gbe nics on 2.6.23 with
good results.  At least for my app, performance has been pretty steady
at least as far back as the .18 kernels, and probably before

I do 64k or smaller writes & reads, and non-blocking IO (not sure if that
would matter..but I do :)

Have you tried something like ttcp, iperf, or even regular ftp?

Checked your nics to make sure they have no errors and are negotiated
to full duplex?

Thanks,
Ben

--
Ben Greear [EMAIL PROTECTED] 
Candela Technologies Inc  http://www.candelatech.com





Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 10:14:11AM -0700, Rick Jones wrote:
 Larry McVoy wrote:
 A short summary is can someone please post a test program that sources
 and sinks data at the wire speed?  because apparently I'm too old and
 clueless to write such a thing.
 
 WRT the different speeds in each direction talking with HP-UX, perhaps 
 there is an interaction between the Linux TCP stack (TSO perhaps) and 
 HP-UX's ACK avoidance heuristics. If that is the case, tweaking 
 tcp_deferred_ack_max with ndd on the HP-UX system might yield different 
 results.

I doubt it because I see the same sort of behaviour when I have a group
of Linux clients talking to the server.  The HP box is in the mix
simply because it has a gigabit card and that makes driving the load
simpler.  But if I do several loads from 100Mbit clients I get the same
packet throughput.

 WRT the small program making a setsockopt(SO_*BUF) call going slower than 
 the rsh, does rsh make the setsockopt() call, or does it bend itself to the 
 will of the linux stack's autotuning?  What happens if your small program 
 does not make setsockopt(SO_*BUF) calls?

I haven't tracked down if rsh does that but I've tried doing it with 
values of default, 64K, 1MB, and 10MB with no difference.

 *) depending on the quantity of CPU around, and the type of test one is 

These are fast CPUs and they are running at 93% idle while running the test.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 I'm currently on 2.6.20, and have also tried 10gbe nics on 2.6.23 with

My guess is that it is a bug in the debian 2.6.18 kernel.

 Have you tried something like ttcp, iperf, or even regular ftp?

Yeah, I've factored out the code since BitKeeper, my test program,
and John's test program all exhibit the same behaviour.  Also switched
switches.

 Checked your nics to make sure they have no errors and are negotiated
 to full duplex?

Yup and yup.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Stephen Hemminger
On Tue, 2 Oct 2007 10:21:55 -0700
[EMAIL PROTECTED] (Larry McVoy) wrote:

  I'm currently on 2.6.20, and have also tried 10gbe nics on 2.6.23 with
 
 My guess is that it is a bug in the debian 2.6.18 kernel.
 
  Have you tried something like ttcp, iperf, or even regular ftp?
 
 Yeah, I've factored out the code since BitKeeper, my test program,
 and John's test program all exhibit the same behaviour.  Also switched
 switches.
 
  Checked your nics to make sure they have no errors and are negotiated
  to full duplex?
 
 Yup and yup.

Make sure you don't have slab debugging turned on. It kills performance.

-- 
Stephen Hemminger [EMAIL PROTECTED]


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

has anyone already asked whether link-layer flow-control is enabled?

rick jones


Re: tcp bw in 2.6

2007-10-02 Thread John Heffner

Larry McVoy wrote:

On Tue, Oct 02, 2007 at 06:52:54PM +0800, Herbert Xu wrote:

One of my clients also has gigabit so I played around with just that
one and it (itanium running hpux w/ broadcom gigabit) can push the load
as well.  One weird thing is that it is dependent on the direction the
data is flowing.  If the hp is sending then I get 46MB/sec, if linux is
sending then I get 18MB/sec.  Weird.  Linux is debian, running 

First of all check the CPU load on both sides to see if either
of them is saturating.  If the CPU's fine then look at the tcpdump
output to see if both receivers are using the same window settings.


tcpdump is a good idea, take a look at this.  The window starts out
at 46 and never opens up in my test case, but in the rsh case it 
starts out the same but does open up.  Ideas?


(Binary tcpdumps are always better than ascii.)

The window on the sender (linux box) starts at 46.  It doesn't open up, 
but it's not receiving data so it doesn't matter, and you don't expect 
it to.  The HP box always announces a window of 32768.


Looks like you have TSO enabled.  Does it behave differently if it's 
disabled?  I think Rick Jones is on to something with the HP ack 
avoidance.  Looks like a pretty low ack ratio, and it might not be 
interacting well with TSO, especially at such a small window size.


  -John


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 Make sure you don't have slab debugging turned on. It kills performance.

It's a stock debian kernel, so unless they turn it on it's off.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 11:01:47AM -0700, Rick Jones wrote:
 has anyone already asked whether link-layer flow-control is enabled?

I doubt it, the same test works fine in one direction and poorly in the other.
Wouldn't the flow control squelch either way?
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 Looks like you have TSO enabled.  Does it behave differently if it's 
 disabled?  

It cranks the interrupts/sec up to 8K instead of 5K.  No difference in
performance other than that.

 I think Rick Jones is on to something with the HP ack avoidance.  

I sincerely doubt it.  I'm only using the HP box because it has gigabit
so it's a single connection.  I can produce almost identical results by
doing the same sorts of tests with several linux clients.  One direction
goes fast and the other goes slow.

3x performance difference depending on the direction of data flow:

# Server is receiving, goes fast
$ for i in 22 24 25 26; do rsh -n glibc$i dd if=/dev/zero|dd of=/dev/null & done
load free cach swap pgin  pgou dk0 dk1 dk2 dk3 ipkt opkt  int  ctx  usr sys idl
0.98   0000 00   0   0   0   30K  15K 8.1K  68K  12  66  22
0.98   0000 00   0   0   0   29K  15K 8.2K  67K  11  64  25
0.98   0000 00   0   0   0   29K  15K 8.2K  67K  12  66  22

# Server is sending, goes slow
$ for i in 22 24 25 26; do dd if=/dev/zero|rsh glibc$i dd of=/dev/null & done
load free cach swap pgin  pgou dk0 dk1 dk2 dk3 ipkt opkt  int  ctx  usr sys idl
1.06   0000 00   0   0   0  5.0K  10K 4.4K 8.4K  21  17  62
0.97   0000 00   0   0   0  5.1K  10K 4.4K 8.9K   2  15  83
0.97   0000 00   0   0   0  5.0K  10K 4.4K 8.6K  21  26  53

$ for i in 22 24 25 26; do rsh glibc$i cat /etc/motd; done | grep Welcome
Welcome to redhat71.bitmover.com, a 2Ghz Athlon running Red Hat 7.1.
Welcome to glibc24.bitmover.com, a 1.2Ghz Athlon running SUSE 10.1.
Welcome to glibc25.bitmover.com, a 2Ghz Athlon running Fedora Core 6
Welcome to glibc26.bitmover.com, a 2Ghz Athlon running Fedora Core 7

$ for i in 22 24 25 26; do rsh glibc$i uname -r; done
2.4.2-2
2.6.16.13-4-default
2.6.18-1.2798.fc6
2.6.22.4-65.fc7

No HP in the mix.  It's got nothing to do with hp, nor to do with rsh, it 
has everything to do with the direction the data is flowing.  
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Linus Torvalds


On Tue, 2 Oct 2007, Larry McVoy wrote:
 
 tcpdump is a good idea, take a look at this.  The window starts out
 at 46 and never opens up in my test case, but in the rsh case it 
 starts out the same but does open up.  Ideas?

I don't think that's an issue, since you only send one way. The window 
opening up only matters for the receiver. Also, you missed the wscale=7 
at the beginning, so the window of 46 looks like it actually is 5888 (ie 
fits four segments - and it's not grown because it never gets any data).

However, I think this is some strange TSO artifact:

...
 08:08:18.843942 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 48181:64241(16060) ack 0 win 46
 08:08:18.844681 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 48181 win 32768
 08:08:18.844690 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: P 64241:80301(16060) ack 0 win 46
 08:08:18.845556 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 64241 win 32768
 08:08:18.845566 IP work-cluster.bitmover.com.31235 > hp-ia64.bitmover.com.49614: . 80301:96361(16060) ack 0 win 46
 08:08:18.846304 IP hp-ia64.bitmover.com.49614 > work-cluster.bitmover.com.31235: . ack 80301 win 32768
...

We see a single packet containing 16060 bytes, which seems to be because 
of TSO on the sending side (you did your tcpdump on the sender, no?), so 
it will actually be broken up into 11 1460-byte regular frames by the 
network card, since they started out agreeing on a standard 1460-byte MSS. 
So the above is not a jumbo frame, it just kind of looks like one when you 
capture it on the sender side.

And maybe a 32kB window is not big enough when it causes the networking 
code to basically just have a single packet outstanding.

I also would have expected more ACK's from the HP box. It's been a long 
time since I did TCP, but I thought the rule was still that you were 
supposed to ACK at least every other full frame - but the HP box is acking 
roughly every 16K (and it's *not* always at TSO boundaries: the earlier 
ACK's in the sequence are at 1460-byte packet boundaries, but it does seem 
to end up getting into that pattern later on).

So I'm wondering if we get into some bad pattern with the networking code 
trying to make big TSO packets for e1000, but because they are *so* big 
that there's only room for two such packets per window, you don't get into 
any smooth pattern with lots of outstanding packets, but it starts 
stuttering.

Larry, try turning off TSO. Or rather, make the kernel use a smaller limit 
for the large packets. The easiest way to do that should be to just change 
the value in /proc/sys/net/ipv4/tcp_tso_win_divisor. It defaults to 3, try 
doing

echo 6 > /proc/sys/net/ipv4/tcp_tso_win_divisor

and see if that changes anything.

And maybe I'm just whistling in the dark. In fact, it looks like for you 
it's not 3, but 2 (window of 32768, but the TSO frames are half the size). 
So maybe I'm just totally confused and I'm not reading that tcp dump 
correctly at all!

Linus



Re: tcp bw in 2.6

2007-10-02 Thread Linus Torvalds


On Tue, 2 Oct 2007, Larry McVoy wrote:
 
 No HP in the mix.  It's got nothing to do with hp, nor to do with rsh, it 
 has everything to do with the direction the data is flowing.  

Can you tcpdump both cases and send snippets (both of steady-state, and 
the initial connect)? 

Linus


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
More data, we've conclusively eliminated the card / cpu from the mix.
We've got 2 ia64 boxes with e1000 interfaces.  One box is running
linux 2.6.12 and the other is running hpux 11.

I made sure the linux one was running at gigabit and reran the tests
from the linux/ia64 => hp/ia64.  Same results, when linux sends
it is slow, when it receives it is fast.

And note carefully: we've removed hpux from the equation, we can do
the same tests from linux to multiple linux clients and see the same
thing, sending from the server is slow, receiving on the server is
fast.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

Larry McVoy wrote:

On Tue, Oct 02, 2007 at 11:01:47AM -0700, Rick Jones wrote:


has anyone already asked whether link-layer flow-control is enabled?



I doubt it, the same test works fine in one direction and poorly in the other.
Wouldn't the flow control squelch either way?


While I am often guilty of it, a wise old engineer tried to teach me that the 
proper spelling is ass-u-me :)  I wouldn't count on it hitting in both 
directions, depends on the specifics of the situation.


WRT the HP-UX ACK avoidance heuristic, the default HP-UX socket buffer/window is 
32768, and tcp_deferred_ack_max defaults to 22.  That isn't really all that good 
a combination - with a window of 32768, 11 for the deferred ack would be better. 
You could also go ahead and try it with a value of 2.  Or, bump the window 
size defaults - tcp_recv_hiwater_def and tcp_xmit_hiwater_def - to say 65535 or 
128K or something - or use the setsockopt() calls to effect that.


rick jones


Re: tcp bw in 2.6

2007-10-02 Thread John Heffner

Larry McVoy wrote:

More data, we've conclusively eliminated the card / cpu from the mix.
We've got 2 ia64 boxes with e1000 interfaces.  One box is running
linux 2.6.12 and the other is running hpux 11.

I made sure the linux one was running at gigabit and reran the tests
from the linux/ia64 => hp/ia64.  Same results, when linux sends
it is slow, when it receives it is fast.

And note carefully: we've removed hpux from the equation, we can do
the same tests from linux to multiple linux clients and see the same
thing, sending from the server is slow, receiving on the server is
fast.



I think I'm still missing some basic data here (probably because this 
thread did not originate on netdev).  Let me try to nail down some of 
the basics.  You have a linux ia64 box (running 2.6.12 or 2.6.18?) that 
sends slowly, and receives faster, but not quite 1 Gbps?  And this is 
true regardless of which peer it sends or receives from?  And the 
behavior is different depending on which kernel?  How, and which kernel 
versions?  Do you have other hardware running the same kernel that 
behaves the same or differently?


Have you done ethernet cable tests?  Have you tried measuring the udp 
sending rate?  (Iperf can do this.)  Are there any error counters on the 
interface?


  -John


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones
I also would have expected more ACK's from the HP box. It's been a long 
time since I did TCP, but I thought the rule was still that you were 
supposed to ACK at least every other full frame - but the HP box is acking 
roughly every 16K (and it's *not* always at TSO boundaries: the earlier 
ACK's in the sequence are at 1460-byte packet boundaries, but it does seem 
to end up getting into that pattern later on).


Drift...

The RFCs say SHOULD (emphasis theirs) rather than MUST.

Both HP-UX and Solaris have rather robust ACK avoidance heuristics to cut down 
on the CPU overhead of bulk transfers.  (That they both have them stems from 
their being cousins, sharing a common TCP stack ancestor long ago - both of 
course have been diverging since then).


rick jones


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 I think I'm still missing some basic data here (probably because this 
 thread did not originate on netdev).  Let me try to nail down some of 
 the basics.  You have a linux ia64 box (running 2.6.12 or 2.6.18?) that 
 sends slowly, and receives faster, but not quite a 1 Gbps?  And this is 
 true regardless of which peer it sends or receives from?  And the 
 behavior is different depending on which kernel?  How, and which kernel 
 versions?  Do you have other hardware running the same kernel that 
 behaves the same or differently?

just got off the phone with Linus and he thinks the side that does
the accept is the problem side, i.e., if you are the server, you do the
accept, and you send the data, you'll go slow.  But as I'm writing this
I realize he's wrong, because it is the combination of accept & send.
accept & recv goes fast.

A trivial way to see the problem is to take two linux boxes, on each
apt-get install rsh-client rsh-server
set up your .rhosts,
and then do

dd if=/dev/zero count=10 | rsh OTHER_BOX dd of=/dev/null
rsh OTHER_BOX dd if=/dev/zero count=10 | dd of=/dev/null

See if you get balanced results.  For me, I get 45MB/sec one way, and
15-19MB/sec the other way.

I've tried the same test linux - linux and linux - hpux.  Same results.
The test setup I have is

work:   2ghz x 2 Athlons, e1000, 2.6.18
ia64:   900mhz Itanium, e1000, 2.6.12
hp-ia64:900mhz Itanium, e1000, hpux 11
glibc*: 1-2ghz athlons running various linux releases

all connected through a netgear 724T 10/100/1000 switch (a linksys showed
identical results).

I tested 

work - hp-ia64
work - ia64
ia64 - hp-ia64

and in all cases, one direction worked fast and the other didn't.

It would be good if people tried the same simple test.  You have to
use rsh, ssh will slow things down way too much.

Alternatively, take your favorite test programs, such as John's,
and make a second pair that reverses the direction the data is 
sent.  So one pair is server sends, the other is server receives,
try both.  That's where we started, BitKeeper, my stripped down test,
and John's test all exhibit the same behavior.  And the rsh test
is just a really simple way to demonstrate it.

Wayne, Linus asked for tcp dumps from just one side, with the first 100
packets and then wait 10 seconds or so for the window to open up, and then
a snapshot of another 100 packets.  Do that for both directions
and send them to the list.  Can you do that?  I want to get lunch, I'm
starving.
-- 
---
Larry McVoy  lm at bitmover.com   http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: Linus Torvalds [EMAIL PROTECTED]
Date: Tue, 2 Oct 2007 12:29:50 -0700 (PDT)

 On Tue, 2 Oct 2007, Larry McVoy wrote:
  
  No HP in the mix.  It's got nothing to do with hp, nor to do with rsh, it 
  has everything to do with the direction the data is flowing.  
 
 Can you tcpdump both cases and send snippets (both of steady-state, and 
 the initial connect)? 

Another thing I'd like to see is if something more recent than 2.6.18
also reproduces the problem.

It could be just some bug we've fixed in the past year :)


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: Linus Torvalds [EMAIL PROTECTED]
Date: Tue, 2 Oct 2007 12:27:53 -0700 (PDT)

 We see a single packet containing 16060 bytes, which seems to be because 
 of TSO on the sending side (you did your tcpdump on the sender, no?), so 
 it will actually be broken up into 11 1460-byte regular frames by the 
 network card, since they started out agreeing on a standard 1460-byte MSS. 
 So the above is not a jumbo frame, it just kind of looks like one when you 
 capture it on the sender side.
 
 And maybe a 32kB window is not big enough when it causes the networking 
 code to basically just have a single packet outstanding.

We fixed a lot of bugs in TSO last year.

It would be really great to see numbers with a more recent kernel
than 2.6.18



Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

Alternatively, take your favorite test programs, such as John's,
and make a second pair that reverses the direction the data is 
sent.  So one pair is server sends, the other is server receives,

try both.  That's where we started, BitKeeper, my stripped down test,
and John's test all exhibit the same behavior.  And the rsh test
is just a really simple way to demonstrate it.


Netperf TCP_STREAM - server receives.  TCP_MAERTS (STREAM backwards) - server 
sends:

[EMAIL PROTECTED] ~]# netperf -H 192.168.2.107
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.107 
(192.168.2.107) port 0 AF_INET : demo

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380  87380    10.17     941.46
[EMAIL PROTECTED] ~]# netperf -H 192.168.2.107 -t TCP_MAERTS
TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.107 
(192.168.2.107) port 0 AF_INET : demo

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380  87380    10.15     941.35

The above took all the defaults for socket buffers and such.

 [EMAIL PROTECTED] ~]# uname -a
Linux hpcpc106.cup.hp.com 2.6.18-8.el5 #1 SMP Fri Jan 26 14:16:09 EST 2007 ia64 
ia64 ia64 GNU/Linux


[EMAIL PROTECTED] ~]# ethtool -i eth2
driver: e1000
version: 7.2.7-k2-NAPI
firmware-version: N/A
bus-info: :06:01.0

between a pair of 1.6 GHz itanium2 montecito rx2660's with a dual-port HP A9900A 
(Intel 82546GB) in slot 3 of the io cage on each.  Connection is actually 
back-to-back rather than through a switch.  I'm afraid I've nothing older installed.


sysctl settings attached

Where I do have things connected via a switch (HP ProCurve 3500 IIRC, perhaps a 
2724) is through the core BCM5704:


[EMAIL PROTECTED] netperf2_work]# netperf -H hpcpc107
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to hpcpc107.cup.hp.com 
(16.89.84.107) port 0 AF_INET : demo

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380  87380    10.03       941.41

[EMAIL PROTECTED] netperf2_work]# netperf -H hpcpc107 -t TCP_MAERTS
TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to hpcpc107.cup.hp.com 
(16.89.84.107) port 0 AF_INET : demo

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380  87380    10.03       941.37

[EMAIL PROTECTED] netperf2_work]# ethtool -i eth0
driver: tg3
version: 3.65-rh
firmware-version: 5704-v3.27
bus-info: :01:02.0

rick jones
net.ipv6.conf.eth2.router_probe_interval = 60
net.ipv6.conf.eth2.accept_ra_rtr_pref = 1
net.ipv6.conf.eth2.accept_ra_pinfo = 1
net.ipv6.conf.eth2.accept_ra_defrtr = 1
net.ipv6.conf.eth2.max_addresses = 16
net.ipv6.conf.eth2.max_desync_factor = 600
net.ipv6.conf.eth2.regen_max_retry = 5
net.ipv6.conf.eth2.temp_prefered_lft = 86400
net.ipv6.conf.eth2.temp_valid_lft = 604800
net.ipv6.conf.eth2.use_tempaddr = 0
net.ipv6.conf.eth2.force_mld_version = 0
net.ipv6.conf.eth2.router_solicitation_delay = 1
net.ipv6.conf.eth2.router_solicitation_interval = 4
net.ipv6.conf.eth2.router_solicitations = 3
net.ipv6.conf.eth2.dad_transmits = 1
net.ipv6.conf.eth2.autoconf = 1
net.ipv6.conf.eth2.accept_redirects = 1
net.ipv6.conf.eth2.accept_ra = 1
net.ipv6.conf.eth2.mtu = 1500
net.ipv6.conf.eth2.hop_limit = 64
net.ipv6.conf.eth2.forwarding = 0
net.ipv6.conf.eth0.router_probe_interval = 60
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_pinfo = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.max_addresses = 16
net.ipv6.conf.eth0.max_desync_factor = 600
net.ipv6.conf.eth0.regen_max_retry = 5
net.ipv6.conf.eth0.temp_prefered_lft = 86400
net.ipv6.conf.eth0.temp_valid_lft = 604800
net.ipv6.conf.eth0.use_tempaddr = 0
net.ipv6.conf.eth0.force_mld_version = 0
net.ipv6.conf.eth0.router_solicitation_delay = 1
net.ipv6.conf.eth0.router_solicitation_interval = 4
net.ipv6.conf.eth0.router_solicitations = 3
net.ipv6.conf.eth0.dad_transmits = 1
net.ipv6.conf.eth0.autoconf = 1
net.ipv6.conf.eth0.accept_redirects = 1
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.mtu = 1500
net.ipv6.conf.eth0.hop_limit = 64
net.ipv6.conf.eth0.forwarding = 0
net.ipv6.conf.default.router_probe_interval = 60
net.ipv6.conf.default.accept_ra_rtr_pref = 1
net.ipv6.conf.default.accept_ra_pinfo = 1
net.ipv6.conf.default.accept_ra_defrtr = 1
net.ipv6.conf.default.max_addresses = 16
net.ipv6.conf.default.max_desync_factor = 600
net.ipv6.conf.default.regen_max_retry = 5
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.temp_valid_lft = 604800
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.default.force_mld_version = 0
net.ipv6.conf.default.router_solicitation_delay = 1

Re: tcp bw in 2.6

2007-10-02 Thread Roland Dreier
  It would be really great to see numbers with a more recent kernel
  than 2.6.18

FWIW Debian has binaries for 2.6.21 in testing and for 2.6.22 in
unstable so it should be very easy for Larry to try at least those.

 - R.


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: [EMAIL PROTECTED] (Larry McVoy)
Date: Tue, 2 Oct 2007 09:48:58 -0700

 Isn't this something so straightforward that you would have tests for it?
 This is the basic FTP server loop, doesn't someone have a big machine with
 10gig cards and test that sending/recving data doesn't regress?

Nobody is really doing this, or they aren't talking about it.
Sometimes the crash fixes and other work completely consumes us.  Add
in travel to conferences and real life, and it's no surprise stuff
like this slips through the cracks.

We absolutely depend upon people like you to report when there are
anomalies like this.  It's the only thing that scales.

FWIW I have a t1000 Niagara box and an Ultra45 going through a netgear
gigabit switch.  I'm getting 85MB/sec in one direction and 10MB/sec in
the other (using bw_tcp from lmbench3).  Both are using identical
broadcom tigon3 gigabit chips and identical current kernels so that is
a truly strange result.

I'll investigate, it may be the same thing you're seeing.


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
 We fixed a lot of bugs in TSO last year.
 
 It would be really great to see numbers with a more recent kernel
 than 2.6.18

More data, sky2 works fine (really really fine, like 79MB/sec) between
Linux dylan.bitmover.com 2.6.18.1 #5 SMP Mon Oct 23 17:36:00 PDT 2006 i686
Linux steele 2.6.20-16-generic #2 SMP Sun Sep 23 18:31:23 UTC 2007 x86_64

So this is looking like an e1000 bug.  I'll try to upgrade the kernel on 
the ia64 box and see what happens.
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 02:16:56PM -0700, David Miller wrote:
 We absolutely depend upon people like you to report when there are
 anomalies like this.  It's the only thing that scales.

Well cool, finally doing something useful :)

Is the issue that there's no test setup?  Because this does seem like
something we'd want to have work well.

 FWIW I have a t1000 Niagara box and an Ultra45 going through a netgear
 gigabit switch.  I'm getting 85MB/sec in one direction and 10MB/sec in
 the other (using bw_tcp from lmbench3).  

Note that bw_tcp mucks with SND/RCVBUF.  It probably shouldn't, it's been
12 years since that code went in there and I dunno if it is still needed.

 Both are using identical
 broadcom tigon3 gigabit chips and identical current kernels so that is
 a truly strange result.
 
 I'll investigate, it may be the same thing you're seeing.

Wow, sounds very similar.  In my case I was seeing pretty close to 3x
consistently.  You're more like 8x, but I was all e1000 not broadcom.

And note that sky2 doesn't have this problem.  Does the broadcom do TSO?
And sky2 not?  I noticed a much higher CPU load for sky2.
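
As an aside, whether a driver has TSO turned on is easiest to check with
"ethtool -k ethX"; the same flag can also be read programmatically through
the ETHTOOL_GTSO ioctl.  A rough sketch (not from this thread; "eth0" is just
a placeholder and error handling is minimal):

/*
 * check_tso.c - hypothetical helper: report whether TSO is enabled on an
 * interface, roughly what "ethtool -k" shows, via the ETHTOOL_GTSO ioctl.
 */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int
main(int ac, char **av)
{
	struct ifreq ifr;
	struct ethtool_value ev;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ac > 1 ? av[1] : "eth0", IFNAMSIZ - 1);
	ev.cmd = ETHTOOL_GTSO;			/* get TSO on/off state */
	ifr.ifr_data = (char *)&ev;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("SIOCETHTOOL");
		return 1;
	}
	printf("%s: TSO %s\n", ifr.ifr_name, ev.data ? "on" : "off");
	return 0;
}
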
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: [EMAIL PROTECTED] (Larry McVoy)
Date: Tue, 2 Oct 2007 14:26:08 -0700

 And note that sky2 doesn't have this problem.  Does the broadcom do TSO?
 And sky2 not?  I noticed a much higher CPU load for sky2.

Yes the broadcoms (the revisions I have) do TSO and it is enabled
on both sides.

Which makes the mis-matched performance even stranger :)


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: [EMAIL PROTECTED] (Larry McVoy)
Date: Tue, 2 Oct 2007 11:40:32 -0700

 I doubt it, the same test works fine in one direction and poorly in the other.
 Wouldn't the flow control squelch either way?

HW controls for these things are typically:

1) Generates flow control frames
2) Listens for them

So you can have flow control operational in one direction
and not the other.


Re: tcp bw in 2.6

2007-10-02 Thread Linus Torvalds


On Tue, 2 Oct 2007, Wayne Scott wrote:
 
 The slow set was done like this:
 
  on ia64:  netcat -l -p  /dev/null
  on work:  netcat ia64   /dev/zero

That sounds wrong. Larry claims the slow case is when the side that did
accept() does the sending; the above has the listener just reading.

 The fast set was done like this:
 
  on work:  netcat -l -p  /dev/null
  on ia64:  netcat ia64   /dev/zero

This one is guaranteed wrong too, since you have the listener reading
(fine), but the sender now doesn't go over the network at all; it sends to
itself.

That said, let's assume that only your description was bogus and the TCP
dumps themselves are ok.

I find the window scaling differences interesting. This is the opening of 
the fast sequence from the receiver:

13:35:13.929349 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: S 2592471184:2592471184(0) ack 3363219397 win 5792 <mss 1460,sackOK,timestamp 174966955 3714830794,nop,wscale 7>
13:35:13.929702 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 1449 win 68 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929712 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 2897 win 91 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929724 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 4345 win 114 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929941 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 5793 win 136 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929951 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 7241 win 159 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929960 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 8689 win 181 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929970 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 10137 win 204 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929981 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 11585 win 227 <nop,nop,timestamp 174966955 3714830795>
13:35:13.929992 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 13033 win 249 <nop,nop,timestamp 174966955 3714830795>
13:35:13.930331 IP 10.3.1.1.ddi-tcp-1 > 10.3.1.10.58415: . ack 14481 win 272 <nop,nop,timestamp 174966955 3714830795>
 ...

ie we use a window scale of 7, and we started with a window of 5792 bytes, 
and after ten packets it has grown to 272<<7 (34816) bytes.

The slow case is 

13:34:16.761034 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: S 3299922549:3299922549(0) ack 2548837296 win 5792 <mss 1460,sackOK,timestamp 3714772254 174952667,nop,wscale 2>
13:34:16.761533 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 1449 win 2172 <nop,nop,timestamp 3714772255 174952667>
13:34:16.761553 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 2897 win 2896 <nop,nop,timestamp 3714772255 174952667>
13:34:16.761782 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 4345 win 3620 <nop,nop,timestamp 3714772255 174952667>
13:34:16.761908 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 5793 win 4344 <nop,nop,timestamp 3714772255 174952667>
13:34:16.761916 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 7241 win 5068 <nop,nop,timestamp 3714772255 174952667>
13:34:16.762157 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 8689 win 5792 <nop,nop,timestamp 3714772255 174952667>
13:34:16.762164 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 10137 win 6516 <nop,nop,timestamp 3714772255 174952667>
13:34:16.762283 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 11585 win 7240 <nop,nop,timestamp 3714772256 174952667>
13:34:16.762290 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 13033 win 7964 <nop,nop,timestamp 3714772256 174952667>
13:34:16.762303 IP 10.3.1.10.ddi-tcp-1 > 10.3.1.1.49864: . ack 14481 win 8688 <nop,nop,timestamp 3714772256 174952667>
...

so after the same ten packets, it too has grown to about the same 
size (8688<<2 = 34752 bytes). 

But the slow case has a smaller window scale, and it actually stops 
opening the window at that point: the window stays at 8688<<2 for a long 
time (and eventually grows to 9412<<2 and then 16652<<2 in the steady 
case, and is basically limited at that 66kB window size).

But the fast one that had a window scale of 7 can keep growing, and will 
do so quite aggressively. It grows the window to (1442<<7 = 180kB) in the 
first fifty packets.

But in your dump, it doesn't seem to be about who is listening and who is 
connecting. It seems to be about the fact that your machine 10.3.1.10 uses 
a window scale of 2, while 10.3.1.1 uses a scale of 7.
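
To spell out the arithmetic: the 16-bit window field in each ACK is shifted
left by the window scale negotiated on the SYN, so a scale of 2 can never
advertise more than 65535<<2 (about 256kB) and here plateaus around 66kB,
while a scale of 7 allows roughly 8MB.  A tiny illustration using the numbers
from the two traces above:

#include <stdio.h>

int
main(void)
{
	/* fast direction, wscale 7 */
	printf("272 << 7   = %u\n", 272u << 7);		/* 34816 bytes after 10 packets */
	printf("1442 << 7  = %u\n", 1442u << 7);	/* ~180kB after 50 packets */

	/* slow direction, wscale 2 */
	printf("8688 << 2  = %u\n", 8688u << 2);	/* 34752 bytes after 10 packets */
	printf("16652 << 2 = %u\n", 16652u << 2);	/* ~66kB, where it stays */

	/* ceilings imposed by the 16-bit window field */
	printf("65535 << 2 = %u\n", 65535u << 2);	/* ~256kB max with wscale 2 */
	printf("65535 << 7 = %u\n", 65535u << 7);	/* ~8MB max with wscale 7 */
	return 0;
}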

Linus


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

David Miller wrote:

From: [EMAIL PROTECTED] (Larry McVoy)
Date: Tue, 2 Oct 2007 14:26:08 -0700



And note that sky2 doesn't have this problem.  Does the broadcom do TSO?
And sky2 not?  I noticed a much higher CPU load for sky2.



Yes the broadcoms (the revisions I have) do TSO and it is enabled
on both sides.

Which makes the mis-matched performance even stranger :)


Stranger still, with a mix of a 2.6.23-rc5ish kernel and a net-2.6.24 one 
(pulled oh middle of last week?) I get link-rate and I see no asymmetry between 
TCP_STREAM and TCP_MAERTS over an e1000 link with no switch or tg3 with a 
ProCurve on my rx2660's.


I can also run bw_tcp from lmbench 3.0a8 and get 106 MB/s.

I don't have a netgear switch to try in all this...

rick jones


Re: tcp bw in 2.6

2007-10-02 Thread David Miller
From: Rick Jones [EMAIL PROTECTED]
Date: Tue, 02 Oct 2007 15:17:35 -0700

 Stranger still, with a mix of a 2.6.23-rc5ish kernel and a net-2.6.24 one 
 (pulled oh middle of last week?) I get link-rate and I see no asymmetry 
 between 
 TCP_STREAM and TCP_MAERTS over an e1000 link with no switch or tg3 with a 
 ProCurve on my rx2660's.
 
 I can also run bw_tcp from lmbench 3.0a8 and get 106 MB/s.
 
 I don't have a netgear switch to try in all this...

I'm starting to have a theory about what the bad case might
be.

A strong sender going to an even stronger receiver which can
pull out packets into the process as fast as they arrive.
This might be part of what keeps the receive window from
growing.
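
One crude way to watch for that from user space (a suggestion, not something
anyone in this thread did) is to have the receiver print its receive buffer
size every so often; on Linux, getsockopt(SO_RCVBUF) should reflect the value
the kernel has autotuned the socket to, so a buffer that never grows would
support this theory.  A sketch, where "fd" is the connected socket in whatever
receive loop is being used:

#include <stdio.h>
#include <sys/socket.h>

/* Print the kernel's current idea of the receive buffer for this socket. */
static void
report_rcvbuf(int fd)
{
	int	rcvbuf = 0;
	socklen_t len = sizeof(rcvbuf);

	if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
		fprintf(stderr, "SO_RCVBUF is now %d bytes\n", rcvbuf);
}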


Re: tcp bw in 2.6

2007-10-02 Thread Larry McVoy
On Tue, Oct 02, 2007 at 03:32:16PM -0700, David Miller wrote:
 I'm starting to have a theory about what the bad case might
 be.
 
 A strong sender going to an even stronger receiver which can
 pull out packets into the process as fast as they arrive.
 This might be part of what keeps the receive window from
 growing.

I can back you up on that.  When I straced the receiving side of the
direction that goes slowly, all the reads were short, like 1-2K.  In the
direction that works, the reads were a lot larger as I recall.
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-02 Thread Rick Jones

Larry McVoy wrote:

On Tue, Oct 02, 2007 at 03:32:16PM -0700, David Miller wrote:


I'm starting to have a theory about what the bad case might
be.

A strong sender going to an even stronger receiver which can
pull out packets into the process as fast as they arrive.
This might be part of what keeps the receive window from
growing.



I can back you up on that.  When I straced the receiving side of the
direction that goes slowly, all the reads were short, like 1-2K.  In the
direction that works, the reads were a lot larger as I recall.


Indeed, I was getting more like 8K on each recv() call per netperf's -v 2 stats,
but the system was more than fast enough to stay ahead of the traffic.  On the
hunch that it was the interrupt throttling, rather than the speed of the
system(s), that was keeping the recvs large, I nuked InterruptThrottleRate to 0
and was able to get between 1900 and 2300 byte recvs on the TCP_STREAM and
TCP_MAERTS tests and still had 940 Mbit/s in each direction.


hpcpc106:~# netperf -H 192.168.7.107 -t TCP_STREAM -v 2 -c -C
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.7.107 
(192.168.7.107) port 0 AF_INET

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  87380  87380    10.02       940.95   10.75    21.65    3.743   7.540

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0       0 1.179e+09  87386.29   13491   1965.77  599729

Maximum
Segment
Size (bytes)
  1448
hpcpc106:~# netperf -H 192.168.7.107 -t TCP_MAERTS -v 2 -c -C
TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.7.107 
(192.168.7.107) port 0 AF_INET

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  87380  87380    10.02       940.82   20.44    10.61    7.117   3.696

Alignment      Offset         Bytes    Bytes       Recvs   Bytes    Sends
Local  Remote  Local  Remote  Xfered   Per                 Per
Recv   Send    Recv   Send             Recv (avg)          Send (avg)
    8       8      0       0 1.178e+09   2352.26  500931  87380.00   13485

Maximum
Segment
Size (bytes)
  1448

The systems above had four 1.6 GHz cores; netperf reports CPU as 0 to 100%
regardless of core count.


and then my systems with the 3.0 GHz cores:

[EMAIL PROTECTED] netperf2_trunk]# netperf -H sweb20 -v 2 -t TCP_STREAM -c -C
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to sweb20.cup.hp.com 
(16.89.133.20) port 0 AF_INET

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.03       941.37   6.40     13.26    2.229   4.615

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0       0  1.18e+09  16384.06   72035   1453.85  811793

Maximum
Segment
Size (bytes)
  1448
[EMAIL PROTECTED] netperf2_trunk]# netperf -H sweb20 -v 2 -t TCP_MAERTS -c -C
TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to sweb20.cup.hp.com 
(16.89.133.20) port 0 AF_INET

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.03       941.35   12.13    5.80     4.221   2.018

Alignment      Offset         Bytes    Bytes       Recvs   Bytes    Sends
Local  Remote  Local  Remote  Xfered   Per                 Per
Recv   Send    Recv   Send             Recv (avg)          Send (avg)
    8       8      0       0 1.181e+09   1452.38  812953  16384.00   72065

Maximum
Segment
Size (bytes)
  1448


rick jones


Re: tcp bw in 2.6

2007-10-01 Thread Larry McVoy
On Sat, Sep 29, 2007 at 11:02:32AM -0700, Linus Torvalds wrote:
 On Sat, 29 Sep 2007, Larry McVoy wrote:
  I haven't kept up on switch technology but in the past they were much
  better than you are thinking.  The Kalpana switch that I had modified
  to support vlans (invented by yours truly), did not store and forward,
  it was cut through and could handle any load that was theoretically
  possible within about 1%.
 
 Hey, you may well be right. Maybe my assumptions about cutting corners are 
 just cynical and pessimistic. 

So I got a netgear switch and it works fine.  But my tests are busted.  
Catching netdev up, I'm trying to optimize traffic to a server that has
a gbit interface; I moved to a 24 port netgear that is all 10/100/1000
and I have a pile of clients to act as load generators.

I can do this on each of the clients 

dd if=/dev/zero bs=1024000 | rsh work dd of=/dev/null

and that cranks up to about 47K packets/second which is about 70MB/sec.

One of my clients also has gigabit so I played around with just that
one and it (itanium running hpux w/ broadcom gigabit) can push the load
as well.  One weird thing is that it is dependent on the direction the
data is flowing.  If the hp is sending then I get 46MB/sec, if linux is
sending then I get 18MB/sec.  Weird.  Linux is debian, running 

Linux work 2.6.18-5-k7 #1 SMP Thu Aug 30 02:52:31 UTC 2007 i686 

and dual e1000 cards:

e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth1: e1000_probe: Intel(R) PRO/1000 Network Connection

I wrote a tiny little program to try and emulate this and I can't get
it to do as well.  I've tracked it down, I think, to the read side.
The server sources, the client sinks, the server looks like:

11689 accept(3, {sa_family=AF_INET, sin_port=htons(49376), sin_addr=inet_addr("10.3.1.38")}, [16]) = 4
11689 setsockopt(4, SOL_SOCKET, SO_RCVBUF, [1048576], 4) = 0
11689 setsockopt(4, SOL_SOCKET, SO_SNDBUF, [1048576], 4) = 0
11689 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7ddf708) = 11694
11689 close(4)  = 0
11689 accept(3,  <unfinished ...>
11694 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
11694 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
11694 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
11694 write(4, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1048576
...

but the client looks like

connect(3, {sa_family=AF_INET, sin_port=htons(31235), sin_addr=inet_addr("10.3.9.1")}, 16) = 0
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1448
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 1448
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1048576) = 2896

which I suspect may be the problem.

I played around with SO_RCVBUF/SO_SNDBUF and that didn't help.  So any ideas why
a simple dd piped through rsh is kicking my ass?  It must be something simple
but my test program is tiny and does nothing weird that I can see.
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-01 Thread Linus Torvalds


On Mon, 1 Oct 2007, Larry McVoy wrote:
 
 but the client looks like
 
 connect(3, {sa_family=AF_INET, sin_port=htons(31235), 
 sin_addr=inet_addr(10.3.9.1)}, 16) = 0
 read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 1048576) 
 = 2896
 read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 1048576) 
 = 1448
 read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 1048576) 
 = 2896
..

This is exactly what I'd expect if the machine is *not* under excessive 
load.

The system calls are fast enough that the latency through the TCP stack is
roughly on the same scale as the time it takes to receive one new packet.
Since a socket read returns as soon as it has any data (it does not wait
until it has filled the whole buffer), you get exactly that one-or-two-packet
pattern.

If you'd be really CPU-limited or under load from other programs, you'd 
have more packets come in while you're in the read path, and you'd get 
bigger reads.
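
In other words, the short reads are a symptom of keeping up, not a problem in
themselves.  If you wanted the receiver to block for a full buffer anyway,
recv() with MSG_WAITALL (or a larger SO_RCVLOWAT) would do it; a small sketch,
not something anyone here was using, with "fd" a connected socket:

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

#define CHUNK	(64 * 1024)

/* Drain a socket, waiting for a full CHUNK per call instead of returning as
 * soon as any data is available. */
static long long
drain(int fd)
{
	static char buf[CHUNK];
	long long total = 0;
	ssize_t	n;

	while ((n = recv(fd, buf, sizeof(buf), MSG_WAITALL)) > 0)
		total += n;
	if (n < 0)
		perror("recv");
	return total;
}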

But do a tcpdump both ways, and see (for example) if the TCP window is 
much bigger going the other way.

Linus


Re: tcp bw in 2.6

2007-10-01 Thread Larry McVoy
On Mon, Oct 01, 2007 at 07:14:37PM -0700, Linus Torvalds wrote:
 
 
 On Mon, 1 Oct 2007, Larry McVoy wrote:
  
  but the client looks like
  
  connect(3, {sa_family=AF_INET, sin_port=htons(31235), 
  sin_addr=inet_addr(10.3.9.1)}, 16) = 0
  read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 
  1048576) = 2896
  read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 
  1048576) = 1448
  read(3, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 
  1048576) = 2896
 ..
 
 This is exactly what I'd expect if the machine is *not* under excessive 
 load.

That's fine, but why is it that my trivial program can't do as well as 
dd | rsh dd?

A short summary is can someone please post a test program that sources
and sinks data at the wire speed?  because apparently I'm too old and
clueless to write such a thing.
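
For what it's worth, a minimal source/sink pair really only needs a listening
socket, accept(), and big read()/write() calls, with the socket buffer sizes
left alone.  Something along these lines (a sketch only: the port number and
1MB chunk size are arbitrary, and error handling is minimal):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define PORT	31235
#define CHUNK	(1024 * 1024)

static char	buf[CHUNK];

/* Server: write zeros to every connection until the peer goes away. */
static void
serve(void)
{
	int	s = socket(AF_INET, SOCK_STREAM, 0), c, one = 1;
	struct	sockaddr_in sin;

	setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(PORT);
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0 || listen(s, 5) < 0) {
		perror("bind/listen");
		exit(1);
	}
	while ((c = accept(s, NULL, NULL)) >= 0) {
		while (write(c, buf, sizeof(buf)) > 0)
			;
		close(c);
	}
}

/* Client: connect and read until EOF, counting bytes. */
static void
sink(const char *host)
{
	int	s = socket(AF_INET, SOCK_STREAM, 0);
	struct	sockaddr_in sin;
	struct	hostent *h = gethostbyname(host);
	long long total = 0;
	ssize_t	n;

	if (!h) {
		fprintf(stderr, "unknown host %s\n", host);
		exit(1);
	}
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(PORT);
	memcpy(&sin.sin_addr, h->h_addr_list[0], h->h_length);
	if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		perror("connect");
		exit(1);
	}
	while ((n = read(s, buf, sizeof(buf))) > 0)
		total += n;
	printf("received %lld bytes\n", total);
}

int
main(int ac, char **av)
{
	signal(SIGPIPE, SIG_IGN);	/* so a disappearing client just ends the write loop */
	if (ac == 2 && !strcmp(av[1], "-s"))
		serve();
	else if (ac == 2)
		sink(av[1]);
	else
		fprintf(stderr, "usage: %s -s | %s serverhost\n", av[0], av[0]);
	return 0;
}

Run one side with -s, point the other side at it, and swap which box runs
which to compare the two directions.
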
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com


Re: tcp bw in 2.6

2007-10-01 Thread David Miller
From: [EMAIL PROTECTED] (Larry McVoy)
Date: Mon, 1 Oct 2007 19:20:59 -0700

 A short summary is can someone please post a test program that sources
 and sinks data at the wire speed?  because apparently I'm too old and
 clueless to write such a thing.

You're not showing us your test program so there is no way we
can help you out.

My initial inclination, even without that critical information,
is to ask whether you are setting any socket options in any way?

In particular, SO_RCVLOWAT can have a large effect here; if you're
setting it to something, that would explain why dd is doing better.  A
lot of people link to helper libraries with interfaces that set up
sockets with all sorts of socket option settings by default; try not
to use such things if possible.

You also shouldn't dork at all with the receive and send buffer sizes.
They are adjusted dynamically by the kernel as the window grows.  But
if you set them to specific values, this dynamic logic is turned off.
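
A quick way to see both points on a given box (a sketch; the 1MB value is
only an example) is to read the options back with getsockopt() before and
after setting one explicitly: Linux reports double the requested SO_RCVBUF,
and once it has been set the kernel stops autotuning that socket.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int
main(void)
{
	int	s = socket(AF_INET, SOCK_STREAM, 0);
	int	val, req = 1024 * 1024;
	socklen_t len = sizeof(val);

	getsockopt(s, SOL_SOCKET, SO_RCVLOWAT, &val, &len);
	printf("default SO_RCVLOWAT: %d\n", val);	/* normally 1 */

	len = sizeof(val);
	getsockopt(s, SOL_SOCKET, SO_RCVBUF, &val, &len);
	printf("default SO_RCVBUF:   %d\n", val);

	setsockopt(s, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req));
	len = sizeof(val);
	getsockopt(s, SOL_SOCKET, SO_RCVBUF, &val, &len);
	printf("pinned SO_RCVBUF:    %d\n", val);	/* autotuning is off now */
	return 0;
}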


Re: tcp bw in 2.6

2007-10-01 Thread Larry McVoy
On Mon, Oct 01, 2007 at 08:50:50PM -0700, David Miller wrote:
 From: [EMAIL PROTECTED] (Larry McVoy)
 Date: Mon, 1 Oct 2007 19:20:59 -0700
 
  A short summary is can someone please post a test program that sources
  and sinks data at the wire speed?  because apparently I'm too old and
  clueless to write such a thing.
 
 You're not showing us your test program so there is no way we
 can help you out.

Attached.  Drop it into an lmbench tree and build it.

 My initial inclination, even without that critical information,
 is to ask whether you are setting any socket options in any way?

The only one I was playing with was SO_RCVBUF/SO_SNDBUF and I tried
disabling that and I tried playing with the read/write size.  Didn't
help.

 In particular, SO_RCVLOWAT can have a large effect here; if you're
 setting it to something, that would explain why dd is doing better.  A
 lot of people link to helper libraries with interfaces that set up
 sockets with all sorts of socket option settings by default; try not
 to use such things if possible.

Agreed.  That was my first thought as well: I must have been doing
something that messed up the defaults.  But you did get the strace
output, and there wasn't anything weird there.

 You also shouldn't dork at all with the receive and send buffer sizes.
 They are adjusted dynamically by the kernel as the window grows.  But
 if you set them to specific values, this dynamic logic is turned off.

Yeah, dorking with those is left over from the bad old days of '95
when lmbench was first shipped.  But I turned that all off and no
difference.

So feel free to show me where I'm an idiot in the code, but if you
can't, then what would rock would be a little send.c / recv.c that
demonstrated filling the pipe.
-- 
---
Larry McVoy    lm at bitmover.com    http://www.bitkeeper.com
/*
 * bytes_tcp.c - simple TCP bandwidth source/sink
 *
 *	server usage:	bytes_tcp -s
 *	client usage:	bytes_tcp hostname [msgsize]
 *
 * Copyright (c) 1994 Larry McVoy.  
 * Copyright (c) 2002 Carl Staelin.  Distributed under the FSF GPL with
 * additional restriction that results may published only if
 * (1) the benchmark is unmodified, and
 * (2) the version in the sccsid below is included in the report.
 * Support for this development by Sun Microsystems is gratefully acknowledged.
 */
char	*id = "$Id$\n";
#include "bench.h"
#define	XFER	(1024*1024)

int	server_main(int ac, char **av);
int	client_main(int ac, char **av);
void	source(int data);

void
transfer(int get, int server, char *buf)
{
	int	c;

	while ((get > 0) && (c = read(server, buf, XFER)) > 0) {
		get -= c;
	}
	if (c < 0) {
		perror("bytes_tcp: transfer: read failed");
		exit(4);
	}
}

/* ARGSUSED */
int
client_main(int ac, char **av)
{
	int	server;
	int	get = 256 << 20;
	char	buf[XFER];
	char*	usage = "usage: %s -remotehost OR %s remotehost [msgsize]\n";

	if (ac != 2 && ac != 3) {
		(void)fprintf(stderr, usage, av[0], av[0]);
		exit(0);
	}
	if (ac == 3) get = bytes(av[2]);
	server = tcp_connect(av[1], TCP_DATA+1, SOCKOPT_READ|SOCKOPT_REUSE);
	if (server < 0) {
		perror("bytes_tcp: could not open socket to server");
		exit(2);
	}
	transfer(get, server, buf);
	close(server);
	exit(0);
	/*NOTREACHED*/
}

void
child()
{
	wait(0);
	signal(SIGCHLD, child);
}

/* ARGSUSED */
int
server_main(int ac, char **av)
{
	int	data, newdata;

	signal(SIGCHLD, child);
	data = tcp_server(TCP_DATA+1, SOCKOPT_READ|SOCKOPT_WRITE|SOCKOPT_REUSE);
	for ( ;; ) {
		newdata = tcp_accept(data, SOCKOPT_WRITE|SOCKOPT_READ);
		switch (fork()) {
		case -1:
			perror("fork");
			break;
		case 0:
			source(newdata);
			exit(0);
		default:
			close(newdata);
			break;
		}
	}
}

void
source(int data)
{
	char	buf[XFER];

	while (write(data, buf, sizeof(buf)) > 0);
}


int
main(int ac, char **av)
{
	char*	usage = "Usage: %s -s OR %s -serverhost OR %s serverhost [msgsize]\n";
	if (ac < 2 || 3 < ac) {
		fprintf(stderr, usage, av[0], av[0], av[0]);
		exit(1);
	}
	if (ac == 2 && !strcmp(av[1], "-s")) {
		if (fork() == 0) server_main(ac, av);
		exit(0);
	} else {
		client_main(ac, av);
	}
	return(0);
}
/*
 * tcp_lib.c - routines for managing TCP connections.
 *
 * Positive port/program numbers are RPC ports, negative ones are TCP ports.
 *
 * Copyright (c) 1994-1996 Larry McVoy.
 */
#define		_LIB /* bench.h needs this */
#include	"bench.h"

/*
 * Get a TCP socket, bind it, figure out the port,
 * and advertise the port as program prog.
 *
 * XXX - it would be nice if you could advertise ascii strings.
 */
int
tcp_server(int prog, int rdwr)
{
	int	sock;
	struct	sockaddr_in s;

#ifdef	LIBTCP_VERBOSE
	fprintf(stderr, "tcp_server(%u, %u)\n", prog, rdwr);
#endif
	if ((sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
		perror("socket");
		exit(1);
	}
	sock_optimize(sock, rdwr);
	bzero((void*)&s, sizeof(s));
	s.sin_family = AF_INET;
	if (prog < 0) {
		s.sin_port = htons(-prog);
	}
	if (bind(sock, (struct sockaddr*)&s, sizeof(s)) < 0) {
		perror("bind");
		exit(2);
	}
	if (listen(sock, 100) < 0) {
		perror("listen");