Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-22 Thread Rick Jones
When parsing the -P option in scan_socket_args() of src/nettest_bsd.c,
netperf uses break_args() from src/netsh.c, which, if the command line
says -P 12345, will indeed set both the local and remote port numbers to
12345.  If instead you were to say -P 12345, (note the trailing comma) it
will use 12345 only for the netperf side.  If you say -P ,12345 it will
use 12345 only for the netserver side.  To set both sides at once to
different values it would be -P 12345,54321
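Purely as an illustration of that splitting behavior, here is a rough standalone
sketch (my own mock-up, not the actual break_args() code from src/netsh.c; the
function and variable names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical re-creation of the "first,second" splitting idea:
 *   "12345"       -> local 12345, remote 12345
 *   "12345,"      -> local 12345, remote left at its default
 *   ",12345"      -> local left at its default, remote 12345
 *   "12345,54321" -> local 12345, remote 54321
 */
static void split_ports(const char *arg, int *local_port, int *remote_port)
{
        char buf[64];
        char *comma;

        strncpy(buf, arg, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        comma = strchr(buf, ',');
        if (comma == NULL) {
                /* no comma: the single value applies to both sides */
                *local_port = *remote_port = atoi(buf);
                return;
        }
        *comma = '\0';
        if (buf[0] != '\0')
                *local_port = atoi(buf);         /* text before the comma */
        if (comma[1] != '\0')
                *remote_port = atoi(comma + 1);  /* text after the comma  */
}

int main(void)
{
        int local = 0, remote = 0;

        split_ports("12345,54321", &local, &remote);
        printf("local %d remote %d\n", local, remote);  /* local 12345 remote 54321 */
        return 0;
}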


In theory, send_udp_rr() in src/nettest_bsd.c (or, I suppose,
scan_socket_args()) could have more code added to it to check for a UDP
test over loopback, though it would probably need to be a check for any
local IP.  Unless this becomes something bigger than "Doctor! Doctor! It
hurts when I do this!" :) I'm inclined to leave it as caveat benchmarker,
with perhaps some additional text in the manual.


rick jones


Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-22 Thread Zhang, Yanmin
On Tue, 2008-01-22 at 10:36 -0800, Rick Jones wrote:
 When parsing the -P option in scan_socket_args() of src/nettest_bsd.c, 
 netperf is using break_args() from src/netsh.c which indeed if the 
 command line says -P 12345 will set both the local and remote port 
 numbers to 12345.  If instead you were to say -P 12345,  it will use 
 12345 only for the netperf side.  If you say -P ,12345 it will use 
 12345 only for the netserver side.  To set both sides at once to 
 different values it would be -P 12345,54321
 
 In theory, send_udp_rr() in src/nettest_bsd.c (or I suppose 
 scan_socket_args() could have more code added to it to check for a UDP 
 test over loopback, but probably needs to be a check for any local IP, 
 and unless this becomes something bigger than Doctor! Doctor! It hurts 
 when I do this! :) I'm inclined to leave it as caveat benchmarker and 
 perhaps some additional text in the manual.
I will instrument the kernel to see whether it works as expected.

When an issue is found, we shouldn't dodge it by saying it has nothing to
do with us.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-22 Thread Zhang, Yanmin
On Wed, 2008-01-23 at 08:42 +0800, Zhang, Yanmin wrote:
 On Tue, 2008-01-22 at 10:36 -0800, Rick Jones wrote:
  When parsing the -P option in scan_socket_args() of src/nettest_bsd.c, 
  netperf is using break_args() from src/netsh.c which indeed if the 
  command line says -P 12345 will set both the local and remote port 
  numbers to 12345.  If instead you were to say -P 12345,  it will use 
  12345 only for the netperf side.  If you say -P ,12345 it will use 
  12345 only for the netserver side.  To set both sides at once to 
  different values it would be -P 12345,54321
  
  In theory, send_udp_rr() in src/nettest_bsd.c (or I suppose 
  scan_socket_args() could have more code added to it to check for a UDP 
  test over loopback, but probably needs to be a check for any local IP, 
  and unless this becomes something bigger than Doctor! Doctor! It hurts 
  when I do this! :) I'm inclined to leave it as caveat benchmarker and 
  perhaps some additional text in the manual.
 I will instrument kernel to see if kernel does work like it is expected.
 
 When an issue is found, we shouldn't escape by saying it's nothing to do
 with me.
 
I went through the netperf source again and did a step-by-step debug with gdb.

Both sides bind 0.0.0.0:12384 to their own sockets, and netperf binds first.
When netperf calls connect to set the server address 127.0.0.1:12384, the kernel
chooses socket A's queue.  The kernel is correct.

Another observation is that no matter which side binds 0.0.0.0:12384 first, netperf
always sends packets to its own socket.  I suspect the connect call netperf uses to
set the server ip/port has this side effect, as the server doesn't call connect.

It would be good to add additional text to the netperf manual.

Sorry, and thanks for your kind responses.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Zhang, Yanmin
On Mon, 2008-01-14 at 09:46 -0800, Rick Jones wrote:
 *) netperf/netserver support CPU affinity within themselves with the 
 global -T option to netperf.  Is the result with taskset much different? 
The equivalent to the above would be to run netperf with:
 
 ./netperf -T 0,7 ..
  
  I checked the source codes and didn't find this option.
  I use netperf V2.3 (I found the number in the makefile).
 
 Indeed, that version pre-dates the -T option.  If you weren't already 
 chasing a regression I'd suggest an upgrade to 2.4.mumble.  Once you are 
 at a point where changing another variable won't muddle things you may 
 want to consider upgrading.
 
 happy benchmarking,
Rick,

I found that my UDP_RR test just loops inside netperf instead of ping-ponging
between netserver and netperf.  Is that expected?  TCP_RR is ok.

#./netserver
#./netperf -t UDP_RR -l 60 -H 127.0.0.1 -i 30,3 -I 99,5 -- -P 12384 -r 1,1

Thanks,
-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Zhang, Yanmin
On Tue, 2008-01-22 at 13:24 +0800, Zhang, Yanmin wrote:
 On Mon, 2008-01-14 at 09:46 -0800, Rick Jones wrote:
  *) netperf/netserver support CPU affinity within themselves with the 
  global -T option to netperf.  Is the result with taskset much different? 
 The equivalent to the above would be to run netperf with:
  
  ./netperf -T 0,7 ..
   
   I checked the source codes and didn't find this option.
   I use netperf V2.3 (I found the number in the makefile).
  
  Indeed, that version pre-dates the -T option.  If you weren't already 
  chasing a regression I'd suggest an upgrade to 2.4.mumble.  Once you are 
  at a point where changing another variable won't muddle things you may 
  want to consider upgrading.
  
  happy benchmarking,
 Rick,
 
 I found my UDP_RR testing is just loop in netperf instead of ping-pang between
 netserver and netperf. Is it correct? TCP_RR is ok.
 
 #./netserver
 #./netperf -t UDP_RR -l 60 -H 127.0.0.1 -i 30,3 -I 99,5 -- -P 12384 -r 1,1
I dug into netperf and netserver.

netperf binds ip 0.0.0.0 and port 12384 to its own socket.  netserver binds ip
127.0.0.1 and port 12384 to its own socket.  Then netperf calls connect to set the
server address 127.0.0.1 and port 12384.  Then netperf starts sending UDP packets,
but all the packets netperf sends are received by netperf itself; netserver doesn't
receive any packet.

I think the netperf bind should fail, or netperf shouldn't get the packets it sends
out, because netserver has already bound port 12384.

I am wondering if the UDP stack in the kernel has a bug.

TCP_RR testing doesn't have this issue.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread David Miller
From: Zhang, Yanmin [EMAIL PROTECTED]
Date: Tue, 22 Jan 2008 14:07:19 +0800

 I am wondering if UDP stack in kernel has a bug.

If one server binds to INADDR_ANY with port N, then any other socket
can be bound to a specific IP address with port N.  When packets
come in destined for port N, the delivery will be prioritized
to whichever socket has the more specific and matching binding.

So the kernel is fine.

Netperf just needs to be more careful in order to handle this kind of
case more cleanly.
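For what it's worth, here is a small standalone test program (my own sketch, not
netperf code, and it simply assumes port 12384 is free) that reproduces the
situation described in this thread: socket A bound to 0.0.0.0:12384, socket B bound
to 127.0.0.1:12384, and one datagram sent to 127.0.0.1:12384 from a third, unbound
socket.  It then reports which of the two sockets the kernel delivers the datagram to:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PORT 12384  /* assumed free; the same port used in the netperf runs */

/* Create a UDP socket bound to addr:PORT.  SO_REUSEADDR is set so that the
 * wildcard and the specific binding can coexist on the same port. */
static int make_bound_socket(in_addr_t addr)
{
        struct sockaddr_in sin;
        int one = 1;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = addr;
        sin.sin_port = htons(PORT);
        if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                perror("bind");
        return fd;
}

int main(void)
{
        int a = make_bound_socket(htonl(INADDR_ANY));      /* socket A */
        int b = make_bound_socket(inet_addr("127.0.0.1")); /* socket B */
        int s = socket(AF_INET, SOCK_DGRAM, 0);            /* sender   */
        struct sockaddr_in dst;
        char buf[4];

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_addr.s_addr = inet_addr("127.0.0.1");
        dst.sin_port = htons(PORT);
        sendto(s, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst));
        usleep(100000);  /* give loopback delivery a moment to complete */

        /* Per the rule above, the more specific binding (B) should win. */
        if (recv(b, buf, sizeof(buf), MSG_DONTWAIT) == 1)
                printf("delivered to socket B (127.0.0.1:%d)\n", PORT);
        else if (recv(a, buf, sizeof(buf), MSG_DONTWAIT) == 1)
                printf("delivered to socket A (0.0.0.0:%d)\n", PORT);
        else
                printf("datagram not seen on either socket\n");

        close(a);
        close(b);
        close(s);
        return 0;
}

The bind order (A before B) matches what was described; swapping it is an easy way
to check whether order makes any difference.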


Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Eric Dumazet

Zhang, Yanmin wrote:

On Tue, 2008-01-22 at 13:24 +0800, Zhang, Yanmin wrote:

On Mon, 2008-01-14 at 09:46 -0800, Rick Jones wrote:
*) netperf/netserver support CPU affinity within themselves with the 
global -T option to netperf.  Is the result with taskset much different? 
  The equivalent to the above would be to run netperf with:


./netperf -T 0,7 ..

I checked the source codes and didn't find this option.
I use netperf V2.3 (I found the number in the makefile).
Indeed, that version pre-dates the -T option.  If you weren't already 
chasing a regression I'd suggest an upgrade to 2.4.mumble.  Once you are 
at a point where changing another variable won't muddle things you may 
want to consider upgrading.


happy benchmarking,

Rick,

I found my UDP_RR testing is just loop in netperf instead of ping-pang between
netserver and netperf. Is it correct? TCP_RR is ok.

#./netserver
#./netperf -t UDP_RR -l 60 -H 127.0.0.1 -i 30,3 -I 99,5 -- -P 12384 -r 1,1

I digged into netperf and netserver.

netperf binds ip 0 and port 12384 to its own socket. netserver binds ip
127.0.0.1 and port 12384 to its own socket. Then, netperf calls connect to 
setup server
127.0.0.1 and port 12384. Then, netperf starts sends UDP packets, but all 
packets netperf
sends are just received by netperf itself. netserver doesn't receive any packet.

I think netperf binding should fail, or netperf shouldn't get the packet it 
sends out, because
netserver already binds port 12384.

I am wondering if UDP stack in kernel has a bug.


If:
- socket A is bound to 0.0.0.0:12384 and
- socket B is bound to 127.0.0.1:12384

then packets sent to 127.0.0.1:12384 should be queued for socket B.

If they are queued to socket A, as you believe is currently the case, then yes,
there is a bug in the kernel.




TCP_RR testing hasn't such issue.

-yanmin








Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Zhang, Yanmin
On Mon, 2008-01-21 at 22:22 -0800, David Miller wrote:
 From: Zhang, Yanmin [EMAIL PROTECTED]
 Date: Tue, 22 Jan 2008 14:07:19 +0800
 
  I am wondering if UDP stack in kernel has a bug.
 
 If one server binds to INADDR_ANY with port N, then any other socket
 can be bound to a specific IP address with port N.  When packets
 come in destined for port N, the delivery will be prioritized
 to whichever socket has the more specific and matching binding.
What does 'more specific' mean here?  I assume 127.0.0.1 should be
prioritized over 0.0.0.0, which means packets should be queued to the
127.0.0.1 socket first.

 
 So the kernel is fine.
But the kernel now queues the packets to the 0.0.0.0 socket.

 
 Netperf just needs to be more careful in order to handle this kind of
 case more cleanly.
It would be better if the kernel behaved more reasonably.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Zhang, Yanmin
On Tue, 2008-01-22 at 07:27 +0100, Eric Dumazet wrote:
 Zhang, Yanmin wrote:
  On Tue, 2008-01-22 at 13:24 +0800, Zhang, Yanmin wrote:
  On Mon, 2008-01-14 at 09:46 -0800, Rick Jones wrote:
  *) netperf/netserver support CPU affinity within themselves with the 
  global -T option to netperf.  Is the result with taskset much 
  different? 
The equivalent to the above would be to run netperf with:
 
  ./netperf -T 0,7 ..
  I checked the source codes and didn't find this option.
  I use netperf V2.3 (I found the number in the makefile).
  Indeed, that version pre-dates the -T option.  If you weren't already 
  chasing a regression I'd suggest an upgrade to 2.4.mumble.  Once you are 
  at a point where changing another variable won't muddle things you may 
  want to consider upgrading.
 
  happy benchmarking,
  Rick,
 
  I found my UDP_RR testing is just loop in netperf instead of ping-pang 
  between
  netserver and netperf. Is it correct? TCP_RR is ok.
 
  #./netserver
  #./netperf -t UDP_RR -l 60 -H 127.0.0.1 -i 30,3 -I 99,5 -- -P 12384 -r 1,1
  I digged into netperf and netserver.
  
  netperf binds ip 0 and port 12384 to its own socket. netserver binds ip
  127.0.0.1 and port 12384 to its own socket. Then, netperf calls connect to 
  setup server
  127.0.0.1 and port 12384. Then, netperf starts sends UDP packets, but all 
  packets netperf
  sends are just received by netperf itself. netserver doesn't receive any 
  packet.
  
  I think netperf binding should fail, or netperf shouldn't get the packet it 
  sends out, because
  netserver already binds port 12384.
  
  I am wondering if UDP stack in kernel has a bug.
 
 If :
 - socket A is bound to 0.0.0.0:12384 and
 - socket B is bound to 127.0.0.1:12384
 
 Then packets sent to 127.0.0.1:12384 should be queued for socket B
 
 If they are queued to socket A as you believe it is currently done, then yes 
 there is a bug in kernel.
I double-checked it and they are queued to socket A.  If I define a different
local port for netperf, packets will be queued to socket B.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread Eric Dumazet

Zhang, Yanmin wrote:

On Mon, 2008-01-21 at 22:22 -0800, David Miller wrote:

From: Zhang, Yanmin [EMAIL PROTECTED]
Date: Tue, 22 Jan 2008 14:07:19 +0800


I am wondering if UDP stack in kernel has a bug.

If one server binds to INADDR_ANY with port N, then any other socket
can be bound to a specific IP address with port N.  When packets
come in destined for port N, the delivery will be prioritized
to whichever socket has the more specific and matching binding.

What does 'more specific' mean here? I assume 127.0.0.1 should be
prioritized before 0.0.0.0 which means packets should be queued to
127.0.0.1 firstly.


vi +278 net/ipv4/udp.c

int score = (sk->sk_family == PF_INET ? 1 : 0);
if (inet->rcv_saddr) {
        if (inet->rcv_saddr != daddr)
                continue;
        score += 2;
}
if (inet->daddr) {
        if (inet->daddr != saddr)
                continue;
        score += 2;
}
if (inet->dport) {
        if (inet->dport != sport)
                continue;
        score += 2;
}
if (sk->sk_bound_dev_if) {
        if (sk->sk_bound_dev_if != dif)
                continue;
        score += 2;
}

So in your case, the socket bound to 127.0.0.1 should have a better score (+2)
than the other one, unless the other one reached an equal score because of another
match (rcv_saddr set, or bound to an interface).
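To connect this scoring to what was actually observed, here is a small user-space
model of the loop above (a simplification of mine, not kernel code).  It assumes,
as Yanmin reports in his later follow-up, that netperf's wildcard-bound socket has
also called connect() to 127.0.0.1:12384, which fills in its daddr/dport and earns
it extra points for the request datagrams netperf itself sends:

#include <stdio.h>

/* A toy model of the kernel scoring loop quoted above (not kernel code).
 * Addresses are plain host-order ints for readability; 0 means "unset",
 * mirroring a wildcard bind or an unconnected UDP socket. */
struct toy_sock {
        const char   *name;
        unsigned int  rcv_saddr;     /* local address it is bound to, 0 = wildcard */
        unsigned int  daddr, dport;  /* set only if the socket called connect()    */
        int           bound_dev_if;  /* non-zero if bound to a device              */
};

static int score(const struct toy_sock *s, unsigned int saddr, unsigned int sport,
                 unsigned int daddr, int dif)
{
        int sc = 1;  /* PF_INET baseline, as in the kernel loop above */

        if (s->rcv_saddr) {
                if (s->rcv_saddr != daddr)
                        return -1;  /* the kernel would 'continue' here */
                sc += 2;
        }
        if (s->daddr) {
                if (s->daddr != saddr)
                        return -1;
                sc += 2;
        }
        if (s->dport) {
                if (s->dport != sport)
                        return -1;
                sc += 2;
        }
        if (s->bound_dev_if) {
                if (s->bound_dev_if != dif)
                        return -1;
                sc += 2;
        }
        return sc;
}

int main(void)
{
        unsigned int lo = 0x7f000001;  /* 127.0.0.1 */

        /* Socket A: netperf, bound to 0.0.0.0:12384 and then connect()ed to
         * 127.0.0.1:12384.  Socket B: bound to 127.0.0.1:12384, unconnected. */
        struct toy_sock a = { "A (wildcard, connected)",    0,  lo, 12384, 0 };
        struct toy_sock b = { "B (127.0.0.1, unconnected)", lo, 0,  0,     0 };

        /* The datagram netperf just sent, as seen by the receive path:
         * source 127.0.0.1:12384, destination 127.0.0.1:12384. */
        printf("%s score %d\n", a.name, score(&a, lo, 12384, lo, 0));
        printf("%s score %d\n", b.name, score(&b, lo, 12384, lo, 0));
        return 0;
}

With those assumptions the connected wildcard socket scores 5 and the 127.0.0.1
socket scores 3, so the kernel's choice matches the looping behavior seen in the
UDP_RR run.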








Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-21 Thread David Miller
From: Zhang, Yanmin [EMAIL PROTECTED]
Date: Tue, 22 Jan 2008 14:52:32 +0800

 I double-checked it and they are queued to socket A. If I define a
 different local port for netperf, packets will be queued to socket
 B.

This does not prove the kernel is buggy.

If netperf is binding to devices, that could make the kernel consider
the 0.0.0.0-bound socket equally preferable to the 127.0.0.1-bound
one.  When the preference is equal, the first socket in the list is
chosen.

The algorithm is in net/ipv4/udp.c:__udp4_lib_lookup(), you
can look for yourself.  It uses a scoring system to decide
which socket to match.  Binding to a specific device gives
the score two points, so does binding to a specific local
address.



Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-15 Thread Zhang, Yanmin
On Mon, 2008-01-14 at 21:53 +1100, Herbert Xu wrote:
 On Mon, Jan 14, 2008 at 08:44:40AM +, Ilpo Järvinen wrote:
 
I tried to use bisect to locate the bad patch between 2.6.22 and 
2.6.23-rc1,
but the bisected kernel wasn't stable and went crazy.
  
  TCP work between that is very much non-existing.
 
 Make sure you haven't switched between SLAB/SLUB while testing this.
I can make sure of that.  In addition, I tried both SLAB and SLUB and made sure
the regression is still there with CONFIG_SLAB=y.

Thanks,
-yanmin



Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-15 Thread Zhang, Yanmin
On Wed, 2008-01-16 at 08:34 +0800, Zhang, Yanmin wrote:
 On Mon, 2008-01-14 at 21:53 +1100, Herbert Xu wrote:
  On Mon, Jan 14, 2008 at 08:44:40AM +, Ilpo Järvinen wrote:
  
 I tried to use bisect to locate the bad patch between 2.6.22 and 
 2.6.23-rc1,
 but the bisected kernel wasn't stable and went crazy.
   
   TCP work between that is very much non-existing.
  
  Make sure you haven't switched between SLAB/SLUB while testing this.
 I can make sure. In addition, I tried both SLAB and SLUB and make sure the 
 regression is still there if CONFIG_SLAB=y.
I retried the bisect between 2.6.22 and 2.6.23-rc1.  This time I enabled
CONFIG_SLAB=y, deleted the warmup procedure in the testing scripts, and bound the
2 processes to the same logical processor.  The regression is about 20%, which is
larger than when the 2 processes are bound to different cores.

The new bisect reported that the CFS core patch causes it.  The results of every
step look stable.

dd41f596cda0d7d6e4a8b139ffdfabcefdd46528 is first bad commit
commit dd41f596cda0d7d6e4a8b139ffdfabcefdd46528
Author: Ingo Molnar [EMAIL PROTECTED]
Date:   Mon Jul 9 18:51:59 2007 +0200

sched: cfs core code

apply the CFS core code.

this change switches over the scheduler core to CFS's modular
design and makes use of kernel/sched_fair/rt/idletask.c to implement
Linux's scheduling policies.

thanks to Andrew Morton and Thomas Gleixner for lots of detailed review
feedback and for fixlets.

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
Signed-off-by: Mike Galbraith [EMAIL PROTECTED]
Signed-off-by: Dmitry Adamushko [EMAIL PROTECTED]
Signed-off-by: Srivatsa Vaddagiri [EMAIL PROTECTED]


-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-14 Thread Ilpo Järvinen
On Fri, 11 Jan 2008, Zhang, Yanmin wrote:

 On Wed, 2008-01-09 at 17:35 +0800, Zhang, Yanmin wrote: 
  The regression is:
  1)stoakley with 2 qual-core processors: 11%;
  2)Tulsa with 4 dual-core(+hyperThread) processors:13%;
 I have new update on this issue and also cc to netdev maillist.
 Thank David Miller for pointing me the netdev maillist.
 
  
  The test command is:
  #sudo taskset -c 7 ./netserver
  #sudo taskset -c 0 ./netperf -t TCP_RR -l 60 -H 127.0.0.1 -i 50,3 -I 99,5 
  -- -r 1,1
  
  As a matter of fact, 2.6.23 has about 6% regression and 2.6.24-rc's
  regression is between 16%~11%.
  
  I tried to use bisect to locate the bad patch between 2.6.22 and 2.6.23-rc1,
  but the bisected kernel wasn't stable and went crazy.

TCP work between that is very much non-existing.

Using git-reset to select a nearby merge point instead of the default
commit where the bisection lands might help in case the bisected kernel
breaks.

Also, limiting the bisection to a subsystem might reduce the probability of
brokenness (it might at least be able to narrow it down quite a lot), e.g.

git bisect start net/


-- 
 i.


Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-14 Thread Ilpo Järvinen
On Mon, 14 Jan 2008, Ilpo Järvinen wrote:

 On Fri, 11 Jan 2008, Zhang, Yanmin wrote:
 
  On Wed, 2008-01-09 at 17:35 +0800, Zhang, Yanmin wrote: 
   
   As a matter of fact, 2.6.23 has about 6% regression and 2.6.24-rc's
   regression is between 16%~11%.
   
   I tried to use bisect to locate the bad patch between 2.6.22 and 
   2.6.23-rc1,
   but the bisected kernel wasn't stable and went crazy.
 
 TCP work between that is very much non-existing.

I _really_ meant 2.6.22 - 2.6.23-rc1, not 2.6.24-rc1 in case you had a 
typo there which is not that uncommon while typing kernel versions... :-)

-- 
 i.

Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-14 Thread Zhang, Yanmin
On Mon, 2008-01-14 at 11:21 +0200, Ilpo Järvinen wrote:
 On Mon, 14 Jan 2008, Ilpo Järvinen wrote:
 
  On Fri, 11 Jan 2008, Zhang, Yanmin wrote:
  
   On Wed, 2008-01-09 at 17:35 +0800, Zhang, Yanmin wrote: 

As a matter of fact, 2.6.23 has about 6% regression and 2.6.24-rc's
regression is between 16%~11%.

I tried to use bisect to locate the bad patch between 2.6.22 and 
2.6.23-rc1,
but the bisected kernel wasn't stable and went crazy.
  
  TCP work between that is very much non-existing.
 
 I _really_ meant 2.6.22 - 2.6.23-rc1, not 2.6.24-rc1 in case you had a 
 typo
I did bisect 2.6.22 - 2.6.23-rc1. I also tested it on the latest 2.6.24-rc.

  there which is not that uncommon while typing kernel versions... :-)
Thanks.  I will retry the bisect and bind the server/client to the same logical
processor; I hope the result will be stable this time.

Manual testing showed there is still the same or more regression if I bind the
processes to the same CPU.


Thanks a lot!

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-14 Thread Herbert Xu
On Mon, Jan 14, 2008 at 08:44:40AM +, Ilpo Järvinen wrote:

   I tried to use bisect to locate the bad patch between 2.6.22 and 
   2.6.23-rc1,
   but the bisected kernel wasn't stable and went crazy.
 
 TCP work between that is very much non-existing.

Make sure you haven't switched between SLAB/SLUB while testing this.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmVHI~} [EMAIL PROTECTED]
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-14 Thread Rick Jones
*) netperf/netserver support CPU affinity within themselves with the 
global -T option to netperf.  Is the result with taskset much different? 
  The equivalent to the above would be to run netperf with:


./netperf -T 0,7 ..


I checked the source codes and didn't find this option.
I use netperf V2.3 (I found the number in the makefile).


Indeed, that version pre-dates the -T option.  If you weren't already 
chasing a regression I'd suggest an upgrade to 2.4.mumble.  Once you are 
at a point where changing another variable won't muddle things you may 
want to consider upgrading.


happy benchmarking,

rick jones


Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-13 Thread Zhang, Yanmin
On Fri, 2008-01-11 at 09:56 -0800, Rick Jones wrote:
 The test command is:
 #sudo taskset -c 7 ./netserver
 #sudo taskset -c 0 ./netperf -t TCP_RR -l 60 -H 127.0.0.1 -i 50,3 -I 99,5 
 -- -r 1,1
 
 A couple of comments/questions on the command lines:
Thanks for your kind comments.

 
 *) netperf/netserver support CPU affinity within themselves with the 
 global -T option to netperf.  Is the result with taskset much different? 
The equivalent to the above would be to run netperf with:
 
 ./netperf -T 0,7 ..
I checked the source code and didn't find this option.
I use netperf V2.3 (I found the version number in the makefile).

 .
 
 The one possibly salient difference between the two is that when done 
 within netperf, the initial process creation will take place wherever 
 the scheduler wants it.
 
 *) The -i option to set the confidence iteration count will silently cap 
 the max at 30.
Indeed, you are right.

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-11 Thread Zhang, Yanmin
On Wed, 2008-01-09 at 17:35 +0800, Zhang, Yanmin wrote: 
 The regression is:
 1) Stoakley with 2 quad-core processors: 11%;
 2) Tulsa with 4 dual-core (+HyperThreading) processors: 13%;
I have a new update on this issue and have also CCed the netdev mailing list.
Thanks to David Miller for pointing me to the netdev mailing list.

 
 The test command is:
 #sudo taskset -c 7 ./netserver
 #sudo taskset -c 0 ./netperf -t TCP_RR -l 60 -H 127.0.0.1 -i 50,3 -I 99,5 -- 
 -r 1,1
 
 As a matter of fact, 2.6.23 has about 6% regression and 2.6.24-rc's
 regression is between 16%~11%.
 
 I tried to use bisect to locate the bad patch between 2.6.22 and 2.6.23-rc1,
 but the bisected kernel wasn't stable and went crazy.
 
 I tried both CONFIG_SLUB=y and CONFIG_SLAB=y to make sure SLUB isn't the
 culprit.
 
 The oprofile data of CONFIG_SLAB=y. Top cpu utilizations are:
 1) 2.6.22
 2067379   9.4888  vmlinux  schedule
 1873604   8.5994  vmlinux  mwait_idle
 1568131   7.1974  vmlinux  resched_task
 1066976   4.8972  vmlinux  tcp_v4_rcv
 986641    4.5285  vmlinux  tcp_rcv_established
 979518    4.4958  vmlinux  find_busiest_group
 767069    3.5207  vmlinux  sock_def_readable
 736808    3.3818  vmlinux  tcp_sendmsg
 595889    2.7350  vmlinux  task_rq_lock
 557193    2.5574  vmlinux  tcp_ack
 470570    2.1598  vmlinux  __mod_timer
 392220    1.8002  vmlinux  __alloc_skb
 358106    1.6436  vmlinux  skb_release_data
 313372    1.4383  vmlinux  skb_clone

 2) 2.6.24-rc7
 2668426  12.4497  vmlinux  vmlinux  schedule
 955698    4.4589  vmlinux  vmlinux  skb_release_data
 836311    3.9018  vmlinux  vmlinux  tcp_v4_rcv
 762398    3.5570  vmlinux  vmlinux  skb_release_all
 728907    3.4007  vmlinux  vmlinux  task_rq_lock
 705037    3.2894  vmlinux  vmlinux  __wake_up
 694206    3.2388  vmlinux  vmlinux  __mod_timer
 617616    2.8815  vmlinux  vmlinux  mwait_idle

 It looks like tcp in 2.6.22 sends more packets, but frees far less skb than
 2.6.24-rc6.  tcp_rcv_established in 2.6.22 is highlighted on cpu utilization.
I instrumented the kernel to capture the function call counts.
1) 2.6.22
skb_release_data: 50148649
tcp_ack:          25062858
tcp_transmit_skb: 25063150
tcp_v4_rcv:       25063279

2) 2.6.24-rc6
skb_release_data: 21429692
tcp_ack:          10707710
tcp_transmit_skb: 10707866
tcp_v4_rcv:       10707959

The data doesn't show that 2.6.22 sends more packets while freeing far fewer skbs
than 2.6.24-rc6.

The data does show that the skb_release_data count on 2.6.22 is more than double
that on 2.6.24-rc6, yet the netperf result showed only about a 10% regression.

As each packet carries only 1 byte, I suspect 2.6.24-rc6 tries to merge packets
after waiting for some latency.  2.6.22 might not have this wait, or its latency
is very small, so 2.6.22 sends the packets almost immediately.  I will check the
source code later.
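One quick experiment to rule sender-side coalescing (Nagle) in or out, not
something discussed in this thread but easy to try, is to disable Nagle on the
data socket and see whether the gap between 2.6.22 and 2.6.24-rc changes; a
minimal sketch of the setsockopt() call (if I remember right, netperf's TCP tests
also have a test-specific option for this):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1, got = 0;
        socklen_t len = sizeof(got);

        /* Disable Nagle so 1-byte requests go out immediately instead of
         * possibly being held back while waiting for the previous ACK. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
                perror("setsockopt(TCP_NODELAY)");

        getsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &got, &len);
        printf("TCP_NODELAY is now %d\n", got);

        close(fd);
        return 0;
}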

-yanmin




Re: Netperf TCP_RR(loopback) 10% regression in 2.6.24-rc6, comparing with 2.6.22

2008-01-11 Thread Rick Jones

The test command is:
#sudo taskset -c 7 ./netserver
#sudo taskset -c 0 ./netperf -t TCP_RR -l 60 -H 127.0.0.1 -i 50,3 -I 99,5 -- -r 
1,1


A couple of comments/questions on the command lines:

*) netperf/netserver support CPU affinity within themselves with the 
global -T option to netperf.  Is the result with taskset much different? 
  The equivalent to the above would be to run netperf with:


./netperf -T 0,7 ...

The one possibly salient difference between the two is that when done 
within netperf, the initial process creation will take place wherever 
the scheduler wants it.


*) The -i option to set the confidence iteration count will silently cap 
the max at 30.


happy benchmarking,

rick jones