Hi,
I am trying to use iperf 2.0.5 to measure TCP throughput between two Linux
systems connected back to back.
The topology I am using is below:
Linux A eth1 (192.138.14.1) ---- eth4 (192.138.14.4) Linux B
Linux B eth2 (192.138.4.3)  ---- eth3 (192.138.4.2)  Linux C
All links between the Linux systems are 1 Gbps links, and all interfaces are
configured for 1000 Mbps.
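For completeness, the negotiated speed and duplex on each interface can be
confirmed with ethtool (interface names as in the topology above); every link
should report 1000Mb/s and full duplex:

Linux B# ethtool eth4 | grep -i -E 'speed|duplex'
Linux B# ethtool eth2 | grep -i -E 'speed|duplex'
Linux C# ethtool eth3 | grep -i -E 'speed|duplex'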
Throughput measured from B to C gives:
Linux C# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.138.4.2 port 5001 connected with 192.138.4.3 port 60918
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  1.11 GBytes   941 Mbits/sec

Linux B# iperf -c 192.138.4.2
------------------------------------------------------------
Client connecting to 192.138.4.2, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 192.138.4.3 port 60918 connected with 192.138.4.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.11 GBytes   952 Mbits/sec
When I send from C to B,
Linux B# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.138.4.3 port 5001 connected with 192.138.4.2 port 38576
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   970 MBytes   813 Mbits/sec

Linux C# iperf -c 192.138.4.3
------------------------------------------------------------
Client connecting to 192.138.4.3, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 192.138.4.2 port 38576 connected with 192.138.4.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   970 MBytes   814 Mbits/sec
Why am I seeing a marked difference in throughput depending on the direction
of the test between two back-to-back connected systems?
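One way to make the two directions more directly comparable would be to pin
the socket buffer explicitly with -w and use the iperf 2 tradeoff option -r,
which runs the reverse test from the same invocation; the 416K value below is
only an illustrative size, not something I have tuned:

Linux C# iperf -s -w 416K
Linux B# iperf -c 192.138.4.2 -w 416K -t 30 -i 5 -r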
My sysctl parameters are the same on both Linux systems:
Linux B# sudo vim /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Enables/Disables TCP SACK (default 1)
net.ipv4.tcp_sack = 1

# Window Scaling
net.ipv4.tcp_window_scaling = 1

# Maximum Receive window size
#net.core.rmem_max = 16777216

# Receive Window Size Min Avg Max
#net.ipv4.tcp_rmem = 4096 87380 16777216

# Send Window Size Min Avg Max
#net.ipv4.tcp_wmem = 4096 16384 16777216

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
All systems are running CentOS 6.4 with the same kernel,
2.6.32-358.el6.x86_64, so they should all have the same default buffer sizes
and the same tunable TCP parameters.
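Since the rmem/wmem lines in /etc/sysctl.conf are commented out, the
effective values come from the kernel defaults; to double-check that
assumption, the runtime values can be compared side by side on B and C:

Linux B# sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
Linux C# sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem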
On checking the network stats with netstat -s, I found that fewer TCP
segments were sent in the slower direction: Linux B sent 21047 segments
(B to C), while Linux C sent 16132 segments (C to B). Why is this? Is there
something apart from link speed, Linux interface configuration, and tunable
TCP parameters that is affecting the throughput values?
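For what it is worth, the other counters that seem relevant here are the TCP
retransmission counts from netstat -s and the per-interface error/drop
statistics from ethtool -S (the exact counter names reported by ethtool -S
depend on the NIC driver):

Linux B# netstat -s | grep -i retrans
Linux C# netstat -s | grep -i retrans
Linux C# ethtool -S eth3 | grep -i -E 'err|drop'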