Howdy!
The other thing to watch for when testing with UDP is high CPU load with
not-the-latest iperf binaries; achieved throughput then becomes
CPU-dependent. Example: I've got a PII Linux box with a 100 Mbps NIC acting
as iperf server. With iperf 2.0.2 it used 100% of the CPU when testing
UDP, and the maximum achievable throughput was slightly above 20 Mbps. Two
or more parallel iperf instances did not improve cumulative performance
as they were competing for the same CPU resource.
When I installed iperf 2.0.5, CPU load became negligible and the maximum
achievable throughput reached NIC capacity.
You probably have somewhat faster CPUs in your test machines, which
would impose a higher throughput limit if you're using somewhat outdated
iperf executables. Check the iperf version you're running as well as the
CPU load while doing UDP tests.
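A quick way to check both things on each endpoint is sketched below. The guard around iperf is only so the snippet also runs on hosts where it isn't installed, and the server address and 100M rate in the comments are placeholders, not values from this thread:

```shell
# Report which iperf binary/version this host would use for the test.
# (UDP pacing in iperf 2.0.2 could peg a slow CPU; 2.0.5 behaves far better.)
if command -v iperf >/dev/null 2>&1; then
    iperf -v 2>&1 | head -n 1
else
    echo "iperf not installed on this host"
fi

# During an actual UDP run, watch CPU usage of the iperf process, e.g.:
#   server:  iperf -s -u
#   client:  iperf -c <server-addr> -u -b 100M -t 30
#   either:  top            # iperf near 100% CPU => results are CPU-bound
```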
Peace!
Mkx
-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
------------------------------------------------------------------------
BOFH excuse #267:
The UPS is on strike.
Vasanthy Kolluri (vkolluri) wrote on 22/03/11 18:46:
Hi
Thanks for your reply.
But I have a 10G link between client and server; there shouldn't be a
1G bottleneck in between. I could actually run a 9.3G iperf TCP
session between the same client and server setup, so I'm not sure
what's causing the issue here.
-Vasanthy
From: Metod Kozelj [mailto:[email protected]]
Sent: Tuesday, March 22, 2011 2:34 AM
To: Vasanthy Kolluri (vkolluri)
Cc: [email protected]
Subject: Re: [Iperf-users] Question reg iperf server reports
Howdy!
Vasanthy Kolluri (vkolluri) wrote on 21/03/11 19:49:
I have two iperf clients sending UDP packets to a iperf server. Once
the test is completed, the Server side results for only one of the
streams is reported both on the client and server.
Iperf client:
# iperf -c 10.0.50.100 -B 10.0.50.1 -u -b1G -t100 &
# iperf -c 10.0.50.100 -B 10.0.50.2 -u -b1G -t100 &

[ ID] Interval       Transfer     Bandwidth
[  3] 0.0-100.0 sec  9.38 GBytes  805 Mbits/sec
[  3] Sent 6848733 datagrams
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0-100.0 sec  9.36 GBytes  804 Mbits/sec
[  3] Sent 6837468 datagrams
[  3] Server Report:
[  3] 0.0-100.0 sec  5.02 GBytes  431 Mbits/sec  0.010 ms  3168363/6837467 (46%)
[  3] 0.0-100.0 sec  18 datagrams received out-of-order
[  3] WARNING: did not receive ack of last datagram after 10 tries.
When doing UDP tests with a requested bandwidth significantly higher
than the bottleneck permits, the last packets can get either delayed by
too much or dropped (most probably the former). Either way, the server
doesn't send its report back to the client quickly enough for the client
to catch it before it exits. This mostly doesn't happen when the requested
bandwidth is within link capacity, or when testing TCP (with its flow
control).
In your case the limiting factor may well be the first leg (the NIC of
the client machine), as you're requesting two times 1 Gbps. The achieved
throughput seems to indicate you've really got some kind of 2 Gbps link
out of the client machine (achieved throughput is 805 + 804 = 1609 Mbps).
Still, after 100 seconds the TX buffer of your IP stack will be full,
and that causes delay; how much depends on the size of the buffer. There
is some 1 Gbps bottleneck somewhere between client and server, as the
server seems to get slightly less than half of the packets. Delay and
dropped packets mean the last packets (bearing the throughput report
request) are not acked by the server to the client. It seems the last
packets actually got dropped, as the server did not notice that the
client was trying to close the connection.
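As a sanity check, the loss figures in the server report above can be turned back into a delivered rate; the delivered fraction of the 804 Mbit/s stream works out to the 431 Mbit/s the server reported:

```shell
# Figures taken from the server report for stream [3].
sent=6837467
lost=3168363
awk -v s="$sent" -v l="$lost" 'BEGIN {
    recv = s - l                      # datagrams that actually arrived
    printf "received: %d datagrams (%.0f%% loss)\n", recv, 100 * l / s
    # the client pushed 804 Mbit/s; the surviving share of that:
    printf "delivered: %.0f Mbit/s\n", 804 * recv / s
}'
```

This prints a delivered rate of 431 Mbit/s; two such streams through one choke point add up to roughly 862 Mbit/s, consistent with a single 1 Gbps bottleneck fed at about 1.6 Gbps.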
If you want to get at least some statistics on the server side, you
could set the server up for periodic reporting using -i <interval> ...
it won't help to get the report through to the client, though.
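For reference, a server invocation with periodic reporting might look like the one below; the 10-second interval is just an example, and the snippet only assembles and prints the command line rather than starting a server:

```shell
# Example server invocation: -i 10 prints interval stats every 10 seconds,
# so per-interval results survive even if the final summary never makes it
# back to the client. (The interval length here is an arbitrary example.)
server_cmd="iperf -s -u -i 10"
echo "$server_cmd"
```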
--
Peace!
Mkx
_______________________________________________
Iperf-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/iperf-users