IIRC, iperf has CPU issues when using UDP AND the bandwidth setting.  I could 
run TCP at 100 Mb/s and barely hit 20% CPU, but UDP limited to even 1 Mb/s would 
use 100% CPU.  Something about the wait loop and the OS scheduler and whatnot.  At 
one time I tweaked the source to use some smaller unit of wait time (nanosleep?), 
I forget exactly...  I also messed with the scheduler on FreeBSD.  That was a 
long time ago...  I had the problem with the newer binaries as well.  I 
typically use the client on winblows, and it sucks up CPU.
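
A back-of-the-envelope sketch of why bandwidth-limited UDP can peg a CPU (my own illustration, not from these mails; it assumes iperf's default 1470-byte UDP payload and a coarse 10 ms scheduler tick, and a naive wait implementation can of course burn CPU even when sleeping would be feasible):

```python
# Why UDP pacing can turn into a busy-wait: the inter-datagram gap
# needed to hit the target bandwidth is often shorter than one timer
# tick, so a plain sleep() overshoots and the sender spins instead.
# Assumptions: 1470-byte datagrams (iperf's UDP default), HZ=100 tick.

PAYLOAD_BITS = 1470 * 8          # default iperf UDP datagram payload
TICK_MS = 10.0                   # coarse timer tick on older kernels

def gap_ms(bandwidth_bps):
    """Inter-datagram gap in ms required to hit the target bandwidth."""
    return PAYLOAD_BITS / bandwidth_bps * 1000.0

for bw in (1e6, 10e6, 100e6):
    gap = gap_ms(bw)
    mode = "sleep ok" if gap >= TICK_MS else "busy-wait (100% CPU)"
    print(f"{bw/1e6:6.0f} Mb/s -> gap {gap:8.3f} ms -> {mode}")
```

This is why finer-grained sleeps (nanosleep, or a high-resolution timer) make such a difference to the sender's CPU load.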



________________________________
From: Vasanthy Kolluri (vkolluri) [mailto:[email protected]]
Sent: Wednesday, March 23, 2011 5:17 PM
To: Metod Kozelj
Cc: [email protected]
Subject: Re: [Iperf-users] Question reg iperf server reports

I'm using the latest iperf version, 2.0.5, and I'm not running out of CPU on the 
server. My iperf server is only consuming about 25% of CPU.

-Vasanthy
From: Metod Kozelj [mailto:[email protected]]
Sent: Wednesday, March 23, 2011 12:41 AM
To: Vasanthy Kolluri (vkolluri)
Cc: [email protected]
Subject: Re: [Iperf-users] Question reg iperf server reports

Howdy!

The other thing when testing with UDP is high CPU load with not-the-latest 
iperf binaries. Achieved throughput is then CPU-dependent. Example: I've got a 
PII Linux box with a 100 Mbps NIC acting as iperf server. When using iperf 2.0.2 it 
used to use 100% of CPU when testing UDP, and the maximum achievable throughput was 
slightly above 20 Mbps. Two or more parallel iperf instances did not improve 
cumulative performance as they were competing for the same CPU resource.
When I installed iperf 2.0.5, CPU load became negligible and the maximum achievable 
throughput reached NIC capacity.
You probably have a slightly faster CPU in your test machines, which would 
impose a higher throughput limit if you're using somewhat outdated iperf 
executables. Check the iperf version you're running as well as the CPU load while 
doing UDP tests.

Peace!

  Mkx



-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

________________________________

BOFH excuse #267: The UPS is on strike.


On 22/03/11 at 18:46, Vasanthy Kolluri (vkolluri) wrote:
Hi

Thanks for your reply.

But I have a 10G link between client and server; there shouldn't be a 1G 
bottleneck in between. I could actually run a 9.3G iperf TCP session between 
the same client and server setup. So I'm not sure what's causing the issue 
here.

-Vasanthy


From: Metod Kozelj [mailto:[email protected]]
Sent: Tuesday, March 22, 2011 2:34 AM
To: Vasanthy Kolluri (vkolluri)
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [Iperf-users] Question reg iperf server reports

Howdy!


On 21/03/11 at 19:49, Vasanthy Kolluri (vkolluri) wrote:
I have two iperf clients sending UDP packets to an iperf server. Once the test 
is completed, the server-side results for only one of the streams are reported, 
both on the client and server.

Iperf client:

# iperf -c 10.0.50.100 -B 10.0.50.1 -u -b1G -t100 &
# iperf -c 10.0.50.100 -B 10.0.50.2 -u -b1G -t100 &

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-100.0 sec  9.38 GBytes   805 Mbits/sec
[  3] Sent 6848733 datagrams
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-100.0 sec  9.36 GBytes   804 Mbits/sec
[  3] Sent 6837468 datagrams
[  3] Server Report:
[  3]  0.0-100.0 sec  5.02 GBytes   431 Mbits/sec   0.010 ms 3168363/6837467 
(46%)
[  3]  0.0-100.0 sec  18 datagrams received out-of-order
[  3] WARNING: did not receive ack of last datagram after 10 tries.
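
As a sanity check on those numbers (my own arithmetic, not from the original mails): the server report is internally consistent, since the loss count matches the 46% figure and the delivered bytes over the 100 s test give the 431 Mbits/sec, given that iperf counts GBytes as 2^30 bytes and Mbits as 10^6 bits:

```python
# Recompute the server-report figures from the iperf output above.
# iperf reports GBytes as 2**30 bytes and Mbits/sec as 10**6 bits/s.

lost, sent = 3168363, 6837467
loss_pct = 100.0 * lost / sent
print(f"loss: {loss_pct:.0f}%")              # matches the reported 46%

delivered_bytes = 5.02 * 2**30               # "5.02 GBytes" received
mbits = delivered_bytes * 8 / 100 / 1e6      # over the 100 s test
print(f"throughput: {mbits:.0f} Mbits/sec")  # matches the reported 431
```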

When doing UDP tests with a requested bandwidth significantly higher than the 
bottleneck permits, the last packets can get either delayed too much or dropped 
(most probably the former). Either way, the server doesn't send its report to the 
client quickly enough for the client to catch it before it exits. This mostly 
doesn't happen when the requested bandwidth is within link capacity, or when 
testing TCP (with its flow control).

In your case the limiting factor can well be the first leg (the NIC of the client 
machine), as you're requesting two times 1 Gbps. The achieved throughput seems to 
indicate you've really got some kind of 2 Gbps link out of the client machine 
(achieved throughput is 805+804 = 1609 Mbps). Still, after 100 seconds the TX 
buffer of your IP stack will be full, and that causes delay; how much depends on 
the size of the buffer. There is some 1 Gbps bottleneck somewhere between client 
and server, as the server seems to get slightly less than half of the packets. 
Delay and dropped packets mean the last packets (bearing the throughput report) 
are not acked by the server back to the client. It seems the last packets actually 
got dropped, as the server did not notice that the client was trying to close the 
connection.
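
A rough sanity check of that 1 Gbps-bottleneck reading (my own estimate, not from the mails): if two ~804 Mbps streams share a 1 Gbps link fairly, each stream should lose about 38%, so the observed 46% is in the right ballpark, with the extra loss plausibly coming from queue drops on top:

```python
# Fair-share loss estimate for two UDP streams crossing a hypothetical
# shared 1 Gbps bottleneck (an assumption, based on the mail above).

offered_each = 804.0          # Mbits/s actually sent per stream
bottleneck = 1000.0           # Mbits/s, the suspected shared bottleneck
fair_share = bottleneck / 2   # what each stream can get through

expected_loss = 100.0 * (offered_each - fair_share) / offered_each
print(f"expected loss per stream: {expected_loss:.0f}%")  # ~38%
# The server reported 46% loss, somewhat worse than this ideal
# fair-share figure -- consistent with additional buffer overruns.
```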

If you want to get at least some statistics on the server side, you could set the 
server up for periodic reporting using -i <interval> ... it won't help get the 
report through to the client, though.
--

Peace!

  Mkx










_______________________________________________
Iperf-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/iperf-users
