Oops, the only "handshake" is the ARP, and there is no negotiation of the port
number. The 'connected' line is from the server, which prints out the ports taken
from the client's packet (assuming it is listening on the client's destination port).
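
For reference, a minimal sketch (plain Python sockets, not iperf's code) of how a UDP
receiver only learns the client's address and port from the first datagram it receives,
which is all the server-side 'connected with' line reflects:

import socket

# Bind to the default iperf UDP port; 5001 mirrors the examples in this thread.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5001))

# There is no negotiation: the peer's port is simply read off the first packet.
data, peer = sock.recvfrom(2048)
print("local port 5001 connected with %s port %d" % peer)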

Can you issue an iperf -s -i 0.5 -u and show the results from the server?

Bob
From: Metod Kozelj [mailto:[email protected]]
Sent: Sunday, August 24, 2014 11:45 PM
To: Bob (Robert) McMahon; Martin T
Cc: [email protected]
Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth 
results if Iperf server is unreachable

Hi!

I just checked (using Wireshark) what UDP looks like when run against a server
that's not listening. All the packets, including the first one, look exactly the
same, except for the sequence number, which starts at 0. In other words, there is
no special handshake. The 'connected with server port xxx' line is thus only the
client's fiction, mimicking the similar message printed for TCP testing (where it
is real).

After a while (anything between the next packet and a few seconds) the client did
receive an ICMP message type 3 code 3 (destination unreachable, port unreachable)
from the server. It happens sporadically and is obviously ignored by iperf; it
seems possible to me that the iperf application is simply not aware of these
messages.
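
For what it's worth, Linux does deliver that ICMP type 3 code 3 to the application
when the UDP socket is connect()ed. A minimal sketch of how a sender could notice it
(an illustration of the OS behaviour, assuming Linux and nothing listening on the
target port, not what iperf 2.0.5 actually does):

import errno
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("127.0.0.1", 5001))   # assume nothing is listening on this port

sock.send(b"\x00" * 1470)           # first datagram goes out normally
time.sleep(0.5)                     # give the ICMP port-unreachable time to arrive
try:
    sock.send(b"\x00" * 1470)       # the queued ICMP error is reported here
except OSError as e:
    if e.errno == errno.ECONNREFUSED:
        print("peer answered with ICMP destination/port unreachable")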

BR,
  Metod


Bob (Robert) McMahon wrote on 22/08/14 19:03:

Do you have the server output?



If the client can't reach the server then the following should not happen:



[  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001



UDP does use a handshake at the start of traffic.  That's how the ports are 
determined.  The only type of traffic where a client sends without initial 
reachability to the server is multicast.



Iperf 2.0.5 has known performance problems and on many machines tops out at
~800 Mbit/s.  This is addressed in iperf2 version 2.0.6 or greater.



http://sourceforge.net/projects/iperf2/?source=directory



My initial guess is that you aren't connecting to what you think you are.  Two
reasons:

o  If the server is not reachable, there should be no 'connected' message

o  The throughput is too high



Bob

-----Original Message-----

From: Martin T [mailto:[email protected]]

Sent: Friday, August 22, 2014 2:04 AM

To: Metod Kozelj; Bob (Robert) McMahon

Cc: [email protected]<mailto:[email protected]>

Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth 
results if Iperf server is unreachable



Hi,



please see the full output below:



root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
------------------------------------------------------------
Client connecting to 10.10.10.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.16 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  422744 MBytes   59104 Mbits/sec
[  3] 60.0-120.0 sec  435030 MBytes   60822 Mbits/sec
[  3] 120.0-180.0 sec  402263 MBytes   56240 Mbits/sec
[  3] 180.0-240.0 sec  398167 MBytes   55668 Mbits/sec
[  3] 240.0-300.0 sec  422746 MBytes   59104 Mbits/sec
[  3] 300.0-360.0 sec  381786 MBytes   53378 Mbits/sec
[  3] 360.0-420.0 sec  402263 MBytes   56240 Mbits/sec
[  3] 420.0-480.0 sec  406365 MBytes   56814 Mbits/sec
[  3] 480.0-540.0 sec  438132 MBytes   61395 Mbits/sec
[  3]  0.0-600.0 sec  4108674 MBytes   57443 Mbits/sec
[  3] Sent 6119890 datagrams
read failed: No route to host
[  3] WARNING: did not receive ack of last datagram after 3 tries.
root@vserver:~#





In UDP mode the Iperf client will send the data despite the fact that the Iperf
server is not reachable.

Still, to me this looks like a bug. An Iperf client reporting ~60 Gbps of egress
traffic on a virtual machine with a 1GigE vNIC, while the bandwidth is capped with
the -b flag, is IMHO not expected behavior.





regards,

Martin





On 8/22/14, Metod Kozelj <[email protected]> wrote:

Hi,



the bandwidth limitation switch (-b) limits the maximum rate at which the sending
party (usually the client) will transmit data, provided there is no bottleneck that
the sending party is able to detect. If the test is done using TCP, a bottleneck
will be apparent to the client (the IP stack always blocks transmission if
outstanding data has not been delivered yet). If the test is done using UDP, the
sending party will mostly just transmit data at the maximum rate, except in some
rare cases.
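
As a rough illustration of that kind of sender-side pacing (a simplified sketch only,
not iperf's actual pacing code; it reuses localhost and port 42000 from the command
that follows):

import socket
import time

TARGET_BPS = 500 * 1000 * 1000            # roughly what -b 500m asks for
PAYLOAD = b"\x00" * 1470                  # iperf's default UDP datagram size
GAP = len(PAYLOAD) * 8.0 / TARGET_BPS     # ideal spacing between datagrams

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("localhost", 42000)

sent = 0
start = time.time()
while time.time() - start < 10.0:         # roughly -t 10
    sock.sendto(PAYLOAD, dest)            # UDP "succeeds" whether or not
    sent += len(PAYLOAD)                  # anything is listening
    time.sleep(GAP)                       # a real pacer corrects for timer
                                          # granularity; this one does not

elapsed = time.time() - start
print("offered %.1f Mbit/s" % (sent * 8 / elapsed / 1e6))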



To verify this, you can run iperf in client mode with a command similar to this:

iperf -c localhost -i 1 -p 42000 -u -b500M -t 10

... make sure that the port used in the command above (42000) is not used by some
other application. If you vary the bandwidth setting, you can see that there is a
practical maximum speed that even the loopback network device can handle. While
experimenting with the command above, I found a few interesting facts about my
particular machine:



  * when targeting a machine on my 100 Mbps LAN, the transmit rate would not go
    beyond 96 Mbps (which is consistent with 100 Mbps being the wire speed while
    UDP over Ethernet carries some overhead)
  * when targeting the loopback device with a "low" bandwidth requirement (such as
    50 Mbps), the transmit rate would be exactly half of what was requested. I don't
    know if this is some kind of reporting artefact or it actually does transmit at
    half the rate
  * the UDP transmit rate over the loopback device would not go beyond 402 Mbps.



I was using iperf 2.0.5, and I found that it behaves similarly on another host
(402 Mbps max over loopback, up to 812 Mbps over GigE).

The tests above show that loopback devices (and I would count any virtualised
network device as such) are subject to some kind of limit.



Peace!

   Mkx



-- perl -e 'print

$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc



------------------------------------------------------------------------------



BOFH excuse #299:



The data on your hard drive is out of balance.







Martin T wrote on 21/08/14 16:51:

Metod,



but shouldn't the Iperf client send out traffic at 500 Mbps, as I had "-b 500m"
specified? In my example it prints unrealistic bandwidth results (~60 Gbps).





regards,

Martin



On 8/21/14, Metod Kozelj <[email protected]> wrote:

Hi,



Martin T wrote on 21/08/14 15:12:

if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and 10.10.10.1 is
behind a firewall so that the Iperf client is not able to reach it, then I will see
the following results printed by the Iperf client:

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  422744 MBytes   59104 Mbits/sec
[  3] 60.0-120.0 sec  435030 MBytes   60822 Mbits/sec
etc





Why does the Iperf client behave like that? Is this a known bug?

That's not a bug in iperf, it's how UDP works. The main difference between TCP and
UDP is that with TCP, the IP stack itself takes care of all the details (such as
in-order delivery, retransmissions, rate adaptation, ...), while with UDP that is
the responsibility of the application. The only extra thing the iperf application
does when using UDP is to fetch the server (receiving side) report at the end of
the transmission. Even this is not done perfectly ... the sending side only waits
for the server report for a short time, and if it has filled the network buffers,
this waiting time can be too short.

The same phenomenon can be seen if there's a bottleneck somewhere between the nodes
and you try to push the data rate too high ... routers at either side of the
bottleneck will discard packets when their TX buffers fill up. If TCP were used,
this would trigger retransmissions in the IP stack, TCP slow start would kick in,
and the sending application would notice a drop in throughput. If UDP is used, the
IP stack does not react in any way and the application dumps data at top speed.
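
A rough sketch of that last difference as seen from the application (an illustration
only; 192.0.2.1 is just a documentation address standing in for a path that silently
drops the packets):

import socket
import time

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("192.0.2.1", 5001)     # assume these datagrams get dropped downstream
payload = b"\x00" * 1470

count = 10000
start = time.time()
for _ in range(count):
    udp.sendto(payload, dest)  # returns immediately; UDP gives no delivery feedback
elapsed = time.time() - start

print("UDP accepted %d datagrams (%.0f Mbit/s offered) with no feedback"
      % (count, count * len(payload) * 8 / elapsed / 1e6))
# A TCP send() loop over the same bottlenecked path would stall as soon as the
# in-flight window filled, because the stack waits for acknowledgements.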

--



Peace!

    Mkx



-- perl -e 'print

$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc



------------------------------------------------------------------------------



BOFH excuse #252:



Our ISP is having {switching,routing,SMDS,frame relay} problems









--

Peace!

  Mkx



-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

________________________________

BOFH excuse #79:



Look, buddy:  Windows 3.1 IS A General Protection Fault.
