Not at an intermediate network, but at the host computer (or VM) that is
transmitting.

Bob

On Sun, Aug 2, 2020 at 6:49 PM Zufar Dhiyaulhaq <zufardhiyaul...@gmail.com>
wrote:

> Hi Bob and Tim,
>
> Thank you for responding to my question. Sometime today I will try
> increasing the window. Since I am testing this in a virtual environment
> (OpenStack), it is hard to define the wire limitation. I get UDP throughput
> somewhere between 100-120 Mbps and TCP throughput between 50-100 Mbps
> (using an MSS of 1402 for GENEVE and 1410 for VXLAN, since I am using a
> tunneling protocol).
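>
> For reference, this is the kind of command I have in mind (just a sketch
> for my setup; the 1390-byte length simply mirrors the control-connection
> MSS that iperf3 reports), pinning the UDP payload size below the tunnel
> MTU explicitly instead of relying on the default:
>
>     # keep each UDP datagram small enough to fit inside the tunnel MTU
>     iperf3 -c 192.168.0.92 --udp --bitrate 9000m -t 20 -l 1390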
>
> Regarding the UDP case Bob described: is the limitation you mentioned in
> the intermediate network? That is, will the client be forced to send all
> packets (by increasing the window size), which then get dropped in the
> intermediate network?
>
> Best Regards,
> Zufar Dhiyaulhaq
>
>
> On Mon, Aug 3, 2020 at 2:05 AM Bob McMahon <bob.mcma...@broadcom.com>
> wrote:
>
>> If the bottleneck is the transmitter's wire, things will back up behind
>> it.  The network stack on the client will queue packets.  Since it's in a
>> state of oversubscription, there is no way for the client to ever drain
>> the bottleneck, so to speak.  A bottleneck forms when packets arrive
>> faster than they can be serviced.  So one has to look more closely at the
>> client host, where the bottleneck is, to understand.  There are two things
>> happening: iperf is issuing writes() and the network stack is sending
>> packets.  While related, they're different.
>>
>> The iperf client issues a write() to cause the sending of a packet.  If
>> the operating system has system buffers available, it will accept the
>> write(); otherwise it has two choices: block (suspend the write until a
>> system buffer becomes available) or return an error on the write.  What I
>> suspect you're seeing is the OS blocking on the write().  Increasing the
>> window size will allow the OS to accept the write and pass the packet to
>> the network stack, which will in turn drop the packet.  Then you'll see
>> packet loss.
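>>
>> As a rough sketch (the sysctl names are the standard Linux ones; the 8 MB
>> value is just an example), you can check and, if needed, raise the
>> kernel's per-socket send-buffer ceiling so a larger -w setting can
>> actually take effect:
>>
>>     # current maximum size (bytes) a socket send buffer may be set to
>>     sysctl net.core.wmem_max
>>     # raise the ceiling to 8 MB so a request like -w 8M is honored
>>     sudo sysctl -w net.core.wmem_max=8388608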
>>
>> Did you try with a bigger window?
>>
>> Bob
>>
>> On Fri, Jul 31, 2020 at 4:29 PM Zufar Dhiyaulhaq <
>> zufardhiyaul...@gmail.com> wrote:
>>
>>> Hi Bob,
>>>
>>> Thanks for replying. In my understanding, increasing the bitrate above
>>> the available bandwidth/throughput should increase the packet loss,
>>> right? But in my case, I increased it to 9 Gbps and still do not see any
>>> packet loss. Will increasing the window size increase packet loss, and
>>> why would that happen?
>>>
>>> I am trying to simulate packet loss.
>>>
>>> Thanks
>>>
>>> Best Regards,
>>> Zufar Dhiyaulhaq
>>>
>>>
>>> On Sat, Aug 1, 2020 at 5:28 AM Bob McMahon <bob.mcma...@broadcom.com>
>>> wrote:
>>>
>>>> Try increasing the window size with -w on the client.  This will allow
>>>> the operating system to accept the write and drop packets within the
>>>> stack.  If the window is too small, the operating system will block the
>>>> write until OS buffers are available.
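>>>>
>>>> Something along these lines (the 4M window is just an example value for
>>>> a 9 Gbps offered load):
>>>>
>>>>     iperf3 -c 192.168.0.92 --udp --bitrate 9000m -t 20 -w 4M
>>>>
>>>> With a large enough socket buffer the writes are accepted immediately
>>>> and the excess datagrams are dropped inside the stack, which then shows
>>>> up as lost datagrams in the report.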
>>>>
>>>> Bob
>>>>
>>>> On Fri, Jul 31, 2020 at 8:56 AM Zufar Dhiyaulhaq <
>>>> zufardhiyaul...@gmail.com> wrote:
>>>>
>>>>> Hi Folks,
>>>>>
>>>>> I have a problem with iperf3. I am trying to simulate packet loss with
>>>>> iperf3 by increasing the bitrate above the available bandwidth, but the
>>>>> reported packet loss does not increase.
>>>>>
>>>>> Ubuntu 18.04
>>>>> Iperf 3.7.3
>>>>>
>>>>> I don't know why this is happening. Is there a bug in iperf3? This
>>>>> seems strange to me.
>>>>>
>>>>> ubuntu@vm1:~$ iperf3 -c 192.168.0.92 --udp -t 20 --bitrate 9000m -R -V
>>>>> iperf 3.7
>>>>> Linux vm1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64
>>>>> Control connection MSS 1390
>>>>> Setting UDP block size to 1390
>>>>> Time: Fri, 31 Jul 2020 15:53:39 GMT
>>>>> Connecting to host 192.168.0.92, port 5201
>>>>> Reverse mode, remote host 192.168.0.92 is sending
>>>>>       Cookie: geu5ktrvwtalkelbszen5ym4rzxfp5xgzwdy
>>>>>       Target Bitrate: 9000000000
>>>>> [  5] local 192.168.0.226 port 47999 connected to 192.168.0.92 port 5201
>>>>> Starting Test: protocol: UDP, 1 streams, 1390 byte blocks, omitting 0 seconds, 20 second test, tos 0
>>>>> [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
>>>>> [  5]   0.00-1.00   sec  13.8 MBytes   116 Mbits/sec  0.081 ms  269/10704 (2.5%)
>>>>> [  5]   1.00-2.00   sec  13.7 MBytes   115 Mbits/sec  0.085 ms  0/10346 (0%)
>>>>> [  5]   2.00-3.00   sec  13.6 MBytes   114 Mbits/sec  0.035 ms  126/10365 (1.2%)
>>>>> [  5]   3.00-4.00   sec  12.8 MBytes   107 Mbits/sec  0.033 ms  279/9946 (2.8%)
>>>>> [  5]   4.00-5.00   sec  13.5 MBytes   113 Mbits/sec  0.051 ms  262/10427 (2.5%)
>>>>> [  5]   5.00-6.00   sec  13.2 MBytes   111 Mbits/sec  0.058 ms  0/9965 (0%)
>>>>> [  5]   6.00-7.00   sec  13.3 MBytes   111 Mbits/sec  0.044 ms  32/10047 (0.32%)
>>>>> [  5]   7.00-8.00   sec  13.0 MBytes   109 Mbits/sec  0.053 ms  43/9874 (0.44%)
>>>>> [  5]   8.00-9.00   sec  13.0 MBytes   109 Mbits/sec  0.042 ms  34/9847 (0.35%)
>>>>> [  5]   9.00-10.00  sec  13.6 MBytes   114 Mbits/sec  0.055 ms  78/10305 (0.76%)
>>>>> [  5]  10.00-11.00  sec  13.5 MBytes   113 Mbits/sec  0.070 ms  0/10171 (0%)
>>>>> [  5]  11.00-12.00  sec  13.1 MBytes   110 Mbits/sec  0.047 ms  0/9851 (0%)
>>>>> [  5]  12.00-13.00  sec  13.3 MBytes   112 Mbits/sec  0.034 ms  0/10055 (0%)
>>>>> [  5]  13.00-14.00  sec  13.4 MBytes   112 Mbits/sec  0.040 ms  36/10136 (0.36%)
>>>>> [  5]  14.00-15.00  sec  13.9 MBytes   117 Mbits/sec  0.055 ms  437/10921 (4%)
>>>>> [  5]  15.00-16.00  sec  13.2 MBytes   111 Mbits/sec  0.043 ms  25/9964 (0.25%)
>>>>> [  5]  16.00-17.00  sec  13.2 MBytes   110 Mbits/sec  0.043 ms  21/9942 (0.21%)
>>>>> [  5]  17.00-18.00  sec  12.9 MBytes   108 Mbits/sec  0.046 ms  0/9702 (0%)
>>>>> [  5]  18.00-19.00  sec  13.4 MBytes   112 Mbits/sec  0.050 ms  208/10294 (2%)
>>>>> [  5]  19.00-20.00  sec  13.5 MBytes   113 Mbits/sec  0.048 ms  0/10152 (0%)
>>>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>>>> Test Complete. Summary Results:
>>>>> [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
>>>>> [  5]   0.00-20.04  sec   269 MBytes   113 Mbits/sec  0.000 ms  0/203058 (0%)  sender
>>>>> [  5]   0.00-20.00  sec   267 MBytes   112 Mbits/sec  0.048 ms  1850/203014 (0.91%)  receiver
>>>>>
>>>>> Thank you
>>>>>
>>>>> Best Regards,
>>>>> Zufar Dhiyaulhaq
>>>>>
>>>>
_______________________________________________
Iperf-users mailing list
Iperf-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/iperf-users
