Re: Too aggressive TCP ACKs

2022-10-21 Thread Michael Tuexen
> On 21. Oct 2022, at 16:19, Zhenlei Huang  wrote:
> 
> Hi,
> 
> While I was repeating
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=258755, I observed
> strange behavior: the TCP ACKs from the FreeBSD host are too aggressive.
> 
> My setup is simple:
>         A                            B
>     [ MacOS ]  <------------> [ FreeBSD VM ]
>  192.168.120.1              192.168.120.134 (tso and lro disabled)
> While A <--- B, i.e. A as server and B as client, the packet rate looks good.
> 
> One session on B:
> 
> root@:~ # iperf3 -c 192.168.120.1 -b 10m
> Connecting to host 192.168.120.1, port 5201
> [  5] local 192.168.120.134 port 54459 connected to 192.168.120.1 port 5201
> [ ID] Interval   Transfer Bitrate Retr  Cwnd
> [  5]   0.00-1.00   sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> [  5]   1.00-2.00   sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> [  5]   2.00-3.00   sec  1.12 MBytes  9.44 Mbits/sec    0    257 KBytes
> [  5]   3.00-4.00   sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> [  5]   4.00-5.00   sec  1.12 MBytes  9.44 Mbits/sec    0    257 KBytes
> [  5]   5.00-6.00   sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> [  5]   6.00-7.00   sec  1.12 MBytes  9.44 Mbits/sec    0    257 KBytes
> [  5]   7.00-8.00   sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> [  5]   8.00-9.00   sec  1.12 MBytes  9.44 Mbits/sec    0    257 KBytes
> [  5]   9.00-10.00  sec  1.25 MBytes  10.5 Mbits/sec    0    257 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bitrate Retr
> [  5]   0.00-10.00  sec  12.0 MBytes  10.1 Mbits/sec    0            sender
> [  5]   0.00-10.00  sec  12.0 MBytes  10.1 Mbits/sec                 receiver
> 
> iperf Done.
> 
> Another session on B:
> 
> root@:~ # netstat -w 1 -I vmx0
>             input          vmx0           output
>    packets  errs idrops      bytes    packets  errs      bytes colls
>          0     0      0          0          0     0          0     0
>          0     0      0          0          0     0          0     0
>        342     0      0      22600        526     0     775724     0
>        150     0      0       9900        851     0    1281454     0
>        109     0      0       7194        901     0    1357850     0
>        126     0      0       8316        828     0    1246632     0
>        122     0      0       8052        910     0    1370780     0
>        109     0      0       7194        819     0    1233702     0
>        120     0      0       7920        910     0    1370780     0
>        110     0      0       7260        819     0    1233702     0
>        123     0      0       8118        910     0    1370780     0
>        109     0      0       7194        819     0    1233702     0
>         73     0      0       5088        465     0     686342     0
>          0     0      0          0          0     0          0     0
>          0     0      0          0          0     0          0     0
> 
> While A ---> B, i.e. A as client and B as server, the ACKs sent from B look
> strange.
> 
> Session on A:
> 
> % iperf3 -c 192.168.120.134 -b 10m
> Connecting to host 192.168.120.134, port 5201
> [  5] local 192.168.120.1 port 52370 connected to 192.168.120.134 port 5201
> [ ID] Interval   Transfer Bitrate
> [  5]   0.00-1.00   sec  1.25 MBytes  10.5 Mbits/sec  
> [  5]   1.00-2.00   sec  1.25 MBytes  10.5 Mbits/sec  
> [  5]   2.00-3.00   sec  1.12 MBytes  9.44 Mbits/sec  
> [  5]   3.00-4.00   sec  1.25 MBytes  10.5 Mbits/sec  
> [  5]   4.00-5.00   sec  1.12 MBytes  9.44 Mbits/sec  
> [  5]   5.00-6.00   sec  1.25 MBytes  10.5 Mbits/sec  
> [  5]   6.00-7.00   sec  1.12 MBytes  9.44 Mbits/sec  
> [  5]   7.00-8.00   sec  1.25 MBytes  10.5 Mbits/sec  
> [  5]   8.00-9.00   sec  1.12 MBytes  9.44 Mbits/sec  
> [  5]   9.00-10.00  sec  1.25 MBytes  10.5 Mbits/sec  
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval   Transfer Bitrate
> [  5]   0.00-10.00  sec  12.0 MBytes  10.1 Mbits/sec  sender
> [  5]   0.00-10.00  sec  12.0 MBytes  10.1 Mbits/sec  receiver
> 
> iperf Done.
> 
> Session on B:
> 
> root@:~ # netstat -w 1 -I vmx0
>             input          vmx0           output
>    packets  errs idrops      bytes    packets  errs      bytes colls
>          0     0      0          0          0     0          0     0
>          0     0      0          0          0     0          0     0
>        649     0      0     960562        330     0      21800     0
>        819     0      0    1233702        415     0      27390     0
>        910     0      0    1370780         45[...]
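
[A rough reading of the counters above: while B sends, it receives only
~110-150 ACKs for ~820-910 data packets sent per second, i.e. macOS ACKs
roughly every 6th to 8th segment. While B receives, it sends ~415 ACKs for
~819 data packets received, i.e. FreeBSD ACKs every other segment
(27390 bytes / 415 packets = 66 bytes per ACK). That asymmetry is the
behavior discussed in the rest of the thread.]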

Re: Too aggressive TCP ACKs

2022-10-21 Thread Zhenlei Huang

> On Oct 21, 2022, at 10:34 PM, Michael Tuexen
> <michael.tue...@lurchi.franken.de> wrote:
> 
>> On 21. Oct 2022, at 16:19, Zhenlei Huang <zlei.hu...@gmail.com> wrote:
>> 
>> [...]

Re: Too aggressive TCP ACKs

2022-10-21 Thread Cui, Cheng
You can also think about macOS's delayed-ACK defaults being conservative.

https://developer.apple.com/forums/thread/716394
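
A minimal way to inspect that, assuming a stock macOS install (the knob
takes values 0-3, where 0 disables delayed ACKs and 3, the usual default,
auto-detects):

% sysctl net.inet.tcp.delayed_ack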


--
Cheng Cui

From: owner-freebsd-...@freebsd.org  on behalf 
of Zhenlei Huang 
Date: Friday, October 21, 2022 at 11:01 AM
To: Michael Tuexen 
Cc: freebsd-net@freebsd.org 
Subject: Re: Too aggressive TCP ACKs



On Oct 21, 2022, at 10:34 PM, Michael Tuexen
<michael.tue...@lurchi.franken.de> wrote:

[...]

Re: Too aggressive TCP ACKs

2022-10-21 Thread Zhenlei Huang

> On Oct 21, 2022, at 11:02 PM, Cui, Cheng <cheng@netapp.com> wrote:
> 
> You can also think about macOS's delayed-ACK defaults being conservative.
> 
> https://developer.apple.com/forums/thread/716394

I think that's a good starting point. Thanks!

> 
> --
> Cheng Cui
> 
> [...]

Re: Too aggressive TCP ACKs

2022-10-21 Thread Michael Tuexen
> On 21. Oct 2022, at 17:00, Zhenlei Huang wrote:
> 
> [...]

Re: Too aggressive TCP ACKs

2022-10-21 Thread Zhenlei Huang

> On Oct 22, 2022, at 2:16 AM, Michael Tuexen wrote:
> 
> [...]

Re: Too aggressive TCP ACKs

2022-10-22 Thread Hans Petter Selasky

Hi,

Some thoughts about this topic.

Delaying ACKs means a loss of performance when using gigabit TCP
connections in data centers. There it is important to ACK the data as
quickly as possible, to avoid running out of TCP window space. Think of
TCP connections at 30 GBit/s and above!


I think the implementation should be exactly like it is.

There is a software LRO in FreeBSD to coalesce the ACKs before they hit 
the network stack, so there are no real problems there.
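
A minimal sketch of checking that, assuming the vmx0 interface from the
report (LRO shows up as an interface capability flag):

# ifconfig vmx0 | grep -i options   # look for LRO among the enabled options
# ifconfig vmx0 lro                 # enable LRO
# ifconfig vmx0 -lro                # disable it again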


--HPS





Re: Too aggressive TCP ACKs

2022-10-26 Thread Tom Jones
On Sat, Oct 22, 2022 at 12:14:25PM +0200, Hans Petter Selasky wrote:
> [...]
> 
> There is a software LRO in FreeBSD to coalesce the ACKs before they hit 
> the network stack, so there are no real problems there.
> 

Changing the ACK ratio seems to be okay in most cases; a paper I wrote
about this was published this week:

https://onlinelibrary.wiley.com/doi/10.1002/sat.1466

It focuses on QUIC, but congestion control dynamics don't change with
the protocol. You should be able to read it there, but if not I'm happy to
send anyone a PDF.

- Tom



Re: Too aggressive TCP ACKs

2022-10-26 Thread Hans Petter Selasky

On 10/26/22 10:57, Tom Jones wrote:

> It focuses on QUIC, but congestion control dynamics don't change with
> the protocol. You should be able to read it there, but if not I'm happy
> to send anyone a PDF.


If QUIC doesn't support TSO (TCP Segmentation Offload), it cannot be
compared, I think.


--HPS



Re: Too aggressive TCP ACKs

2022-10-26 Thread tuexen
> On 26. Oct 2022, at 10:57, Tom Jones  wrote:
> 
> On Sat, Oct 22, 2022 at 12:14:25PM +0200, Hans Petter Selasky wrote:
>> [...]
>> 
> 
> Changing the ACK ratio seems to be okay in most cases, a paper I wrote
> about this was published this week:
> 
> https://onlinelibrary.wiley.com/doi/10.1002/sat.1466
> 
> It focuses on QUIC, but congestion control dynamics don't change with
> the protocol. You should be able to read there, but if not I'm happy to
> send anyone a pdf.
Is QUIC using an L=2 for ABC?

Best regards
Michael
> 
> - Tom




Re: Too aggressive TCP ACKs

2022-10-26 Thread Tom Jones
On Wed, Oct 26, 2022 at 02:55:21PM +0200, tue...@freebsd.org wrote:
> > On 26. Oct 2022, at 10:57, Tom Jones  wrote:
> > 
> > On Sat, Oct 22, 2022 at 12:14:25PM +0200, Hans Petter Selasky wrote:
> >> Hi,
> >> 
> >> Some thoughts about this topic.
> >> 
> >> Delaying ACKs means loss of performance when using Gigabit TCP 
> >> connections in data centers. There it is important to ACK the data as 
> >> quick as possible, to avoid running out of TCP window space. Thinking 
> >> about TCP connections at 30 GBit/s and above!
> >> 
> >> I think the implementation should be exactly like it is.
> >> 
> >> There is a software LRO in FreeBSD to coalesce the ACKs before they hit 
> >> the network stack, so there are no real problems there.
> >> 
> > 
> > Changing the ACK ratio seems to be okay in most cases, a paper I wrote
> > about this was published this week:
> > 
> > https://onlinelibrary.wiley.com/doi/10.1002/sat.1466
> > 
> > It focuses on QUIC, but congestion control dynamics don't change with
> > the protocol. You should be able to read there, but if not I'm happy to
> > send anyone a pdf.
> Is QUIC using an L=2 for ABC?

I think that is the RFC recommendation; actual deployed reality is more
scattershot.

- Tom



Re: Too aggressive TCP ACKs

2022-10-26 Thread tuexen
> On 26. Oct 2022, at 14:59, Tom Jones  wrote:
> 
> On Wed, Oct 26, 2022 at 02:55:21PM +0200, tue...@freebsd.org wrote:
>>> On 26. Oct 2022, at 10:57, Tom Jones  wrote:
>>> 
>>> On Sat, Oct 22, 2022 at 12:14:25PM +0200, Hans Petter Selasky wrote:
>>>> [...]
>>> 
>>> Changing the ACK ratio seems to be okay in most cases, a paper I wrote
>>> about this was published this week:
>>> 
>>> https://onlinelibrary.wiley.com/doi/10.1002/sat.1466
>>> 
>>> It focuses on QUIC, but congestion control dynamics don't change with
>>> the protocol. You should be able to read there, but if not I'm happy to
>>> send anyone a pdf.
>> Is QUIC using an L=2 for ABC?
> 
> I think that is the rfc recommendation, actual deployed reality is more
> scattershot.
Wouldn't that be relevant? If you get an ACK for, let's say, 8 packets, you
would only increment (in slow start) the cwnd by 2 packets, not 8?
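
A sketch of the arithmetic, per RFC 3465 (Appropriate Byte Counting): in
slow start each ACK grows the window by

    cwnd += min(bytes_acked, L * SMSS)

so with L=2 an ACK covering 8 * SMSS bytes grows cwnd by only 2 * SMSS,
not the full 8 * SMSS.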

Best regards
Michael
> 
> - Tom
> 




RE: Too aggressive TCP ACKs

2022-10-27 Thread Scheffenegger, Richard
 It focuses on QUIC, but congestion control dynamics don't change 
 with the protocol. You should be able to read there, but if not I'm 
 happy to send anyone a pdf.
>>> Is QUIC using an L=2 for ABC?
>>
>> I think that is the rfc recommendation, actual deployed reality is 
>> more scattershot.
>Wouldn't that be relevant? If you get an ack for, let's say 8 packets, you 
>would only increment (in slow start) the cwnd by 2 packets, not 8?
>
>Best regards
>Michael

Isn't that the optimization in Linux with QuickAck: during the periods where
the data receiver assumes that the sender is still in slow start, it ACKs
every packet?
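
For reference, quickack can also be forced per route on the Linux side; a
hedged sketch, with eth0 as a hypothetical interface name:

# ip route change 192.168.120.0/24 dev eth0 quickack 1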

Richard



RE: Too aggressive TCP ACKs

2022-10-27 Thread Scheffenegger, Richard
To come back to this.

With TSO / LRO disabled, FBSD is behaving per RFC by acking every other data
packet, or, when data stops on an “uneven” packet, sending the ACK after a
short delay (delayed ACK).
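
One way to observe the ACK ratio directly, assuming the vmx0 interface and
iperf3's default port from the report:

# tcpdump -ni vmx0 'tcp port 5201'

and count the pure ACKs from B between the data segments from A.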

If you want to see different ACK ratios, and also higher gigabit throughput 
rates for a single session, maybe test with the FBSD RACK stack

https://klarasystems.com/articles/using-the-freebsd-rack-tcp-stack/

sysctl net.inet.tcp.functions_default=rack
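
A sketch of the full sequence, assuming a kernel built with the extra TCP
stacks (WITH_EXTRA_TCP_STACKS=1):

# kldload tcp_rack                            # load the RACK stack module
# sysctl net.inet.tcp.functions_available     # verify "rack" is listed
# sysctl net.inet.tcp.functions_default=rack  # use it for new connections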

Richard


From: owner-freebsd-...@freebsd.org  On Behalf 
Of Zhenlei Huang
Sent: Friday, October 21, 2022 16:19
To: freebsd-net@freebsd.org
Subject: Too aggressive TCP ACKs


[...]

Re: Too aggressive TCP ACKs

2022-10-27 Thread tuexen
> On 27. Oct 2022, at 10:08, Scheffenegger, Richard 
>  wrote:
> 
> [...]
> 
> Isn't that the optimization in Linux with QuickAck: during the periods where
> the data receiver assumes that the sender is still in slow start, it ACKs
> every packet?
Sure. But that is not specified... I just wanted to point out that simply
"Changing the ACK ratio seems to be okay in most cases" might be more complex
than the sentence reads...

Best regards
Michael

> 
> Richard
> 




Re: Too aggressive TCP ACKs

2022-11-08 Thread Zhenlei Huang

> On Oct 22, 2022, at 6:14 PM, Hans Petter Selasky  wrote:
> 
> Hi,
> 
> Some thoughts about this topic.

Sorry for the late response.

> 
> Delaying ACKs means a loss of performance when using gigabit TCP connections
> in data centers. There it is important to ACK the data as quickly as
> possible, to avoid running out of TCP window space. Think of TCP connections
> at 30 GBit/s and above!

In data centers the bandwidth is much higher and the latency extremely low
compared to a WAN (sub-millisecond). The required TCP window is the bandwidth
multiplied by the RTT; for a 30 GBit/s network that is about 750 KiB. I think
that is trivial for a datacenter server.
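
The arithmetic behind that figure, assuming an RTT of about 200 µs:
30 Gbit/s × 200 µs = 6 Mbit ≈ 750 KB of data in flight.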

Section 4.2.3.2 of RFC 1122 states:
> in a stream of full-sized segments there SHOULD be an ACK for at least
> every second segment
Even if the receiver ACKed only every tenth segment, the impact of delayed
ACKs on the TCP window would not be significant (at most ten segments
un-ACKed in the TCP send window).

Anyway, for datacenter usage the bandwidth is symmetric and the reverse path
(the TX path of the receiver) is sufficient. Servers can even ACK every
segment (no delayed ACK).
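
On FreeBSD that behavior can be forced with the stock sysctls; a minimal
sketch:

# sysctl net.inet.tcp.delayed_ack=0   # ACK every segment immediately
# sysctl net.inet.tcp.delacktime      # the delayed-ACK timeout (ms)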

> 
> I think the implementation should be exactly like it is.
> 
> There is a software LRO in FreeBSD to coalesce the ACKs before they hit the 
> network stack, so there are no real problems there.

I'm OK with the current implementation.

I think the upper layers (or the application) have the business information
to indicate whether delayed ACKs should be employed. After googling I found
there's a draft [1].

[1] Sender Control of Delayed Acknowledgments in TCP:
https://www.ietf.org/archive/id/draft-gomez-tcpm-delack-suppr-reqs-01.xml

> 
> --HPS
> 
> 

Best regards,
Zhenlei

Re: Too aggressive TCP ACKs

2022-11-10 Thread Zhenlei Huang
> On Nov 9, 2022, at 11:18 AM, Zhenlei Huang <zlei.hu...@gmail.com> wrote:
> 
> [...]
> 
> [1] Sender Control of Delayed Acknowledgments in TCP:
> https://www.ietf.org/archive/id/draft-gomez-tcpm-delack-suppr-reqs-01.xml
Found the html / pdf / txt versions of the draft:
https://datatracker.ietf.org/doc/draft-gomez-tcpm-ack-pull/




Re: Too aggressive TCP ACKs

2022-11-10 Thread tuexen
> On 10. Nov 2022, at 08:07, Zhenlei Huang <zlei.hu...@gmail.com> wrote:
> 
> [...]
> 
> Found the html / pdf / txt versions of the draft:
> https://datatracker.ietf.org/doc/draft-gomez-tcpm-ack-pull/
Can you specify the problem you are facing or trying to solve?

Best regards
Michael




RE: Too aggressive TCP ACKs

2022-11-10 Thread Scheffenegger, Richard
This is the current draft in this space:

https://datatracker.ietf.org/doc/draft-gomez-tcpm-ack-rate-request/

and it has been adopted as a WG document at this week's IETF, from what I can
tell.

So it has traction. If you want to give your feedback, please subscribe to the
tcpm mailing list, and discuss your use case and how/if the approach aligns
with it there.

Richard



From: owner-freebsd-...@freebsd.org  On Behalf 
Of Zhenlei Huang
Sent: Thursday, November 10, 2022 09:07
To: Hans Petter Selasky 
Cc: Michael Tuexen ; freebsd-net@freebsd.org
Subject: Re: Too aggressive TCP ACKs

[...]



Re: Too aggressive TCP ACKs

2022-11-10 Thread Zhenlei Huang

> On Nov 10, 2022, at 5:28 PM, tue...@freebsd.org wrote:
> 
>> On 10. Nov 2022, at 08:07, Zhenlei Huang <zlei.hu...@gmail.com> wrote:
>> 
>> [...]
> 
> Can you specify the problem you are facing or trying to solve?

For me, no problems currently.

> I think the upper layers (or the application) have the business information
> to indicate whether delayed ACKs should be employed.

That is from my experience in software development. In typical layered
architectures, callers have more information than callees. To be flexible, a
callee generally should not presume too much, but should instead offer
options to support what the callers intend.

It is a little off-topic, but regarding "Too aggressive TCP ACKs": delaying
the ACK to every third or later full-sized segment is a known issue, the
"Stretch ACK violation"; see Section 2.13 of RFC 2525.



Best regards,
Zhenlei



Re: Too aggressive TCP ACKs

2022-11-10 Thread Zhenlei Huang
> On Nov 10, 2022, at 8:01 PM, Scheffenegger, Richard wrote:
> 
> This is the current draft in this space:
> 
> https://datatracker.ietf.org/doc/draft-gomez-tcpm-ack-rate-request/
> 
> and it has been adopted as a WG document at this week's IETF, from what I
> can tell.

Thanks for that information!

> 
> So it has traction. If you want to give your feedback, please subscribe to
> the tcpm mailing list, and discuss your use case and how/if the approach
> aligns with it there.

Subscribed.

> 
> Richard
> 
> [...]