Hi Florin,

I wrote a simple epoll TCP proxy (using VCL) and saw the same behavior; a trimmed sketch of its event loop is below.
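In case it's useful, the core of the loop looks roughly like this (a sketch only: bind/listen/connect, error handling and session pairing are left out, proxy_peer_of() is a made-up helper that returns the paired backend session, and the vppcom names are from vppcom.h as I remember them):

#include <sys/epoll.h>
#include <vppcom.h>

extern uint32_t proxy_peer_of (uint32_t sh); /* hypothetical client<->backend map */

static void
proxy_event_loop (uint32_t listen_sh)
{
  struct epoll_event ev, events[16];
  int epfd, n, i;

  epfd = vppcom_epoll_create ();
  ev.events = EPOLLIN;
  ev.data.u32 = listen_sh;
  vppcom_epoll_ctl (epfd, EPOLL_CTL_ADD, listen_sh, &ev);

  while (1)
    {
      /* as I read vppcom, a negative wait time blocks indefinitely */
      n = vppcom_epoll_wait (epfd, events, 16, -1);
      for (i = 0; i < n; i++)
        {
          uint32_t sh = events[i].data.u32;
          if (sh == listen_sh)
            {
              /* new client: accept it and start polling it too */
              uint8_t ip[16];
              vppcom_endpt_t peer = { .ip = ip };
              int csh = vppcom_session_accept (listen_sh, &peer, 0);
              ev.events = EPOLLIN;
              ev.data.u32 = csh;
              vppcom_epoll_ctl (epfd, EPOLL_CTL_ADD, csh, &ev);
            }
          else
            {
              /* relay whatever is readable to the paired session */
              char buf[4096];
              int rx = vppcom_session_read (sh, buf, sizeof (buf));
              if (rx > 0)
                vppcom_session_write (proxy_peer_of (sh), buf, rx);
            }
        }
    }
}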

I increased the fifo size to 16k, but got exactly the same effect. A full 
dump of a session with a 16k buffer is available at [1] (192.168.0.1 is the 
VPP interface, 192.168.0.200 is the Linux host running nginx). The fifo 
settings I used are below for reference.
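The 16k fifos were set through the VCL configuration file (the one the 
VCL_CONFIG environment variable points at), roughly like this (parameter 
names from memory, please double-check against your vppcom config parser):

vcl {
  rx-fifo-size 16384
  tx-fifo-size 16384
}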

Maybe we should not allow the buffer to fill completely?
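(Putting numbers on it: with the 8192-byte rx fifo from the original dump and 
a typical 1460-byte MSS, five full segments leave 8192 - 5 * 1460 = 892 bytes 
of advertised window, i.e. less than one MSS. If I understand the sender-side 
silly window avoidance of RFC 1122 correctly, that is exactly when Linux 
starts holding sub-MSS segments back, which matches the delayed packets in 
the dump.)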

P.S. I tested several host stacks that work with DPDK, but this behavior is 
observed only in VPP.
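
P.P.S. Florin, about growing the fifos from the app: from a builtin app I 
assume it would look something like the sketch below, though I have not 
checked the exact segment_manager_grow_fifo signature against the tree, so 
treat it as a guess:

#include <vnet/session/session.h>
#include <vnet/session/segment_manager.h>

static void
proxy_grow_rx_fifo (session_t * s, u32 grow_bytes)
{
  /* the fifo stores the index of the segment manager that owns it */
  segment_manager_t *sm = segment_manager_get (s->rx_fifo->segment_manager);
  /* assumed signature: grow this fifo by grow_bytes */
  segment_manager_grow_fifo (sm, s->rx_fifo, grow_bytes);
}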

Thanks.

[1] https://drive.google.com/open?id=1JV1zSpggwEoWdeddDR8lcY3yeQRY6-B3

>Wednesday, July 24, 2019, 17:45 +03:00 from Florin Coras <fcoras.li...@gmail.com>:
>
>Hi, 
>
>It seems that Linux is reluctant to send a segment smaller than the MSS, so it 
>probably delays sending it. Since there's little fifo space, that's pretty 
>much unavoidable. 
>
>Still, note that as you increase the number of sessions, if all send traffic 
>at the same rate, then their fair share will be considerably lower than the 
>maximum you can achieve on your interfaces. If you expect some sessions to be 
>“elephant flows”, you could solve the issue by growing their fifos (see 
>segment_manager_grow_fifo) from the app. The builtin TCP proxy does not 
>support this at this time, so you’ll have to do it yourself. 
>
>Florin
>
>> On Jul 24, 2019, at 1:34 AM, max1976 via Lists.Fd.Io 
>> <max1976=mail...@lists.fd.io> wrote:
>> 
>> Hello,
>> 
>> Experimenting with the fifo size, I noticed a problem: the smaller the 
>> fifo, the more often TCP window overflow errors occur ("Segment not in 
>> receive window" in VPP terminology). The dump [1] shows the data exchange 
>> between the VPP TCP proxy (192.168.0.1) and an nginx server on Linux 
>> (192.168.0.200); the rx fifo size in VPP is set to 8192 bytes. The red 
>> arrow indicates that VPP is waiting for the last data needed to fill the 
>> buffer. The green arrow indicates that the Linux host stack is sending 
>> data with a significant delay.
>> This behavior significantly reduces throughput. I plan to use a large 
>> number of simultaneous sessions, so I cannot make the fifos too large. 
>> How can I solve this problem?
>> 
>> Thanks.
>> [1]  https://monosnap.com/file/XfDjcqvpofIR7fJ6lEXgoyCB17LdfY 


-- 
Max A.