Hi Max, 

Inline.

> On Jul 24, 2019, at 10:09 AM, Max A. <max1...@mail.ru> wrote:
> 
> 
> Hi Florin,
> 
> I made a simple epoll tcp proxy (using vcl) and saw the same behavior.
> 
> I increased the fifo size to 16k, but I got exactly the same effect. A full 
> dump for a session with a 16k buffer is available at [1] (192.168.0.1 is the 
> interface on vpp, 192.168.0.200 is the Linux host running nginx).

Since the fifos are of fixed size, you’ll see the same effect as long as the 
sender is faster than the rate at which data is dequeued from the rx fifo. 
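
To make that concrete, here’s a toy sketch (the struct and function names are 
invented for illustration, they are not actual vpp code): the advertised 
receive window is bounded by the free space left in the rx fifo, so once the 
app dequeues slower than the sender transmits, the window shrinks toward zero 
and the sender stalls.

/* Illustration only: names invented for this sketch, not vpp apis */
#include <stdint.h>

typedef struct
{
  uint32_t size; /* fixed size picked when the session is created */
  uint32_t used; /* bytes enqueued but not yet read by the app */
} rx_fifo_sketch_t;

/* The advertised receive window can never exceed the free fifo space, so a
 * slow reader inevitably drives it to zero, whatever the fifo size. */
static inline uint32_t
advertised_rcv_wnd (const rx_fifo_sketch_t *f)
{
  return f->size - f->used;
}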

> 
> Maybe we should not allow the buffer to fill up completely?

As explained above, this will happen as long as the sender is faster. Still, 
out of curiosity, can you try this [1] to see if it changes Linux’s behavior in 
any way? That said, I suspect Linux’s window probe timer, after a zero window, 
is not smaller than the min RTO (which is the 200 ms you’re seeing). 

[1] https://gerrit.fd.io/r/c/20830/
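
If you want to double check the min RTO theory on the Linux side, something 
like the sketch below (purely illustrative, error handling trimmed) reads 
tcpi_rto from TCP_INFO on the sender socket; if it reports on the order of 
200 ms, that would match the pauses in your capture.

/* Sketch: query the sender's current rto on Linux via TCP_INFO.
 * tcpi_rto is reported in microseconds. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static void
print_rto (int fd)
{
  struct tcp_info ti;
  socklen_t len = sizeof (ti);

  if (getsockopt (fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
    printf ("current rto: %u us\n", ti.tcpi_rto);
}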

> 
> P.S. I tested several host stacks working with dpdk, but this behavior is 
> observed only in vpp.

Well, the question there is how large the rx buffers are. If you never see a 
zero rcv window advertised to the sender, I suspect the rx buffer is large 
enough to sustain the throughput. We will eventually support auto-tuning of 
the fifos (i.e., dynamically grow/shrink them based on observed throughput), but 
until then, you either request large enough fifos or write a builtin proxy that 
does the auto-tuning of the fifos via the shrink/grow fifo apis (e.g., 
segment_manager_grow_fifo).
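
For reference, the tuning loop in a builtin proxy could look roughly like the 
sketch below. I’m assuming an api of the shape 
segment_manager_grow_fifo (sm, fifo, extra_bytes) and an svm_fifo size 
accessor; check segment_manager.h and svm_fifo.h for the exact signatures 
before relying on this.

/* Rough sketch only: grow a session's rx fifo toward roughly one
 * bandwidth-delay product of buffering. Signatures assumed, not verified. */
static void
maybe_grow_rx_fifo (segment_manager_t *sm, svm_fifo_t *rx_fifo,
		    u64 observed_bytes_per_sec, u32 rtt_usec)
{
  /* aim for about one BDP of buffering at the observed throughput */
  u64 target = observed_bytes_per_sec * rtt_usec / 1000000;
  u32 current = svm_fifo_size (rx_fifo); /* assumed accessor */

  if (target > current)
    segment_manager_grow_fifo (sm, rx_fifo, target - current);
}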

Florin

> 
> Thanks.
> 
> [1] https://drive.google.com/open?id=1JV1zSpggwEoWdeddDR8lcY3yeQRY6-B3
> 
> On Wednesday, July 24, 2019, at 17:45 +03:00, Florin Coras <fcoras.li...@gmail.com> wrote:
> 
> Hi, 
> 
> It seems that Linux is reluctant to send a segment smaller than the MSS, so 
> it probably delays sending it. Since there’s little fifo space, that’s pretty 
> much unavoidable. 
> 
> Still, note that as you increase the number of sessions, if all send traffic 
> at the same rate, then their fair share will be considerably lower than the 
> maximum you can achieve on your interfaces. If you expect some sessions to be 
> “elephant flows”, you could solve the issue by growing their fifos (see 
> segment_manager_grow_fifo) from the app. The builtin tcp proxy does not 
> support this at this time, so you’ll have to do it yourself.  
> 
> Florin
> 
> > On Jul 24, 2019, at 1:34 AM, max1976 via Lists.Fd.Io 
> > <max1976=mail...@lists.fd.io> wrote:
> > 
> > Hello,
> > 
> > While experimenting with the fifo size, I noticed a problem: the smaller 
> > the fifo, the more often tcp window overflow errors occur (“Segment not in 
> > receive window” in vpp terminology). The dump [1] shows the data exchange 
> > between the vpp tcp proxy (192.168.0.1) and the nginx server running on 
> > Linux (192.168.0.200); the rx fifo size in vpp is set to 8192 bytes. The 
> > red arrow indicates that vpp is waiting for the latest data to fill the 
> > buffer. The green arrow indicates that the Linux host stack is sending 
> > data with a significant delay.
> > This behavior significantly reduces throughput. I plan to use a large 
> > number of simultaneous sessions, so I cannot make the fifos too large. How 
> > can I solve this problem?
> > 
> > Thanks.
> > [1] https://monosnap.com/file/XfDjcqvpofIR7fJ6lEXgoyCB17LdfY
> 
> 
> 
> -- 
> Max A.
