Hi, 

That indeed looks like an issue caused by vpp not being able to recycle 
connections fast enough. Only about 64k connections are available between vpp 
and the upstream server, so recycling them as fast as possible, i.e., with a 0 
timeout as the kernel does once the tcp_max_tw_buckets threshold is hit, can 
make performance look moderately good, assuming fewer than 64k connections are 
active (i.e., not closing) at any time. 
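
To see why the 64k limit bites with short-lived connections, here is a 
back-of-the-envelope sketch (the port count and TIME_WAIT duration are 
illustrative assumptions, not measured values):

```python
# Rough bound on new-connection rate between one proxy source IP and one
# upstream IP:port. Each closed connection parks its source port in
# TIME_WAIT, so ports are only recycled after the timewait period expires.

EPHEMERAL_PORTS = 64_000   # roughly the usable source-port range (assumption)
TIME_WAIT_SECS = 60        # a common TIME_WAIT duration (assumption)

# Sustainable rate of new short-lived connections is bounded by how fast
# ports come back out of TIME_WAIT:
max_conn_rate = EPHEMERAL_PORTS / TIME_WAIT_SECS

print(f"max sustainable new connections/s: {max_conn_rate:.0f}")  # ~1067
```

So with a 60s timewait, a single source/destination IP pair tops out around 
a thousand new connections per second, regardless of how fast the stack 
otherwise is; dropping the timewait toward 0 removes that bound at the cost 
of the connection errors discussed below.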

However, as explained in the previous emails, that might lead to connection 
errors (see my previous links). You could try to emulate that with vpp by 
setting timewait-time to 0, but the same disclaimer regarding connection 
errors holds. The only other option is to ensure vpp can allocate more 
connections to the upstream server, i.e., configure either more source IPs or 
more destination/server IPs.
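
For reference, the timewait-time workaround mentioned above would look 
roughly like the following startup.conf fragment. This is a sketch, assuming 
the tcp stanza accepts timewait-time (in seconds) in your VPP build; the 0 
value carries the same connection-error caveat:

```
tcp {
  # recycle connections immediately after close (risks stray-segment
  # connection errors, as discussed above)
  timewait-time 0
}
```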

Regards,
Florin 

> On May 2, 2022, at 8:33 AM, weizhen9...@163.com wrote:
> 
> Hi,
>     By "short link" we mean a short-lived connection: after the client sends 
> a GET request, it sends a tcp FIN packet. By "long link" we mean a persistent 
> connection: after the client sends a GET request, it sends the next HTTP GET 
> request over the same connection and does not need to send a syn packet. 
>     We found that when vpp and the upstream servers use short links, the 
> performance is lower than that of an nginx proxy using the kernel host stack. 
> The picture shows the performance of the nginx proxy using the vpp host 
> stack. 
> <dummyfile.0.part>
> Actually, the performance of an nginx proxy using the vpp host stack should 
> be higher than that of an nginx proxy using the kernel host stack. I don't 
> understand why. 
> Thanks.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21322): https://lists.fd.io/g/vpp-dev/message/21322
Mute This Topic: https://lists.fd.io/mt/90793836/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
