Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-04-30 Thread weizhen9612
Hi,
Now I use nginx with the vpp host stack as a proxy to test performance.
But I find that the performance of nginx using the vpp host stack is lower than nginx
using the kernel host stack. The reason is that I configure tcp_max_tw_buckets in the
kernel host stack. So does the vpp host stack support an equivalent of tcp_max_tw_buckets?
If not, can I modify the vpp host stack?
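For reference, the kernel-side knob being compared against here is the
net.ipv4.tcp_max_tw_buckets sysctl; a typical way to set it looks like this (the
value is purely illustrative, not taken from this thread):

    sysctl -w net.ipv4.tcp_max_tw_buckets=5000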
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-04-30 Thread Florin Coras
Hi, 

What is the performance metric in this case, CPS? If yes, does nginx proxy only towards
one IP, hence the need for tcp_max_tw_buckets?

You have the option to reduce the TIME-WAIT time in tcp by setting timewait-time in
tcp’s startup.conf stanza. I would not recommend reducing it too much, as it can
lead to corruption of data streams whenever connections cannot be gracefully
closed because of lost packets.

If you have more IPs vpp could use on the interface towards your server,
I’d recommend providing them to tcp via: tcp src-address <first-ip> - <last-ip>
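For example, a minimal startup.conf/vppctl sketch of both knobs (the value and the
address range are illustrative placeholders, not recommendations):

    tcp {
      timewait-time 1      # seconds; see the caveat above about setting this too low
    }

    vpp# tcp src-address 192.168.6.6-192.168.6.250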

Regards,
Florin
 

> On Apr 30, 2022, at 4:17 AM, weizhen9...@163.com wrote:
> 
> Hi,
> Now I use nginx  which uses vpp host stack as a proxy to test the 
> performance. But I find the performance of nginx using vpp host stack is 
> lower  than nginx using kernel host stack. The reason is that I config the  
> tcp_max_tw_bucket in kernel host stack. So does the vpp stack support the 
> setting tcp_max_tw_bucket? If not, can I modify the vpp host stack?
> Thanks. 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-04-30 Thread weizhen9612
Hi,
I test the nginx proxy using RPS, and the nginx proxy only connects towards one IP.
Now I test the performance of the nginx proxy using the vpp host stack and, by
configuring nginx, there is a short connection between the nginx reverse proxy and
the upstream server. The test results show that the performance of the nginx proxy
using the vpp host stack is lower than the nginx proxy using the kernel host stack.
In the kernel host stack, I configure tcp_max_tw_buckets.
But when there is a long connection between the nginx reverse proxy and the
upstream server, the performance of the nginx proxy using the vpp host stack is higher
than the nginx proxy using the kernel host stack.
So what should I do to improve the performance of the nginx proxy using the vpp host
stack when there is a short connection between the nginx reverse proxy and the
upstream server?
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-04-30 Thread Florin Coras
Hi, 

Understood. See the comments in my previous reply regarding timewait-time
(tcp_max_tw_buckets practically sets the time-wait interval to 0 once the threshold
is passed) and tcp src-address.

Regards, 
Florin

> On Apr 30, 2022, at 10:08 AM, weizhen9...@163.com wrote:
> 
> Hi,
> I test nginx proxy using RPS. And nginx proxy only towards one IP.
> Now I test the performance of nginx proxy using vpp host stack and by 
> configuring nginx, it is a short connection between the nginx reverse proxy 
> and the upstream server. The result of test show that the performance of 
> nginx proxy using vpp host stack is lower than nginx proxy using kernel host 
> stack. In kernel host stack, I config tcp_max_tw_bucket.
> But when  it is a long connection between the nginx reverse proxy and the 
> upstream server, the performance of nginx proxy using vpp host stack is 
> higher than nginx proxy using kernel host stack.
> So what should I do to improve the performance of nginx proxy using vpp host 
> stack when  it is a short connection between the nginx reverse proxy and the 
> upstream server?
> Thanks. 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-01 Thread weizhen9612
Hi,
I set timewait-time to 1s in tcp's configuration, but the performance of the nginx
proxy using the vpp host stack is still lower than the nginx proxy using the kernel
host stack.
Now I want to know what I can do to improve the performance. And does the nginx
proxy using the vpp host stack support short connections?
In addition, as you said above, do I need to set time-wait to 0? I have not set
tcp src-address. I want the performance of the nginx proxy using the vpp host
stack to be higher than that of the nginx proxy using the kernel host stack on
the same hardware.
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-02 Thread Florin Coras
Hi, 

As per [1], after the tcp_max_tw_buckets threshold is hit the time-wait time is 0,
and [2] explains what will go wrong. Assuming you’re hitting the threshold, a 1s
timewait-time in vpp will probably not be enough to match performance.

Not sure what you mean by “short link”. If you can’t use multiple source IPs or
destination IPs in the active opens between vpp and the upstream servers,
there’s not much that could be done beyond what’s mentioned above, as vpp can’t
allocate more connections. If your nginx and the server it’s proxying for are
colocated, and the server can use vcl, you could maybe try to use cut-through
sessions, as those do not consume ports in tcp.
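If both apps attach to the same vpp over vcl, enabling local scope in vcl.conf is
roughly what allows cut-through sessions; a sketch, assuming both nginx and the
upstream server run over vcl on that host (option names from memory, please verify
against your vpp version):

    vcl {
      app-scope-local     # allow cut-through (local) sessions between apps on the same vpp
      app-scope-global
    }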

Regards, 
Florin

[1] https://sysctl-explorer.net/net/ipv4/tcp_max_tw_buckets/ 

[2] https://stackoverflow.com/questions/45979123/what-is-the-side-effect-of-setting-tcp-max-tw-buckets-to-a-very-small-value


> On May 1, 2022, at 2:09 AM, weizhen9...@163.com wrote:
> 
> Hi,
> I set the timewait_time which is equal to 1s in tcp's configuration. But 
> the performance of nginx proxy using vpp host stack is still lower than nginx 
> proxy using kernel host stack. 
> Now I want to know what can I do to improve the performance? And does 
> nginx proxy using vpp host stack support short link?
> In addition, just as you said above, do I need to sets time-wait to 0? 
> And I don't set tcp-src address. I hope the performance of nginx proxy using 
> vpp host stack is higher than the performance of nginx proxy using kernel 
> host stack in the hardware environment.
> Thanks. 
> 
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-02 Thread weizhen9612
Hi,
A short connection means that after the client sends a GET request, the client sends
a TCP FIN packet. A long connection means that after the client sends a GET
request, the client sends the next HTTP GET request over the same connection and
does not need to send a SYN packet.
We found that when vpp and the upstream servers use short connections, the
performance is lower than the nginx proxy using the kernel host stack. The picture
shows the performance of the nginx proxy using the vpp host stack.

I expected the performance of the nginx proxy using the vpp host stack to be higher
than that of the nginx proxy using the kernel host stack, so I don't understand why
it is lower here.
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-02 Thread Florin Coras
Hi, 

That indeed looks like an issue due to vpp not being able to recycle
connections fast enough. There are only 64k connections available between vpp
and the upstream server, so recycling them as fast as possible, i.e., with a 0
timeout as the kernel does after the tcp_max_tw_buckets threshold is hit, might
make it look like performance is moderately good, assuming there are fewer than
64k active connections (not closing). 

However, as explained in the previous emails, that might lead to connection 
errors (see my previous links). You could try to emulate that with vpp, by just 
setting timewait-time to 0 but the same disclaimer regarding connection errors 
holds. The only other option is to ensure vpp can allocate more connections to 
the upstream server, i.e., either more source IPs or more destination/server 
IPs.
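In startup.conf that emulation would look roughly like this (same disclaimer about
connection errors applies):

    tcp {
      timewait-time 0
    }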

Regards,
Florin 

> On May 2, 2022, at 8:33 AM, weizhen9...@163.com wrote:
> 
> Hi,
> The short link means that after the client send GET request, the client 
> send tcp FIN packet. Instead, the long link means that after the client send 
> GET request,  the client send next http GET request by using the same link 
> and don't need to send syn packet.
> We found that when vpp and the upstream servers used the short link, the 
> performance is lower than nginx proxy using kernel host stack. The picture 
> shows the performance of nginx proxy using vpp host stack.
> 
> Actually, the performance of nginx proxy using vpp host stack is higher than 
> nginx proxy using kernel host stack. I don't understand why?
> Thanks.
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-03 Thread weizhen9612
Hi,

I wanted to ask whether you have tested the performance of the nginx proxy using the
vpp host stack with short connections, i.e., after vpp sends the GET request to the
upstream server, vpp closes the connection. If yes, please tell me the result.
Thank you for your suggestion about adding multiple source IPs. But we want to
make the performance of the vpp protocol stack higher than that of the kernel
under the same conditions.

Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-03 Thread Florin Coras
Hi, 

Unfortunately, I have not, partly because I didn’t expect too much out of the
test due to the issues you’re hitting. What’s the difference between Linux and
vpp with and without tcp_max_tw_buckets? 

Regards,
Florin

> On May 3, 2022, at 3:28 AM, weizhen9...@163.com wrote:
> 
> Hi,
> 
> I wanted to ask if you have tested the performance of nginx proxy using 
> vpp host stack as a short connection, i.e. after vpp send GET request to 
> upstream server, vpp close the connection. If yes, please tell me the result. 
> Thank you for your suggestion about adding multiple source IPs. But we 
> want to make the performance of the vpp protocol stack higher than that of 
> the kernel in the same condition.
> 
> Thanks.
> 
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread weizhen9612
Hi,
When I use wrk to test the performance of the nginx proxy using the vpp host stack, I
execute the command "show session" via vppctl. The result is as follows.

The main core has most of the sessions. Is this normal? If not, what should I do?
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread Florin Coras
Hi, 

Those are half-open connects. So yes, they’re expected if nginx opens a new 
connection for each request.

Regards,
Florin

> On May 4, 2022, at 6:48 AM, weizhen9...@163.com wrote:
> 
> Hi,
> When I use wrk to test the performance of  nginx proxy using vpp host 
> stack, I execute the command "show session" by vppctl. The result is 
> following.
> 
> The main core has most of sessions. Is this normal? If not, what should I DO?
> Thanks.
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread weizhen9612
Hi,
Is this the reason for the low performance? In general, the main thread
handles management functions (debug CLI, API, stats collection) and one or more
worker threads handle packet processing from input to output. Why does the main
core handle these sessions? Does this condition influence the performance? If yes,
what should I do?
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread Florin Coras
Hi, 

That shouldn’t be the issue. Half-opens are on main because connection
establishment needs locks before it sends out a SYN packet. Handshakes are not
completed on main but on workers. VPP with one worker + main should be able to
handle 100k+ CPS with warmed-up pools. 

Long term we’ll switch from main to first worker for SYNs but again, that’s not
the thing that limits performance in your case. Instead, it’s probably the
number of ports. You should be able to confirm that by testing with multiple
source IPs. 

Regards,
Florin

> On May 4, 2022, at 8:00 AM, weizhen9...@163.com wrote:
> 
> Hi,
>Is this the reason for the low performance? In general, the main threads 
> handles management functions(debug CLI, API, stats collection) and one or 
> more worker threads handle packet processing from input to output of the 
> packet. Why does the main core handle the session? Does the condition 
> influence the performance? If yes, what should I do?
> Thanks. 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread weizhen9612
According to your suggestion, I tested with multiple source IPs, but the
performance is still low.

The IPs are as follows.

vpp# tcp src-address 192.168.6.6-192.168.6.9
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread Florin Coras
What’s the result prior to multiple addresses? Also, can you give it the whole
/24? No need to configure the IPs, just: tcp src-address
192.168.6.6-192.168.6.250

Forgot to ask before but is the server that’s being proxied for handling the 
load? It will also need to accept a lot of connections. 

Regards,
Florin

> On May 4, 2022, at 8:35 AM, weizhen9...@163.com wrote:
> 
> According to your suggestion, I test with multiple source ips. But the 
> performance is still low.
> 
> The ip is as follows.
> 
> vpp#tcp src-address 192.168.6.6-192.168.6.9
> Thanks.
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread Florin Coras
As mentioned previously, is the upstream server handling the load? Do you see 
drops between vpp and the upstream server? 

Regards,
Florin

> On May 4, 2022, at 9:10 AM, weizhen9...@163.com wrote:
> 
> Hi,
>According to your suggestion, I config the src-address.
> 
> 
> But the performance is lower than that before.
> 
> 
> 
> Thanks.
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread weizhen9612
Hi,
I tested the performance of the upstream server.

As you can see, the performance of the upstream server is much higher than that of
the vpp proxy. In addition, I don't see any drops.
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread Florin Coras
Next step then. What are segment-size and add-segment-size in vcl.conf? Could you
set them to something large like 40? Also event-queue-size 100,
just to make sure mq and fifo segments are not a limiting factor. In vpp, under the
session stanza, set event-queue-length 20. 

Try also to run the test twice to make sure the issue is not pool warmup. 
Finally, if perf doesn’t improve, before the test do "clear error" and after the test
"show error" and let’s see if there’s something there. 

Regards,
Florin 

> On May 4, 2022, at 6:25 PM, weizhen9...@163.com wrote:
> 
> Hi,
> I test the performance of upstream server.
> 
> Just as you see, the performance of upstream is more higher than vpp proxy. 
> In addition, I don't find any drops.
> Thanks.
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread 汪翰林
You can try to set long downstream and upstream connections in the nginx
configuration like this:

http {
    ...
    keepalive_timeout 65;
    keepalive_requests 100;

    upstream backend {
        ...
        keepalive 3;
    }
}
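For the upstream keepalive to actually be used, the proxied location (inside the
server block) usually also needs HTTP/1.1 and an empty Connection header; this is
standard nginx behavior, not specific to vpp:

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }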


Regards,
Hanlin
汪翰林
hanlin_w...@163.com
On 5/5/2022 09:29, wrote:
Hi,
I test the performance of upstream server.

Just as you see, the performance of upstream is more higher than vpp proxy. In 
addition, I don't find any drops.
Thanks.



Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread weizhen9612
Hi,
When I set long connections, the performance of the vpp proxy is higher than
before. But we need short connections between vpp and the upstream server.
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-04 Thread 汪翰林
Can you check whether patch [1] has been merged?


[1] https://gerrit.fd.io/r/c/vpp/+/33496 


Regards,
Hanlin


汪翰林
hanlin_w...@163.com
On 5/5/2022 10:51, wrote:
Hi,
   When I set a long connection, the performance of vpp proxy is higher than 
before. But we need to set a short connection between vpp and upstream server.
Thanks.



Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread weizhen9612
Hi,
As you can see, I checked the patch.

Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread weizhen9612
Hi,
Now I configure main-core to 2 and corelist-workers to 0, and find that the
performance has improved significantly.

When I execute the following command, I find that vpp has only the main thread.
#show threads

What does this situation indicate?
Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread liuyacan







Hi Florin, weizhen9612:

I'm not sure whether the rpc for connects will be executed immediately by the
main thread in the current implementation, or whether it will wait for the
epoll_pwait in linux_epoll_input_inline() to time out.

Regards,
yacan

On 5/5/2022 16:19, wrote:

Hi,
Now I configure main-core to 2 and corelist-workers to 0, and find that the
performance has improved significantly.
When I execute the following command, I find that vpp has only the main thread.
#show threads
What does this situation show?
Thanks.









Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread Florin Coras
Hi, 

Make sure that nginx and vpp are on the same NUMA node, in case you run on a
multi-socket host. Check that with lscpu and “show hardware verbose”. Also make
sure that nginx's and vpp's CPUs don’t overlap, i.e., run nginx with taskset. 
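For example (core numbers are made up; pick cores from lscpu that do not overlap
vpp's main-core/workers):

    taskset -c 4-7 nginx -c /etc/nginx/nginx.conf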

Regarding the details of your change, normally we recommend not to use workers
on core 0, so I'm not entirely sure what happens there. 

Regards,
Florin

> On May 5, 2022, at 1:19 AM, weizhen9...@163.com wrote:
> 
> Hi,
>Now I configure main-core to 2 and corelist-workers to 0, and find that 
> the performance has improved significantly.
> 
> When I execute the following conmand, I find that vpp have main thread only.
> #show threads
> 
> 
> What does this situation show?
> Thanks. 
> 
> 
> 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread Florin Coras
Hi Yacan, 

Currently rpcs from first worker to main are done through session layer and are 
processed by main in batches. Session queue node runs on main in interrupt mode 
so first worker will set an interrupt when the list of pending connects goes 
non-empty and main will switch to polling in the rpc handler if it notices it 
can’t handle pending connects in one dispatch. So, the first connect might be 
affected by main sleeping in epoll_pwait but subsequent connects should not, 
assuming we get a constant stream of connects.

To test that, weizhen9612, try adding to the session stanza in startup.conf: session
{ poll-main }. That should avoid main sleeping in epoll_pwait.  
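i.e., something along these lines in startup.conf (other session parameters omitted):

    session {
      poll-main
    }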

Eventually, we’ll get to a point where first worker will execute the connects. 
Part of the changes needed are in, i.e., session pools are now realloced with 
barrier, but more improvements are needed.   

Regards,
Florin

> On May 5, 2022, at 5:01 AM, liuyacan  wrote:
> 
> Hi Florin, weizhen9612:
> 
>  I'm not sure whether rpc for connects will be executed immediately by 
> the main thread in the current implementation, or will it wait for the 
> epoll_pwait in linux_epoll_input_inline() to time out.
> 
> Regards,
> yacan
> On 5/5/2022 16:19,  wrote: 
> Hi,
>Now I configure main-core to 2 and corelist-workers to 0, and find that 
> the performance has improved significantly.
> 
> When I execute the following conmand, I find that vpp have main thread only.
> #show threads
> 
> 
> What does this situation show?
> Thanks. 





Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread weizhen9612
Hi,
We have only one NUMA node, as the following output shows.

vpp# sh hardware-interfaces verbose
Name                Idx   Link  Hardware
ens1f0                             1     up   ens1f0
Link speed: 10 Gbps
RX Queues:
queue thread         mode
0     main (0)       polling
1     main (0)       polling
Ethernet address 00:13:95:0a:58:03
Intel 82599
carrier up full duplex max-frame-size 2056
flags: admin-up intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 2 (max 128), desc 512 (min 32 max 4096 align 8)
tx: queues 2 (max 64), desc 512 (min 32 max 4096 align 8)
pci: device 8086:10fb subsystem : address :09:00.00 numa 0
max rx packet len: 15872
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
macsec-strip vlan-filter vlan-extend scatter security
keep-crc rss-hash
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
tcp-tso macsec-insert multi-segs security
tx offload active: none
rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
ipv6-udp ipv6-ex ipv6
rss active:        ipv4-tcp ipv4-udp ipv4
tx burst function: ixgbe_recv_scattered_pkts_vec
rx burst function: (not available)

rx frames ok                                           5
rx bytes ok                                          415
extended stats:
rx_good_packets                                      5
rx_good_bytes                                      415
rx_q0_packets                                        5
rx_q0_bytes                                        415
mac_remote_errors                                    1
rx_size_65_to_127_packets                            5
rx_multicast_packets                                 5
rx_total_packets                                     5
rx_total_bytes                                     415
ens1f1                             2     up   ens1f1
Link speed: 10 Gbps
RX Queues:
queue thread         mode
0     main (0)       polling
1     main (0)       polling
Ethernet address 00:13:95:0a:58:04
Intel 82599
carrier up full duplex max-frame-size 2056
flags: admin-up intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 2 (max 128), desc 512 (min 32 max 4096 align 8)
tx: queues 2 (max 64), desc 512 (min 32 max 4096 align 8)
pci: device 8086:10fb subsystem : address :09:00.01 numa 0
max rx packet len: 15872
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
macsec-strip vlan-filter vlan-extend scatter security
keep-crc rss-hash
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
tcp-tso macsec-insert multi-segs security
tx offload active: none
rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
ipv6-udp ipv6-ex ipv6
rss active:        ipv4-tcp ipv4-udp ipv4
tx burst function: ixgbe_recv_scattered_pkts_vec
rx burst function: (not available)

rx frames ok                                           5
rx bytes ok                                          415
extended stats:
rx_good_packets                                      5
rx_good_bytes                                      415
rx_q0_packets                                        5
rx_q0_bytes                                        415
mac_local_errors                                    28
mac_remote_errors                                    1
rx_size_65_to_127_packets                            5
rx_multicast_packets                                 5
rx_total_packets                                     5
rx_total_bytes                                     415
local0                             0    down  local0
Link speed: unknown
local
In addition, I don't run a worker on core 0. Rather, I don't configure any workers
at all, so vpp has only one thread (main), as the following picture shows.

Thanks.




Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-05 Thread Florin Coras
Hi, 

That’s also a source for slowdowns. Try configuring one worker. 
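A minimal cpu stanza for that in startup.conf (core numbers illustrative):

    cpu {
      main-core 1
      corelist-workers 2
    }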

Regards,
Florin

> On May 5, 2022, at 6:40 PM, weizhen9...@163.com wrote:
> 
> Hi,
>We have only one numa node. Just as the following picture show.
> 
> vpp# sh hardware-interfaces verbose
> [output identical to the listing above, omitted]
> In addition, I don't use worker on core 0. Instead, I don't config the 
> worker. So the vpp have only one thread. Just as the following picture show.
> 
> 
> Thanks.
> 
> 
> 



Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-06 Thread liuyacan







Hi Florin,

Thanks for the clear explanation! I do remember that I consulted with you about
this procedure and that it has been improved before, but I failed to find the
patch two days ago.

Regards,
yacan
On 5/6/2022 01:12, Florin Coras wrote:


> [Florin's reply above quoted in full]







Re: [vpp-dev] test performance of nginx using vpp host stack#vpp-hoststack

2022-05-07 Thread weizhen9612
Hi,
I tried adding session { poll-main } to the session stanza in startup.conf, but the
performance problem still exists.
Thanks.
