Re: [vpp-dev] VPP Stateful NAT64 crashes with segmentation fault

2022-11-08 Thread Gabor LENCSE

Dear Filip,

Thank you very much for your prompt reply!

I have attached the startup.conf files for the case with 2 workers 
(startup.conf-mc1wc02) and for the case when I used only the main core 
(startup.conf-mc0). In both cases, 4 CPU cores (0-3) were enabled and 
cores 0-2 were excluded from the Linux scheduler using the "maxcpus=4" 
and "isolcpus=0-2" kernel command line parameters, respectively.


I am new to FD.io VPP. Could you please advise me whether there are pre-built 
debug packages available for Debian 10 and, if so, where I can find them?


If I need to compile them myself, could you please give me a pointer on how to do it?
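Building debug packages from the VPP source tree usually looks roughly like the following. This is a sketch assuming the standard make targets; the exact branch and target names may differ for the 22.06 tree:

git clone https://gerrit.fd.io/r/vpp
cd vpp
git checkout stable/2206      # assumed branch name for the 22.06 release
make install-dep              # install build dependencies
make build                    # debug build (binaries under build-root)
make pkg-deb-debug            # debug .deb packages, if this target exists in the branch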


I am currently using servers in NICT StarBED, Japan. This is a test-bed 
environment, and I can download packages only through an HTTP or FTP proxy. 
(Or I can download them on my Windows laptop and upload them through a 
gateway.)


Thank you very much in advance!

Best regards,

Gábor

On 11/9/2022 4:04 PM, filvarga wrote:

Hi Gabor,

I will look into it and get back to you. Meanwhile, could you run the 
same test with a debug build and post the results? Maybe even a core 
dump. Also, please post your startup.conf file.


Best regards,
Filip Varga


On Wed, 9 Nov 2022 at 7:50, Gabor LENCSE wrote:

Dear VPP Developers,

I am a researcher and I would like to benchmark the performance of
the stateful NAT64 implementation of FD.io VPP.

Unfortunately, VPP crashed with a segmentation fault.

Some details:

I used two Dell PowerEdge R430 servers as the Tester and the DUT
(Device Under Test), two 10GbE interfaces of which were
interconnected by direct cables. On the DUT, I used Debian Linux
10.13 with 4.19.0-20-amd64 kernel and the version of FD.io VPP was
22.06. The following packages were installed: libvppinfra, vpp,
vpp-plugin-core, vpp-plugin-dpdk.

I used the following commands to set up Stateful NAT64:

root@p109:~/DUT-settings# cat set-vpp
vppctl set interface state TenGigabitEthernet5/0/0 up
vppctl set interface state TenGigabitEthernet5/0/1 up
vppctl set interface ip address TenGigabitEthernet5/0/0 2001:2::1/64
vppctl set interface ip address TenGigabitEthernet5/0/1
198.19.0.1/24 
vppctl ip route add 2001:2::/64 via 2001:2::1 TenGigabitEthernet5/0/0
vppctl ip route add 198.19.0.0/24  via
198.19.0.1 TenGigabitEthernet5/0/1
vppctl set ip neighbor static TenGigabitEthernet5/0/0 2001:2::2
a0:36:9f:74:73:64
vppctl set ip neighbor static TenGigabitEthernet5/0/1 198.19.0.2
a0:36:9f:74:73:66
vppctl set interface nat64 in TenGigabitEthernet5/0/0
vppctl set interface nat64 out TenGigabitEthernet5/0/1
vppctl nat64 add prefix 64:ff9b::/96
vppctl nat64 add pool address 198.19.0.1

As for VPP, first I used two workers, but then I also tried
without workers, using only the main core. Unfortunately, VPP
crashed in both cases, but with somewhat different messages in the
syslog. (Previously I had tested both setups with IPv6 packet
forwarding and they worked with excellent performance.)

The error messages in the syslog when I used two workers:

Nov  7 16:32:02 p109 vnet[2479]: received signal SIGSEGV, PC
0x7fa86f138168, faulting address 0x4f8
Nov  7 16:32:02 p109 vnet[2479]: #0  0x7fa8b2158137 0x7fa8b2158137
Nov  7 16:32:02 p109 vnet[2479]: #1  0x7fa8b2086730 0x7fa8b2086730
Nov  7 16:32:02 p109 vnet[2479]: #2  0x7fa86f138168 0x7fa86f138168
Nov  7 16:32:02 p109 vnet[2479]: #3  0x7fa86f11d228 0x7fa86f11d228
Nov  7 16:32:02 p109 vnet[2479]: #4  0x7fa8b20fbe62 0x7fa8b20fbe62
Nov  7 16:32:02 p109 vnet[2479]: #5  0x7fa8b20fda4f
vlib_worker_loop + 0x5ff
Nov  7 16:32:02 p109 vnet[2479]: #6  0x7fa8b2135e79
vlib_worker_thread_fn + 0xa9
Nov  7 16:32:02 p109 vnet[2479]: #7  0x7fa8b2135290
vlib_worker_thread_bootstrap_fn + 0x50
Nov  7 16:32:02 p109 vnet[2479]: #8  0x7fa8b207bfa3
start_thread + 0xf3
Nov  7 16:32:02 p109 vnet[2479]: #9  0x7fa8b1d75eff clone + 0x3f
Nov  7 16:32:02 p109 systemd[1]: vpp.service: Main process exited,
code=killed, status=6/ABRT

The error messages in the syslog when I used only the main core:

Nov  7 16:48:57 p109 vnet[2606]: received signal SIGSEGV, PC
0x7fbe1d24a168, faulting address 0x1a8
Nov  7 16:48:57 p109 vnet[2606]: #0  0x7fbe6026a137 0x7fbe6026a137
Nov  7 16:48:57 p109 vnet[2606]: #1  0x7fbe60198730 0x7fbe60198730
Nov  7 16:48:57 p109 vnet[2606]: #2  0x7fbe1d24a168 0x7fbe1d24a168
Nov  7 16:48:57 p109 vnet[2606]: #3  0x7fbe1d22f228 0x7fbe1d22f228
Nov  7 16:48:57 p109 vnet[2606]: #4  0x7fbe6020de62 0x7fbe6020de62
Nov  7 16:48:57 p109 vnet[2606]: #5  0x7fbe602127d1 vlib_main
+ 0xd41
Nov  7 16:48:57 p109 vnet[2606]: #6  0x7fbe6026906a 0x7fbe6026906a
Nov  7 16:48:57 p109 vnet[2606]: #7  0x7fbe60169964 0x7fbe60169964

Re: [vpp-dev] VPP Stateful NAT64 crashes with segmentation fault

2022-11-08 Thread filvarga
Hi Gabor,

I will look into it and get back to you. Meanwhile, could you run the same
test with a debug build and post the results? Maybe even a core dump. Also,
please post your startup.conf file.
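For reference, getting a usable core dump out of VPP typically involves enabling core dumps in the unix section of startup.conf and then opening the core file with gdb against the debug build. A rough sketch (option names assumed from current documentation, not verified against 22.06; the core file path depends on the system's core_pattern):

unix {
  full-coredump
  coredump-size unlimited
}

# after a crash, against the debug build:
gdb /usr/bin/vpp /path/to/core-file
(gdb) backtrace full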

Best regards,
Filip Varga


On Wed, 9 Nov 2022 at 7:50, Gabor LENCSE wrote:

> Dear VPP Developers,
>
> I am a researcher and I would like to benchmark the performance of the
> stateful NAT64 implementation of FD.io VPP.
>
> Unfortunately, VPP crashed with a segmentation fault.
>
> Some details:
>
> I used two Dell PowerEdge R430 servers as the Tester and the DUT (Device
> Under Test), two 10GbE interfaces of which were interconnected by direct
> cables. On the DUT, I used Debian Linux 10.13 with 4.19.0-20-amd64 kernel
> and the version of FD.io VPP was 22.06. The following packages were
> installed: libvppinfra, vpp, vpp-plugin-core, vpp-plugin-dpdk.
>
> I used the following commands to set up Stateful NAT64:
>
> root@p109:~/DUT-settings# cat set-vpp
> vppctl set interface state TenGigabitEthernet5/0/0 up
> vppctl set interface state TenGigabitEthernet5/0/1 up
> vppctl set interface ip address TenGigabitEthernet5/0/0 2001:2::1/64
> vppctl set interface ip address TenGigabitEthernet5/0/1 198.19.0.1/24
> vppctl ip route add 2001:2::/64 via 2001:2::1 TenGigabitEthernet5/0/0
> vppctl ip route add 198.19.0.0/24 via 198.19.0.1 TenGigabitEthernet5/0/1
> vppctl set ip neighbor static TenGigabitEthernet5/0/0 2001:2::2
> a0:36:9f:74:73:64
> vppctl set ip neighbor static TenGigabitEthernet5/0/1 198.19.0.2
> a0:36:9f:74:73:66
> vppctl set interface nat64 in TenGigabitEthernet5/0/0
> vppctl set interface nat64 out TenGigabitEthernet5/0/1
> vppctl nat64 add prefix 64:ff9b::/96
> vppctl nat64 add pool address 198.19.0.1
>
> As for VPP, first I used two workers, but then I also tried without
> workers, using only the main core. Unfortunately, VPP crashed in both
> cases, but with somewhat different messages in the syslog. (Previously I
> had tested both setups with IPv6 packet forwarding and they worked with
> excellent performance.)
>
> The error messages in the syslog when I used two workers:
>
> Nov  7 16:32:02 p109 vnet[2479]: received signal SIGSEGV, PC
> 0x7fa86f138168, faulting address 0x4f8
> Nov  7 16:32:02 p109 vnet[2479]: #0  0x7fa8b2158137 0x7fa8b2158137
> Nov  7 16:32:02 p109 vnet[2479]: #1  0x7fa8b2086730 0x7fa8b2086730
> Nov  7 16:32:02 p109 vnet[2479]: #2  0x7fa86f138168 0x7fa86f138168
> Nov  7 16:32:02 p109 vnet[2479]: #3  0x7fa86f11d228 0x7fa86f11d228
> Nov  7 16:32:02 p109 vnet[2479]: #4  0x7fa8b20fbe62 0x7fa8b20fbe62
> Nov  7 16:32:02 p109 vnet[2479]: #5  0x7fa8b20fda4f vlib_worker_loop +
> 0x5ff
> Nov  7 16:32:02 p109 vnet[2479]: #6  0x7fa8b2135e79
> vlib_worker_thread_fn + 0xa9
> Nov  7 16:32:02 p109 vnet[2479]: #7  0x7fa8b2135290
> vlib_worker_thread_bootstrap_fn + 0x50
> Nov  7 16:32:02 p109 vnet[2479]: #8  0x7fa8b207bfa3 start_thread + 0xf3
> Nov  7 16:32:02 p109 vnet[2479]: #9  0x7fa8b1d75eff clone + 0x3f
> Nov  7 16:32:02 p109 systemd[1]: vpp.service: Main process exited,
> code=killed, status=6/ABRT
>
> The error messages in the syslog when I used only the main core:
>
> Nov  7 16:48:57 p109 vnet[2606]: received signal SIGSEGV, PC
> 0x7fbe1d24a168, faulting address 0x1a8
> Nov  7 16:48:57 p109 vnet[2606]: #0  0x7fbe6026a137 0x7fbe6026a137
> Nov  7 16:48:57 p109 vnet[2606]: #1  0x7fbe60198730 0x7fbe60198730
> Nov  7 16:48:57 p109 vnet[2606]: #2  0x7fbe1d24a168 0x7fbe1d24a168
> Nov  7 16:48:57 p109 vnet[2606]: #3  0x7fbe1d22f228 0x7fbe1d22f228
> Nov  7 16:48:57 p109 vnet[2606]: #4  0x7fbe6020de62 0x7fbe6020de62
> Nov  7 16:48:57 p109 vnet[2606]: #5  0x7fbe602127d1 vlib_main + 0xd41
> Nov  7 16:48:57 p109 vnet[2606]: #6  0x7fbe6026906a 0x7fbe6026906a
> Nov  7 16:48:57 p109 vnet[2606]: #7  0x7fbe60169964 0x7fbe60169964
> Nov  7 16:48:57 p109 systemd[1]: vpp.service: Main process exited,
> code=killed, status=6/ABRT
>
> Since I started with quite a high load the first time, I suspected that I
> had exhausted some resource, so I tried a much lower load, but the same
> thing happened even when I sent only a single packet.
>
> I used siitperf as Tester: https://github.com/lencsegabor/siitperf
>
> And I followed this methodology:
> https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-benchmarking-stateful
>
> Previously my tests were successful with the following stateful NAT64
> implementations:
> - Jool
> - tayga+iptables
> - OpenBSD PF
>
> Could you please help me find out why VPP crashes, and how I could make it work?
>
> Thank you very much for your help in advance!
>
> Best regards,
>
> Gábor Lencse


[vpp-dev] VPP Stateful NAT64 crashes with segmentation fault

2022-11-08 Thread Gabor LENCSE

Dear VPP Developers,

I am a researcher and I would like to benchmark the performance of the 
stateful NAT64 implementation of FD.io VPP.


Unfortunately, VPP crashed with a segmentation fault.

Some details:

I used two Dell PowerEdge R430 servers as the Tester and the DUT (Device 
Under Test), two 10GbE interfaces of which were interconnected by direct 
cables. On the DUT, I used Debian Linux 10.13 with 4.19.0-20-amd64 
kernel and the version of FD.io VPP was 22.06. The following packages 
were installed: libvppinfra, vpp, vpp-plugin-core, vpp-plugin-dpdk.


I used the following commands to set up Stateful NAT64:

root@p109:~/DUT-settings# cat set-vpp
vppctl set interface state TenGigabitEthernet5/0/0 up
vppctl set interface state TenGigabitEthernet5/0/1 up
vppctl set interface ip address TenGigabitEthernet5/0/0 2001:2::1/64
vppctl set interface ip address TenGigabitEthernet5/0/1 198.19.0.1/24
vppctl ip route add 2001:2::/64 via 2001:2::1 TenGigabitEthernet5/0/0
vppctl ip route add 198.19.0.0/24 via 198.19.0.1 TenGigabitEthernet5/0/1
vppctl set ip neighbor static TenGigabitEthernet5/0/0 2001:2::2 
a0:36:9f:74:73:64
vppctl set ip neighbor static TenGigabitEthernet5/0/1 198.19.0.2 
a0:36:9f:74:73:66

vppctl set interface nat64 in TenGigabitEthernet5/0/0
vppctl set interface nat64 out TenGigabitEthernet5/0/1
vppctl nat64 add prefix 64:ff9b::/96
vppctl nat64 add pool address 198.19.0.1

As for VPP, first I used two workers, but then I also tried without 
workers, using only the main core. Unfortunately, VPP crashed in both 
cases, but with somewhat different messages in the syslog. (Previously I 
had tested both setups with IPv6 packet forwarding and they worked with 
excellent performance.)


The error messages in the syslog when I used two workers:

Nov  7 16:32:02 p109 vnet[2479]: received signal SIGSEGV, PC 
0x7fa86f138168, faulting address 0x4f8

Nov  7 16:32:02 p109 vnet[2479]: #0  0x7fa8b2158137 0x7fa8b2158137
Nov  7 16:32:02 p109 vnet[2479]: #1  0x7fa8b2086730 0x7fa8b2086730
Nov  7 16:32:02 p109 vnet[2479]: #2  0x7fa86f138168 0x7fa86f138168
Nov  7 16:32:02 p109 vnet[2479]: #3  0x7fa86f11d228 0x7fa86f11d228
Nov  7 16:32:02 p109 vnet[2479]: #4  0x7fa8b20fbe62 0x7fa8b20fbe62
Nov  7 16:32:02 p109 vnet[2479]: #5  0x7fa8b20fda4f vlib_worker_loop 
+ 0x5ff
Nov  7 16:32:02 p109 vnet[2479]: #6  0x7fa8b2135e79 
vlib_worker_thread_fn + 0xa9
Nov  7 16:32:02 p109 vnet[2479]: #7  0x7fa8b2135290 
vlib_worker_thread_bootstrap_fn + 0x50

Nov  7 16:32:02 p109 vnet[2479]: #8  0x7fa8b207bfa3 start_thread + 0xf3
Nov  7 16:32:02 p109 vnet[2479]: #9  0x7fa8b1d75eff clone + 0x3f
Nov  7 16:32:02 p109 systemd[1]: vpp.service: Main process exited, 
code=killed, status=6/ABRT


The error messages in the syslog when I used only the main core:

Nov  7 16:48:57 p109 vnet[2606]: received signal SIGSEGV, PC 
0x7fbe1d24a168, faulting address 0x1a8

Nov  7 16:48:57 p109 vnet[2606]: #0  0x7fbe6026a137 0x7fbe6026a137
Nov  7 16:48:57 p109 vnet[2606]: #1  0x7fbe60198730 0x7fbe60198730
Nov  7 16:48:57 p109 vnet[2606]: #2  0x7fbe1d24a168 0x7fbe1d24a168
Nov  7 16:48:57 p109 vnet[2606]: #3  0x7fbe1d22f228 0x7fbe1d22f228
Nov  7 16:48:57 p109 vnet[2606]: #4  0x7fbe6020de62 0x7fbe6020de62
Nov  7 16:48:57 p109 vnet[2606]: #5  0x7fbe602127d1 vlib_main + 0xd41
Nov  7 16:48:57 p109 vnet[2606]: #6  0x7fbe6026906a 0x7fbe6026906a
Nov  7 16:48:57 p109 vnet[2606]: #7  0x7fbe60169964 0x7fbe60169964
Nov  7 16:48:57 p109 systemd[1]: vpp.service: Main process exited, 
code=killed, status=6/ABRT


Since I started with quite a high load the first time, I suspected that I 
had exhausted some resource, so I tried a much lower load, but the same 
thing happened even when I sent only a single packet.


I used siitperf as Tester: https://github.com/lencsegabor/siitperf

And I followed this methodology: 
https://datatracker.ietf.org/doc/html/draft-ietf-bmwg-benchmarking-stateful


Previously my tests were successful with the following stateful NAT64 
implementations:

- Jool
- tayga+iptables
- OpenBSD PF

Could you please help me find out why VPP crashes, and how I could make it work?

Thank you very much for your help in advance!

Best regards,

Gábor Lencse





Re: [vpp-dev] VPP ACL IPv6 with vppctl

2022-11-08 Thread Andrew Yourtchenko
Hi Jens,

Thanks for the report!

There are two issues. First, if a parameter is omitted, it is implicitly 
initialized to 0.0.0.0/0 - as a shortcut for debugging.

So even the "working" case is not correct - you end up with an IPv6 source and 
an IPv4 destination, which may give you unpredictable results. 
https://gerrit.fd.io/r/c/vpp/+/31770 will address this by preventing the 
creation of rules whose source and destination are in different address families. 
So, long story short - it is best to specify both src and dst explicitly.
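For example, with the same syntax used in the report below, an all-IPv6 rule would spell out both sides, something along these lines (assuming src and dst can be given together, as the rule output suggests; the destination prefix here is just an illustrative value):

vpp# set acl-plugin acl permit src 2a00:c940::/32 dst 2001:db8::/32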

Which gets us to the second part of the issue - the incorrect parsing. This looks 
like a regression introduced by commit dd2f12ba, which changed the way the 
parsing of the IP prefix works: the ::/0 prefix is interpreted as the IPv4 
address 0.0.0.0, with unhappy results, because the function "ip46_address_is_ip4" 
returns true for a whole set of valid IPv6 prefixes. That resets the address 
family of the prefix to the wrong value (IPv4), which then makes the ACL code 
unhappy.
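For illustration, here is a simplified, from-memory reconstruction of the ip46_address_t layout and the ip46_address_is_ip4() check (field and type names are approximate, not copied from the tree). It shows why :: - and in fact any IPv6 prefix whose upper 96 bits are zero - is classified as IPv4:

/* Simplified sketch, not the actual VPP source. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint8_t as_u8[4]; }  ip4_address_t;
typedef struct { uint8_t as_u8[16]; } ip6_address_t;

typedef union
{
  struct
  {
    uint32_t pad[3];       /* upper 96 bits of the IPv6 address */
    ip4_address_t ip4;     /* lower 32 bits, reused as the IPv4 address */
  };
  ip6_address_t ip6;
} ip46_address_t;

static int
ip46_address_is_ip4 (const ip46_address_t *a)
{
  /* "is IPv4" really means "the upper 96 bits are zero" */
  return (a->pad[0] | a->pad[1] | a->pad[2]) == 0;
}

int
main (void)
{
  ip46_address_t any6;
  memset (&any6, 0, sizeof (any6));   /* the IPv6 "any" address, :: */
  /* prints 1: "::" is indistinguishable from an IPv4 address in this
   * layout, which is how the prefix ends up re-typed as 0.0.0.0 */
  printf ("ip46_address_is_ip4(::) = %d\n", ip46_address_is_ip4 (&any6));
  return 0;
}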

To fix this, I made a small change, https://gerrit.fd.io/r/c/vpp/+/37602, which 
essentially restores the old behavior but without the problem that the previous 
fix was addressing. If Benoit is happy with it, it will take care of the 
incorrect parsing of an "any" prefix.

--a

> On 8 Nov 2022, at 16:50, Jens Rösiger via lists.fd.io wrote:
> 
> 
> Dear VPP Dev Team,
> 
> I found a problem with creating a VPP ACL with an IPv6 SRC/DST address. The 
> ACL is created incorrectly. The problem occurs when "any" (::/0) is used as 
> the SRC address. 
> 
> Example (correct)
> vpp# set acl-plugin acl permit src 2a00:c940::/32
> ACL index:35
> vpp# show acl-plugin acl index 35
> acl-index 35 count 1 tag {cli}
>   0: ipv6 permit src 2a00:c940::/32 dst ::/0 proto 0 sport 0-65535 
> dport 0-65535
> 
> Example (wrong): 
> vpp# set acl-plugin acl permit dst 2a00:c940::/32
> ACL index:36
> vpp# show acl-plugin acl index 36
> acl-index 36 count 1 tag {cli}
>   0: ipv4 permit src 0.0.0.0/0 dst 0.0.0.0/32 proto 0 sport 0-65535 
> dport 0-65535
> 
> ACL 36 is IPv4, not IPv6. I found no way to create an ACL with ::/0 as the 
> source (with "set acl-plugin acl").
> 
> My workaround is to use "binary-api acl_add_replace" instead of "set 
> acl-plugin acl", but I think the better way is to fix this issue.
> 
> Kind regards,
> Jens Rösiger
> 




[vpp-dev] VPP ACL IPv6 with vppctl

2022-11-08 Thread Jens Rösiger via lists.fd.io

Dear VPP Dev Team,

I found a problem with creating a VPP ACL with an IPv6 SRC/DST address. The ACL 
is created incorrectly. The problem occurs when "any" (::/0) is used as the SRC 
address. 



Example (correct)

vpp# set acl-plugin acl permit src 2a00:c940::/32
ACL index:35
vpp# show acl-plugin acl index 35
acl-index 35 count 1 tag {cli}
          0: ipv6 permit src 2a00:c940::/32 dst ::/0 proto 0 sport 0-65535 
dport 0-65535


Example (wrong): 

vpp# set acl-plugin acl permit dst 2a00:c940::/32
ACL index:36
vpp# show acl-plugin acl index 36
acl-index 36 count 1 tag {cli}
          0: ipv4 permit src 0.0.0.0/0 dst 0.0.0.0/32 proto 0 sport 0-65535 
dport 0-65535


ACL 36 is IPv4, not IPv6. I found no way to create an ACL with ::/0 as the 
source (with "set acl-plugin acl").


My workaround is to use "binary-api acl_add_replace" instead of "set acl-plugin 
acl", but I think the better way is to fix this issue.



Kind regards,
Jens Rösiger


---
Jens Rösiger                  tops.net GmbH u. Co KG
Linux Systemadministrator     Holtorfer Straße 35
                              D-53229 Bonn / Germany
E-Mail: je...@tops.net        Tel: +49 228 9771 111
www   : http://www.tops.net   Fax: +49 228 9771 199
---





[vpp-dev] "tx frame not ready " error in host-vpp1out tx

2022-11-08 Thread Guangming
Hi Mohsin,

I use a Linux veth pair interface as an af-packet interface in VPP.
After VPP has been running for some time, many "tx frame not ready" errors
appear on host-vpp1out-tx.
Once this error occurs, we cannot ping from the Linux side to the VPP side,
and VPP must be restarted to recover.
In an old mail you said that "tx frame not ready" means VPP didn't find an
empty queue slot on the Linux side. But in my environment the traffic is not
heavy. Could you give me some clues?
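For background, the error refers to the AF_PACKET TX ring that backs the host interface: each frame slot in the mmap'ed ring carries a status word, and VPP can only place a packet into a slot that the kernel has marked available again. A rough sketch of that contract, not the actual VPP code (the helper below is hypothetical):

/* Sketch of the TPACKET_V2 TX-ring ownership rule behind "tx frame not ready". */
#include <linux/if_packet.h>   /* struct tpacket2_hdr, TP_STATUS_*, TPACKET2_HDRLEN */
#include <stdint.h>
#include <string.h>

int
tx_ring_enqueue (uint8_t *ring_base, unsigned frame_size,
                 unsigned *next_frame, unsigned n_frames,
                 const void *packet, unsigned len)
{
  struct tpacket2_hdr *tph =
    (struct tpacket2_hdr *) (ring_base + (*next_frame) * frame_size);

  /* Slot still queued or being sent by the kernel: "tx frame not ready". */
  if (tph->tp_status & (TP_STATUS_SEND_REQUEST | TP_STATUS_SENDING))
    return -1;

  /* Copy the packet into the slot and hand it back to the kernel; the
   * caller later triggers transmission with sendto() on the AF_PACKET fd. */
  memcpy ((uint8_t *) tph + TPACKET2_HDRLEN - sizeof (struct sockaddr_ll),
          packet, len);
  tph->tp_len = len;
  tph->tp_status = TP_STATUS_SEND_REQUEST;

  *next_frame = (*next_frame + 1) % n_frames;
  return 0;
}

The available/request/sending/wrong counters in the host-vpp1out output further down appear to be a census of those per-slot status values, so available:0 with total:1024 would mean that no slot had been returned by the kernel at that moment.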

vpp# monitor interface host-vpp1out interval 1 count 10
rx: 34.74Kpps 294.82Mbps tx: 17.68Kpps 32.39Mbps
rx: 42.78Kpps 372.72Mbps tx: 19.97Kpps 37.45Mbps
rx: 38.29Kpps 322.97Mbps tx: 18.50Kpps 33.34Mbps
rx: 33.73Kpps 273.07Mbps tx: 16.19Kpps 35.59Mbps
rx: 37.39Kpps 317.19Mbps tx: 15.38Kpps 33.30Mbps
rx: 33.81Kpps 274.42Mbps tx: 16.21Kpps 34.01Mbps
rx: 39.40Kpps 339.30Mbps tx: 16.12Kpps 33.15Mbps
rx: 37.98Kpps 330.83Mbps tx: 15.55Kpps 31.44Mbps
rx: 39.59Kpps 340.14Mbps tx: 16.37Kpps 36.92Mbps
rx: 39.44Kpps 340.84Mbps tx: 15.48Kpps 35.64Mbps

linux side
vpp1host: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1600
inet 10.155.32.76  netmask 255.0.0.0  broadcast 10.255.255.255
ether e2:27:5e:cb:38:6b  txqueuelen 1000  (Ethernet)
RX packets 375729479  bytes 105801826082 (98.5 GiB)
RX errors 0  dropped 21875  overruns 0  frame 0
TX packets 829224788  bytes 867242223136 (807.6 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vpp1out: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1600
ether 2e:1d:20:a2:bd:9c  txqueuelen 1000  (Ethernet)
RX packets 829224804  bytes 867242242079 (807.6 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 375729488  bytes 105801827209 (98.5 GiB)
TX errors 0  dropped 10937 overruns 0  carrier 0  collisions 0

VPP side
vpp# show interface addr 
TenGigabitEthernet19/0/0 (up):
  L3 10.155.32.6/27
TenGigabitEthernet19/0/2 (up):
  L3 10.155.32.68/29
host-vpp1out (up):
  L3 10.155.32.75/29
local0 (dn):


host-vpp1out   3 up   host-vpp1out
  Link speed: unknown
  RX Queues:
queue thread mode  
0 vpp_wk_0 (1)   interrupt 
  Ethernet address 02:fe:06:f0:ef:d6
  Linux PACKET socket interface
  block:10485760 frame:10240
  next frame:867
  available:0 request:0 sending:0 wrong:1024 total:1024

Thanks 
Guangming




zhangguangm...@baicells.com
