Re: [vpp-dev] [VCL] Memory access error for different size of mutex with different glibc versions in VPP and VCL app

2020-12-07 Thread Florin Coras
Hi Hanlin, 

You mean pthread_cond_wait doesn’t “fix” the mutex after the peer dies? I guess 
we could try to fix that but what’s the behavior of the app/vpp once that 
happens? The fact that the mutex is unusable after vcl dies should not matter 
much as vpp should forcefully cleanup/detach the app once it detects the crash 
(the socket api should be pretty fast). 

Also, how do the apps/vpp end up in this situation? Are they killed or do they 
crash? Obviously, the latter would be a problem … 
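
For reference, here is a minimal sketch of how a robust, process-shared pthread 
mutex recovers from a dead owner (purely illustrative, not the actual svm_queue_t 
code; the helper names are made up and error handling is trimmed):

  #include <pthread.h>
  #include <errno.h>

  /* Robust, process-shared mutex placed in shared memory.  If the owner
   * dies while holding it, the next locker gets EOWNERDEAD and can repair
   * the mutex with pthread_mutex_consistent(). */
  static void
  init_robust_mutex (pthread_mutex_t * m)
  {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init (&attr);
    pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust (&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init (m, &attr);
    pthread_mutexattr_destroy (&attr);
  }

  static int
  lock_robust_mutex (pthread_mutex_t * m)
  {
    int rv = pthread_mutex_lock (m);
    if (rv == EOWNERDEAD)
      {
        /* Previous owner died while holding the lock; the protected state
         * may need repair before the mutex is marked consistent again. */
        pthread_mutex_consistent (m);
        rv = 0;
      }
    return rv;
  }

Note there is no equivalent "robust" attribute for condition variables, which 
matches Hanlin's observation that pthread_cond_wait is not robust in the same way.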

Regards,
Florin

> On Dec 7, 2020, at 10:36 PM, wanghanlin  wrote:
> 
> Hi Florin,
> Same problem for pthread_cond_wait of svm_queue_t, but pthread_cond_wait 
> seems not robust like pthread_mutex_lock.
> 
> Regards,
> Hanlin 
> 
> wanghanlin
> wanghan...@corp.netease.com
> On 12/1/2020 00:10, Florin Coras wrote: 
> Hi Hanlin, 
> 
> And here’s the patch [1].
> 
> Regards,
> Florin
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/30185 
> 
> 
>> On Nov 27, 2020, at 9:07 AM, Florin Coras via lists.fd.io wrote:
>> 
>> Hi Hanlin, 
>> 
>> Good point! I actually have that on my todo list to see how/if it affects 
>> performance. The other way around we should be somewhat safer after the 
>> switch to the new socket api (instead of the binary api with socket 
>> transport). That is, vpp detects if an app goes down almost instantly. 
>> 
>> Regards,
>> Florin
>> 
>>> On Nov 26, 2020, at 11:55 PM, wanghanlin wrote:
>>> 
>>> Hi Florin,
>>> I have another problem related to this: if VPP crashes while holding a mutex 
>>> in svm_queue_t, then VCL hangs when locking this mutex, and vice versa.
>>> Should we use pthread_mutexattr_setrobust_np to solve this problem?
>>> 
>>> Regards,
>>> Hanlin
>>> 
>>> wanghanlin
>>> wanghan...@corp.netease.com
>>> On 3/24/2020 19:56, Dave Barach (dbarach) wrote: 
>>> This limitation should come as no surprise, and it’s hardly a “big” 
>>> limitation.
>>>
>>> Options include building container images which match the host distro, or 
>>> using a vpp snap image on the host which corresponds to the container 
>>> images.
>>>
>>> Given that there are two ways to deal with the situation, pick your 
>>> favorite and move on.
>>>
>>> From: vpp-dev@lists.fd.io On Behalf Of wanghanlin
>>> Sent: Monday, March 23, 2020 10:16 PM
>>> To: fcoras.li...@gmail.com 
>>> Cc: vpp-dev@lists.fd.io 
>>> Subject: Re: [vpp-dev] [VCL] Memory access error for different size of 
>>> mutex with different glibc versions in VPP and VCL app
>>>
>>> Hi Florin,
>>> It's not only about compiling with the same glibc version, but also about 
>>> running with the same glibc version, because libpthread is dynamically linked 
>>> into both VCL and VPP.
>>> This is really a big limitation.
>>>
>>> Regards,
>>> Hanlin
>>>
>>> wanghanlin
>>> wanghan...@corp.netease.com
>>> On 3/23/2020 23:31, Florin Coras wrote:
>>> Hi Hanlin, 
>>>
>>> Unfortunately, you’ll have to make sure all code has been compiled with the 
>>> same glibc version. I’ve heard that glibc changed in ubuntu 20.04 but I 
>>> haven’t done any testing with it yet. 
>>>
>>> Note that the binary api also makes use of svm_queue_t. 
>>>
>>> Regards,
>>> Florin
>>> 
>>> 
>>> On Mar 22, 2020, at 10:49 PM, wanghanlin wrote:
>>>
>>> Hi All,
>>> Now, the VCL app and VPP share some data structures, such as svm_queue_t. In 
>>> svm_queue_t, there are mutex and condvar variables whose layout depends on 
>>> the glibc version. 
>>> When VPP runs on the host and the VCL app runs in a docker container, the glibc 
>>> versions may differ between VPP and the VCL app, which then results in memory 
>>> access errors due to the different sizes of the mutex and condvar.
>>> Has anyone noticed this?
>>>
>>> Regards,
>>> Hanlin
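
A quick, illustrative way to check whether the host and container actually 
disagree on these layouts is to compile and run a tiny program (not part of VPP) 
in both environments and compare the output:

  #include <pthread.h>
  #include <stdio.h>
  #include <gnu/libc-version.h>

  /* Print the glibc version and the sizes of the pthread objects that are
   * shared through svm_queue_t; run on the host and inside the container. */
  int
  main (void)
  {
    printf ("glibc %s: sizeof(pthread_mutex_t)=%zu sizeof(pthread_cond_t)=%zu\n",
            gnu_get_libc_version (), sizeof (pthread_mutex_t),
            sizeof (pthread_cond_t));
    return 0;
  }

If the two runs print different sizes or glibc versions, the shared svm_queue_t 
layout cannot be assumed to match.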

Re: [vpp-dev] why not VPP wireguard add peer: input error ?

2020-12-07 Thread li.xia
OK, thanks Ben!


li.xia
lxlee_li...@163.com
On 12/7/2020 20:40, Benoit Ganne (bganne) wrote:
Hi,

The doc is actually wrong, you should use 'port' instead of 'dst-port':
vpp# wireguard peer add wg0 public-key 
3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg= endpoint 172.20.20.2 allowed-ip 
10.100.0.0/24 port 8001 persistent-keepalive 25

Best
ben

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of li.xia
Sent: Monday, 7 December 2020 11:09
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] why not VPP wireguard add peer: input error ?

Hi, vpp-dev


I ran a test of VPP's wireguard VPN with version 20.09, and I found the error below:
"wireguard peer add: Input error"

My topology diagram is below.


1. Ping from srv1 (or srv2) to the peer is OK, via 172.20.20.x in vppctl.
2. The VPP srv1 wireguard create command is like below:

vpp# wireguard create listen-port 8001 private-key
MA4By9Ymc38GoI1zaHq+Ruv7qN58ngAPVOKD5YiTMVo= src 172.20.20.1
vpp# show wireguard interface
[0] wg0 src:172.20.20.1 port:8001 private-
key:MA4By9Ymc38GoI1zaHq+Ruv7qN58ngAPVOKD5YiTMVo=
300e01cbd626737f06a08d73687abe46ebfba8de7c9e000f54e283e58893315a public-
key:3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg=
df17a1f2208ff274934f41ca18f8d039bfe94172149dc0bc4def28c825387d58 mac-key:
075349f1b6956d834555c5384bbdbcdbaf6d2aba8f0e8aea249f9b84e1b15d20
vpp# wireguard peer add wg0 public-key
3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg= endpoint 172.20.20.2 allowed-
ip 10.100.0.0/24 dst-port 8001  persistent-keepalive 25
wireguard peer add: Input error



3. On VPP srv2 the same error happened.
4. Both VPP instances run on Ubuntu 18.04 in VMware 15.


Is there something wrong with my configuration?


Additional VPP log:


vpp# sh log
2020/12/07 01:32:07:366 notice plugin/loadLoaded plugin:
abf_plugin.so (Access Control List (ACL) Based Forwarding)
2020/12/07 01:32:07:367 notice plugin/loadLoaded plugin:
acl_plugin.so (Access Control Lists (ACL))
2020/12/07 01:32:07:367 notice plugin/loadLoaded plugin:
adl_plugin.so (Allow/deny list plugin)
2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device Driver)
2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
builtinurl_plugin.so (vpp built-in URL support)
2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
cdp_plugin.so (Cisco Discovery Protocol (CDP))
2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
cnat_plugin.so (CNat Translate)
2020/12/07 01:32:07:375 notice plugin/loadLoaded plugin:
crypto_ipsecmb_plugin.so (Intel IPSEC Multi-buffer Crypto Engine)
2020/12/07 01:32:07:375 notice plugin/loadLoaded plugin:
crypto_native_plugin.so (Intel IA32 Software Crypto Engine)
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
crypto_openssl_plugin.so (OpenSSL Crypto Engine)
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
crypto_sw_scheduler_plugin.so (SW Scheduler Crypto Async Engine plugin)
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
ct6_plugin.so (IPv6 Connection Tracker)
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
det44_plugin.so (Deterministic NAT (CGN))
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
dhcp_plugin.so (Dynamic Host Configuration Protocol (DHCP))
2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
dns_plugin.so (Simple DNS name resolver)
2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
dpdk_plugin.so (Data Plane Development Kit (DPDK))
2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
dslite_plugin.so (Dual-Stack Lite)
2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
flowprobe_plugin.so (Flow per Packet)
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
gbp_plugin.so (Group Based Policy (GBP))
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
gtpu_plugin.so (GPRS Tunnelling Protocol, User Data (GTPv1-U))
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
hs_apps_plugin.so (Host Stack Applications)
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
http_static_plugin.so (HTTP Static Server)
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
igmp_plugin.so (Internet Group Management Protocol (IGMP))
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
ikev2_plugin.so (Internet Key Exchange (IKEv2) Protocol)
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
ila_plugin.so (Identifier Locator Addressing (ILA) for IPv6)
2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
ioam_plugin.so (Inbound Operations, Administration, and Maintenance (OAM))
2020/12/07 01:32:07:394 notice plugin/loadLoaded plugin:
l2e_plugin.so (Layer 2 (L2) Emulation)
2020/12/07 01:32:07:394 notice plugin/loadLoaded plugin:
l3xc_plugin.so (L3 Cross-Connect (L3XC))
2020/12/07 01:32:07:394 notice plugin/load

Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Christian Hopps


> On Dec 7, 2020, at 5:16 PM, Damjan Marion  wrote:
> 
>> On 07.12.2020., at 22:55, Christian Hopps  wrote:
> 
> I just bumped it to 2.13.10 and added REMAKE_INITRD=“yes” into debian/dkms.in 
> so that should be fixed:
> 
> $ lsinitramfs /boot/initrd.img | grep i40e
> usr/lib/modules/5.8.0-31-generic/updates/dkms/i40e.ko

Updated, and works.

> 
>> 
>>> also, I asked for your create interface config….
>> 
>> create interface avf :65:0a.0

I switched this to also include "rx-queue-size 2048" to see if it helped with 
my other problem, but it didn't.

>> 
>> As mentioned above, when I rmmod/modprobe the new driver I don't hit the lut 
>> error anymore.
>> 
>> Now I need to figure out why I see such bad performance (tons of rx 
>> discards) using these interfaces (when using 2 of them), but not when using 
>> 1 avf VF and the other interface is a 10G i520 nic (dpdk driver).
>> 
>> Bad Side:
>> 
>> $ docker-compose exec p1 vppctl show hard
>>  NameIdx   Link  Hardware
>> avf-0/65/2/0   2 up   avf-0/65/2/0
>>  Link speed: 10 Gbps
>>  Ethernet address 02:41:0d:0d:0d:0b
>>  flags: initialized admin-up vaddr-dma link-up rx-interrupts
>>  offload features: l2 adv-link-speed vlan rx-polling rss-pf
>>  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
>>  speed
>>  stats:
>>rx bytes 1520728906
>>rx unicast   184910147
>>rx discards  129083610
>>tx bytes 4226915088
>>tx unicast   184349288
>> avf-0/65/a/0   1 up   avf-0/65/a/0
>>  Link speed: 10 Gbps
>>  Ethernet address 02:41:0b:0b:0b:0b
>>  flags: initialized admin-up vaddr-dma link-up rx-interrupts
>>  offload features: l2 adv-link-speed vlan rx-polling rss-pf
>>  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
>>  speed
>>  stats:
>>rx bytes 3781507000
>>rx unicast   93533516
>>rx broadcast 3
>>rx discards  73223358
>>tx bytes 2212324998
>>tx unicast   13714424
>> ipsec0 3 up   ipsec0
>>  Link speed: unknown
>>  IPSec
>> local0 0down  local0
>>  Link speed: unknown
>>  local
>> loop0  4 up   loop0
>>  Link speed: unknown
>>  Ethernet address de:ad:00:00:00:00
>> 
>> Good Side:
>> 
>> $ docker-compose exec p2 vppctl show hard
>>  NameIdx   Link  Hardware
>> TenGigabitEthernet17/0/0   1 up   TenGigabitEthernet17/0/0
>>  Link speed: 10 Gbps
>>  Ethernet address f8:f2:1e:3c:15:ec
>>  Intel 82599
>>carrier up full duplex mtu 9206
>>flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum 
>> rx-ip4-cksum
>>Devargs:
>>rx: queues 1 (max 128), desc 1024 (min 32 max 4096 align 8)
>>tx: queues 6 (max 64), desc 1024 (min 32 max 4096 align 8)
>>pci: device 8086:10fb subsystem 8086:0003 address :17:00.00 numa 0
>>max rx packet len: 15872
>>promiscuous: unicast off all-multicast on
>>vlan offload: strip off filter off qinq off
>>rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>>   macsec-strip vlan-filter vlan-extend jumbo-frame 
>> scatter
>>   security keep-crc rss-hash
>>rx offload active: ipv4-cksum jumbo-frame scatter
>>tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>>   tcp-tso macsec-insert multi-segs security
>>tx offload active: udp-cksum tcp-cksum multi-segs
>>rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
>>   ipv6-udp ipv6-ex ipv6
>>rss active:none
>>tx burst function: ixgbe_xmit_pkts
>>rx burst function: ixgbe_recv_scattered_pkts_vec
>> 
>>tx frames ok   183386223
>>tx bytes ok 277646741622
>>rx frames ok   182834391
>>rx bytes ok 276811244995
>>extended stats:
>>  rx_good_packets  182834488
>>  tx_good_packets  183386223
>>  rx_good_bytes 276811391853
>>  tx_good_bytes 277646741622
>>  rx_q0packets 182834488
>>  rx_q0bytes276811391853
>>  tx_q0packets 183386223
>>  tx_q0bytes277646741622
>>  rx_size_65_to_127_packets   15
>>  rx_size_1024_to_max_packets  182834425
>>  rx_multicast_packets15
>>  rx_total_packets 182834439
>>  rx_total_bytes276811319181
>>  

Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Christian Hopps


> On Dec 7, 2020, at 3:02 PM, Damjan Marion  wrote:
> 
> 
> 
>> On 07.12.2020., at 20:41, Christian Hopps wrote:
>> 
>>> On Dec 7, 2020, at 1:44 PM, Damjan Marion wrote:
>>> 
>>>> On 07.12.2020., at 17:02, Christian Hopps wrote:
>>>> 
>>> 
>>> please send me output of: extras/scripts/lsnet script and exact “create 
>>> interface avf” commands you use….
>> 
>> PCI Address  MAC address   Device NameDriver StateSpeed  
>> Port Type
>>  = == ==  == 
>> 
>> :65:00.0 40:a6:b7:4b:62:08 enp101s0f0 i40e   down 1Mb/s  
>> Direct Attach Copper
>> :65:00.1 40:a6:b7:4b:62:09 enp101s0f1 i40e   down 1Mb/s  
>> Direct Attach Copper
>> :65:00.2 40:a6:b7:4b:62:0a enp101s0f2 i40e   down 1Mb/s  
>> Direct Attach Copper
>> :65:00.3 40:a6:b7:4b:62:0b enp101s0f3 i40e   down 1Mb/s  
>> Direct Attach Copper
>> :b3:00.0 00:e0:8d:7e:1f:36 enp179s0f0 ixgbe  down Unknown!   
>> Direct Attach Copper
>> :b3:00.1 00:e0:8d:7e:1f:37 enp179s0f1 ixgbe  down Unknown!   
>> Direct Attach Copper
>> :01:00.0 a0:42:3f:3c:f8:ee enp1s0f0   ixgbe  up   1Mb/s  
>> Twisted Pair
>> :01:00.1 a0:42:3f:3c:f8:ef enp1s0f1   ixgbe  down Unknown!   
>> Twisted Pair
>> :17:00.0 f8:f2:1e:3c:15:ec enp23s0f0  ixgbe  down Unknown!   
>> Direct Attach Copper
>> :17:00.1 f8:f2:1e:3c:15:ed enp23s0f1  ixgbe  down Unknown!   
>> Direct Attach Copper
>> :01:10.0 52:bf:27:59:df:50 eth0   ixgbevfdown Unknown!   
>> Other
>> :01:10.2 ee:24:0b:0c:93:3f eth1   ixgbevfdown Unknown!   
>> Other
>> :01:10.4 9e:8e:ce:da:38:f5 eth2   ixgbevfdown Unknown!   
>> Other
>> :01:10.6 2a:f4:a2:ea:4c:5d eth3   ixgbevfdown Unknown!   
>> Other
>> 
> 
> I don't see your VFs on the list. Do you have them created?
> Do you see them with “lspci | grep Ether”?

[ The VFs are not in the list because they are created as part of the automation 
that also launches vpp. I ran this before running that, because once vpp is up the 
VFs don't show up in that list anyway, as they have been rebound. :) ]

First, things work with 2.12; however, 2.12 does not load on reboot, so I must 
rmmod and modprobe after rebooting to get the 2.12 driver. I do have another 
problem though (mentioned at the end)...

> also, I asked for your create interface config….

create interface avf :65:0a.0

As mentioned above, when I rmmod/modprobe the new driver I don't hit the lut 
error anymore.

Now I need to figure out why I see such bad performance (tons of rx discards) 
using these interfaces (when using 2 of them), but not when using 1 avf VF and 
the other interface is a 10G i520 nic (dpdk driver).

Bad Side:

$ docker-compose exec p1 vppctl show hard
  NameIdx   Link  Hardware
avf-0/65/2/0   2 up   avf-0/65/2/0
  Link speed: 10 Gbps
  Ethernet address 02:41:0d:0d:0d:0b
  flags: initialized admin-up vaddr-dma link-up rx-interrupts
  offload features: l2 adv-link-speed vlan rx-polling rss-pf
  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
  speed
  stats:
rx bytes 1520728906
rx unicast   184910147
rx discards  129083610
tx bytes 4226915088
tx unicast   184349288
avf-0/65/a/0   1 up   avf-0/65/a/0
  Link speed: 10 Gbps
  Ethernet address 02:41:0b:0b:0b:0b
  flags: initialized admin-up vaddr-dma link-up rx-interrupts
  offload features: l2 adv-link-speed vlan rx-polling rss-pf
  num-queue-pairs 6 max-vectors 5 max-mtu 0 rss-key-size 52 rss-lut-size 64
  speed
  stats:
rx bytes 3781507000
rx unicast   93533516
rx broadcast 3
rx discards  73223358
tx bytes 2212324998
tx unicast   13714424
ipsec0 3 up   ipsec0
  Link speed: unknown
  IPSec
local0 0down  local0
  Link speed: unknown
  local
loop0  4 up   loop0
  Link speed: unknown
  Ethernet address de:ad:00:00:00:00

Good Side:

$ docker-compose exec p2 vppctl show hard
  NameIdx   Link  Hardware
TenGigabitEthernet17/0/0   1 up   TenGigabitEthernet17/0/0
  Link speed: 10 Gbps
  Ethernet address f8:f2:1e:3c:15:ec
  Intel 82599
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 128), desc 1024 (min 32 max 4096 align 8)
tx: queues 6 (max 64), desc 1024 (min 32 max 4096 align 8)
pci: device 8086:10fb subsystem 8086:0003 address 

[vpp-dev] Plugin communication with Host Application. #plugin #vpp-dev #control_and_data_plane_together

2020-12-07 Thread RaviKiran Veldanda
[Edited Message Follows]

Hi Team,
We developed a plugin that forwards packets to several interfaces depending on 
some filtering rules.
Now we have a requirement to create these filtering rules from a host 
application. We have a problem using VPPCOM/Binary API.
Do we have any communication method from the host app to a VPP plugin? Can you 
please provide any pointer that we can refer to and develop from?
The plugin is currently registered with VPP and working fine. Can we make the 
plugin monitor a file/event for these filtering rules coming from the host app, 
without impacting the plugin's normal fast-path processing?
Any pointers on implementing "control" and "data plane" functionality in the 
same plugin without using VPPCOM/Binary API?

Thanks for your support.
//Ravi.




[vpp-dev] Plugin communication with Host Application. #plugin #vpp-dev #control_and_data_plane_together

2020-12-07 Thread RaviKiran Veldanda
Hi Team,
We developed a plugin that forwards packets to several interfaces depending on 
some filtering rules.
Now we have a requirement to create these filtering rules from a host 
application. We have a problem using VPPCOM/Binary API.
Do we have any communication method from the host app to a VPP plugin? Can you 
please provide any pointer that we can refer to and develop from?
The plugin is currently registered with VPP and working fine. Can we make the 
plugin monitor a file/event for these filtering rules coming from the host app, 
without impacting the plugin's normal fast-path processing?
Any pointers on doing "control" and "data plane" functionality the same way 
without VPPCOM/Binary API?

Thanks for your support.
//Ravi.




[vpp-dev] Calico/vpp 0.10.0 was released today

2020-12-07 Thread Jerome Tollet via lists.fd.io
Hello,
Folks may be interested in this message just posted on Calico #vpp slack 
channel:

Just released Calico/VPP v0.10.0, included are
* Wireguard support
* MTU configuration (in VPP)
* Uplink driver autodetection
As usual, updated docs are available here 
https://github.com/projectcalico/vpp-dataplane/wiki/Getting-started
And we're working to have the policies support under the Christmas tree…





Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Damjan Marion

> On 07.12.2020., at 17:02, Christian Hopps  wrote:
> 
> 
> 
>> On Dec 7, 2020, at 9:57 AM, Damjan Marion  wrote:
>> 
>> 
>> Anything in dmesg output on PF side?
> 
> VF setup script running
> 
>> [  168.765633] i40e :65:00.2: FW LLDP is enabled
>> [  168.765635] i40e :65:00.2: Allocating 1 VFs.
>> [  168.870431] pci :65:0a.0: [8086:154c] type 00 class 0x02
>> [  168.870441] pci :65:0a.0: enabling Extended Tags
>> [  168.870609] pci :65:0a.0: Adding to iommu group 101
>> [  168.881032] iavf: Intel(R) Ethernet Adaptive Virtual Function Network 
>> Driver - version 3.2.3-k
>> [  168.881033] Copyright (c) 2013 - 2018 Intel Corporation.
>> [  168.881136] iavf :65:0a.0: enabling device ( -> 0002)
>> [  168.925644] iavf :65:0a.0: Device is still in reset (-16), retrying
>> [  168.971405] i40e :65:00.2: Setting MAC 02:41:0b:0b:0b:0b on VF 0
>> [  169.061729] i40e :65:00.2: Bring down and up the VF interface to make 
>> this change effective.
>> [  169.154400] i40e :65:00.2: VF 0 is now trusted
>> [  169.332527] i40e :65:00.0: FW LLDP is enabled
>> [  169.332528] i40e :65:00.0: Allocating 1 VFs.
>> [  169.438431] pci :65:02.0: [8086:154c] type 00 class 0x02
>> [  169.438441] pci :65:02.0: enabling Extended Tags
>> [  169.438588] pci :65:02.0: Adding to iommu group 102
>> [  169.438668] iavf :65:02.0: enabling device ( -> 0002)
>> [  169.483272] iavf :65:02.0: Device is still in reset (-16), retrying
>> [  169.539244] i40e :65:00.0: Setting MAC 02:41:0d:0d:0d:0b on VF 0
>> [  169.629728] i40e :65:00.0: Bring down and up the VF interface to make 
>> this change effective.
>> [  169.722387] i40e :65:00.0: VF 0 is now trusted
>> [  169.902916] i40e :65:00.3: FW LLDP is enabled
>> [  169.902917] i40e :65:00.3: Allocating 1 VFs.
>> [  170.010432] pci :65:0e.0: [8086:154c] type 00 class 0x02
>> [  170.010441] pci :65:0e.0: enabling Extended Tags
>> [  170.010573] pci :65:0e.0: Adding to iommu group 103
>> [  170.010641] iavf :65:0e.0: enabling device ( -> 0002)
>> [  170.055217] iavf :65:0e.0: Device is still in reset (-16), retrying
>> [  170.110941] i40e :65:00.3: Setting MAC 02:42:0c:0c:0c:0c on VF 0
>> [  170.201737] i40e :65:00.3: Bring down and up the VF interface to make 
>> this change effective.
>> [  170.294398] i40e :65:00.3: VF 0 is now trusted
> 
> VPP starts running
> 
>> [  178.922603] ixgbe :17:00.0: complete
>> [  178.928006] i40e :65:00.0 enp101s0f0: NIC Link is Down
>> [  179.186499] vfio-pci :17:00.0: enabling device (0142 -> 0143)
>> [  180.105670] i40e :65:00.0 enp101s0f0: NIC Link is Up, 10 Gbps Full 
>> Duplex, Flow Control: None
>> [  180.247071] vfio-pci :65:0a.0: enabling device ( -> 0002)
>> [  180.841280] i40e :65:00.2: VF 0 failed opcode 24, retval: -5
>> [  181.721005] vfio-pci :65:0e.0: enabling device ( -> 0002)
>> [  182.304338] i40e :65:00.3: VF 0 failed opcode 24, retval: -5

please send me output of: extras/scripts/lsnet script and exact “create 
interface avf” commands you use….


> 
>> What kernel and driver version do you use?
> 
> Host Config:
> 
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=20.10
> DISTRIB_CODENAME=groovy
> DISTRIB_DESCRIPTION="Ubuntu 20.10"
> $ uname -a
> Linux labnh 5.8.0-31-generic #33-Ubuntu SMP Mon Nov 23 18:44:54 UTC 2020 
> x86_64 x86_64 x86_64 GNU/Linux
> 
> Docker Config (compiled and run inside):
> 
> root@p1:/# cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=18.04
> DISTRIB_CODENAME=bionic
> DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
> 
>> 
>> Have you tried latest PF driver from intel?
> 
> No; however, I am running such a new ubuntu (5.8 kernel) so I was hoping that 
> was sufficient.

Agree, still, please, do me a favour and try with latest, so I know i’m looking 
at the same thing.

I maintain deb packaging for i40e DKMS here:

https://github.com/dmarion/deb-i40e

You just need to run ./build and you should get a .deb with the latest i40e driver…








Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Damjan Marion


> On 07.12.2020., at 20:41, Christian Hopps  wrote:
> 
> 
> 
>> On Dec 7, 2020, at 1:44 PM, Damjan Marion wrote:
>> 
>>> 
>>> On 07.12.2020., at 17:02, Christian Hopps wrote:
>>> 
>> 
>> please send me output of: extras/scripts/lsnet script and exact “create 
>> interface avf” commands you use….
> 
> PCI Address  MAC address   Device NameDriver StateSpeed  
> Port Type
>  = == ==  == 
> 
> :65:00.0 40:a6:b7:4b:62:08 enp101s0f0 i40e   down 1Mb/s  
> Direct Attach Copper
> :65:00.1 40:a6:b7:4b:62:09 enp101s0f1 i40e   down 1Mb/s  
> Direct Attach Copper
> :65:00.2 40:a6:b7:4b:62:0a enp101s0f2 i40e   down 1Mb/s  
> Direct Attach Copper
> :65:00.3 40:a6:b7:4b:62:0b enp101s0f3 i40e   down 1Mb/s  
> Direct Attach Copper
> :b3:00.0 00:e0:8d:7e:1f:36 enp179s0f0 ixgbe  down Unknown!   
> Direct Attach Copper
> :b3:00.1 00:e0:8d:7e:1f:37 enp179s0f1 ixgbe  down Unknown!   
> Direct Attach Copper
> :01:00.0 a0:42:3f:3c:f8:ee enp1s0f0   ixgbe  up   1Mb/s  
> Twisted Pair
> :01:00.1 a0:42:3f:3c:f8:ef enp1s0f1   ixgbe  down Unknown!   
> Twisted Pair
> :17:00.0 f8:f2:1e:3c:15:ec enp23s0f0  ixgbe  down Unknown!   
> Direct Attach Copper
> :17:00.1 f8:f2:1e:3c:15:ed enp23s0f1  ixgbe  down Unknown!   
> Direct Attach Copper
> :01:10.0 52:bf:27:59:df:50 eth0   ixgbevfdown Unknown!   
> Other
> :01:10.2 ee:24:0b:0c:93:3f eth1   ixgbevfdown Unknown!   
> Other
> :01:10.4 9e:8e:ce:da:38:f5 eth2   ixgbevfdown Unknown!   
> Other
> :01:10.6 2a:f4:a2:ea:4c:5d eth3   ixgbevfdown Unknown!   
> Other
> 

I don't see your VFs on the list. Do you have them created?
Do you see them with “lspci | grep Ether”?

also, I asked for your create interface config….

> 
>> 
>>> 
 What kernel and driver version do you use?
>>> 
>>> Host Config:
>>> 
>>> $ cat /etc/lsb-release
>>> DISTRIB_ID=Ubuntu
>>> DISTRIB_RELEASE=20.10
>>> DISTRIB_CODENAME=groovy
>>> DISTRIB_DESCRIPTION="Ubuntu 20.10"
>>> $ uname -a
>>> Linux labnh 5.8.0-31-generic #33-Ubuntu SMP Mon Nov 23 18:44:54 UTC 2020 
>>> x86_64 x86_64 x86_64 GNU/Linux
>>> 
>>> Docker Config (compiled and run inside):
>>> 
>>> root@p1:/# cat /etc/lsb-release
>>> DISTRIB_ID=Ubuntu
>>> DISTRIB_RELEASE=18.04
>>> DISTRIB_CODENAME=bionic
>>> DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
>>> 
 
 Have you tried latest PF driver from intel?
>>> 
>>> No; however, I am running such a new ubuntu (5.8 kernel) so I was hoping 
>>> that was sufficient.
>> 
>> Agree, still, please, do me a favour and try with latest, so I know i’m 
>> looking at the same thing.
> 
> I did the build and installed the deb, I also updated the firmware on the 
> NIC; however,
> 
> $ dpkg -l | grep i40
> ii  i40e-dkms   2.12.6
>   all  Intel i40e adapter driver
> 
> $ sudo ethtool -i enp101s0f0
> driver: i40e
> version: 2.8.20-k
> firmware-version: 8.15 0x80009621 1.2829.0
> expansion-rom-version:
> bus-info: :65:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: yes
> 
> It doesn't seem to have used it. Is there something else I need to do to have 
> it use the dkms driver?

rmmod i40e; modprobe i40e
or reboot…







Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Christian Hopps


> On Dec 7, 2020, at 1:44 PM, Damjan Marion  wrote:
> 
>> 
>> On 07.12.2020., at 17:02, Christian Hopps  wrote:
>> 
> 
> please send me output of: extras/scripts/lsnet script and exact “create 
> interface avf” commands you use….

PCI Address  MAC address   Device NameDriver StateSpeed  
Port Type
 = == ==  == 

:65:00.0 40:a6:b7:4b:62:08 enp101s0f0 i40e   down 1Mb/s  
Direct Attach Copper
:65:00.1 40:a6:b7:4b:62:09 enp101s0f1 i40e   down 1Mb/s  
Direct Attach Copper
:65:00.2 40:a6:b7:4b:62:0a enp101s0f2 i40e   down 1Mb/s  
Direct Attach Copper
:65:00.3 40:a6:b7:4b:62:0b enp101s0f3 i40e   down 1Mb/s  
Direct Attach Copper
:b3:00.0 00:e0:8d:7e:1f:36 enp179s0f0 ixgbe  down Unknown!   
Direct Attach Copper
:b3:00.1 00:e0:8d:7e:1f:37 enp179s0f1 ixgbe  down Unknown!   
Direct Attach Copper
:01:00.0 a0:42:3f:3c:f8:ee enp1s0f0   ixgbe  up   1Mb/s  
Twisted Pair
:01:00.1 a0:42:3f:3c:f8:ef enp1s0f1   ixgbe  down Unknown!   
Twisted Pair
:17:00.0 f8:f2:1e:3c:15:ec enp23s0f0  ixgbe  down Unknown!   
Direct Attach Copper
:17:00.1 f8:f2:1e:3c:15:ed enp23s0f1  ixgbe  down Unknown!   
Direct Attach Copper
:01:10.0 52:bf:27:59:df:50 eth0   ixgbevfdown Unknown!   
Other
:01:10.2 ee:24:0b:0c:93:3f eth1   ixgbevfdown Unknown!   
Other
:01:10.4 9e:8e:ce:da:38:f5 eth2   ixgbevfdown Unknown!   
Other
:01:10.6 2a:f4:a2:ea:4c:5d eth3   ixgbevfdown Unknown!   
Other


> 
>> 
>>> What kernel and driver version do you use?
>> 
>> Host Config:
>> 
>> $ cat /etc/lsb-release
>> DISTRIB_ID=Ubuntu
>> DISTRIB_RELEASE=20.10
>> DISTRIB_CODENAME=groovy
>> DISTRIB_DESCRIPTION="Ubuntu 20.10"
>> $ uname -a
>> Linux labnh 5.8.0-31-generic #33-Ubuntu SMP Mon Nov 23 18:44:54 UTC 2020 
>> x86_64 x86_64 x86_64 GNU/Linux
>> 
>> Docker Config (compiled and run inside):
>> 
>> root@p1:/# cat /etc/lsb-release
>> DISTRIB_ID=Ubuntu
>> DISTRIB_RELEASE=18.04
>> DISTRIB_CODENAME=bionic
>> DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"
>> 
>>> 
>>> Have you tried latest PF driver from intel?
>> 
>> No; however, I am running such a new ubuntu (5.8 kernel) so I was hoping 
>> that was sufficient.
> 
> Agree, still, please, do me a favour and try with latest, so I know i’m 
> looking at the same thing.

I did the build and installed the deb, I also updated the firmware on the NIC; 
however,

$ dpkg -l | grep i40
ii  i40e-dkms   2.12.6  
all  Intel i40e adapter driver

$ sudo ethtool -i enp101s0f0
driver: i40e
version: 2.8.20-k
firmware-version: 8.15 0x80009621 1.2829.0
expansion-rom-version:
bus-info: :65:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

It doesn't seem to have used it. Is there something else I need to do to have 
it use the dkms driver?

Thanks,
Chris.





Re: [vpp-dev] vppcom and binary api #binapi #vapi #vppcom

2020-12-07 Thread Florin Coras
Hi Venu, 

You’ll find documentation regarding the host stack in general and vcl in 
particular here [1] (see for instance [2]). As for code examples, check the vcl 
test client/server apps here [3]. 

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack 

[2] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf 

[3] https://git.fd.io/vpp/tree/src/plugins/hs_apps/vcl 
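
If it helps to get started, below is a minimal vcl.conf sketch; the fifo sizes 
and the socket path are illustrative only, and assume vpp's api socket has been 
enabled with a matching socksvr { socket-name ... } stanza in startup.conf:

  vcl {
    rx-fifo-size 4000000
    tx-fifo-size 4000000
    app-scope-local
    app-scope-global
    api-socket-name /run/vpp/api.sock
  }

An existing POSIX application can then typically be run unmodified with something 
like VCL_CONFIG=/path/to/vcl.conf and LD_PRELOAD pointing at libvcl_ldpreload.so, 
or the app can be linked directly against libvppcom, as the vcl test apps in [3] do.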



> On Dec 7, 2020, at 11:24 AM, Venumadhav Josyula  wrote:
> 
> Hi Florin,
> 
> This is nothing to do with the subject being discussed. 
> 
> Can you please direct me to the following
> i) example(s) of VCL ?
> ii) Any tutorial explaining the working of VCL ?
> 
> Thanks & Regards,
> Venu
> 
> On Fri, 4 Dec 2020 at 01:28, Florin Coras wrote:
> Hi Ravi, 
> 
> VCL is not part of the vpp app/process, it’s a library that applications can 
> link against to be able to interact with the session layer in a more 
> posix-like manner. So if your app needs a binary api connection to vpp, it 
> needs to set it up independent of vcl. 
> 
> With regard to the “vpe-api” region, it’s worth noting that the binary api 
> can work over two “transports”. Namely:
>
> 1) posix shared memory, and this is the one that relies on the vpe-api region 
> for bootstrapping and 
> 2) an af_unix socket which is configurable in startup.conf, i.e., by doing 
> something like socksvr { socket-name  }.
> 
> This is somewhat further complicated by the fact that a binary api connection 
> bootstrapped over a socket (option 2) can be switched to a shared memory 
> transport. Although similar to option1, in this case, the memory is allocated 
> and negotiated per binary api client using memfds and the socket. This is 
> what VCL uses underneath if configured to use the binary api with the socket 
> transport. 
> 
> Previously, VCL could also be configured to use option 1, but support for 
> this has been recently dropped.
> 
> Regards,
> Florin
> 
>> On Dec 3, 2020, at 7:22 AM, RaviKiran Veldanda wrote:
>> 
>> Hi Florin,
>> Thanks for your response; however, I have a question. If I want to use 
>> another binary API region, does the initialization need to be done in our 
>> application or in the VPP code?
>> When I check the VPP source code, the initialization is always with "vpe-api": 
>> vpe_api_init always calls vl_set_memory_region_name ("/vpe-api"); so I'm 
>> wondering, is there any other region present?
>> 
>> FYI: We are using 20.05 stable version.
>> Regards,
>> Ravi.
>> 
>> 
>> 
> 
> 
> 





Re: [vpp-dev] vppcom and binary api #binapi #vapi #vppcom

2020-12-07 Thread Venumadhav Josyula
Hi Florin,

This is nothing to do with the subject being discussed.

Can you please direct me to the following
i) example(s) of VCL ?
ii) Any tutorial explaining the working of VCL ?

Thanks & Regards,
Venu

On Fri, 4 Dec 2020 at 01:28, Florin Coras  wrote:

> Hi Ravi,
>
> VCL is not part of the vpp app/process, it’s a library that applications
> can link against to be able to interact with the session layer in a more
> posix-like manner. So if your app needs a binary api connection to vpp, it
> needs to set it up independent of vcl.
>
> With regard to the “vpe-api” region, it’s worth noting that the binary
> api can work over two “transports”. Namely:
>
> 1) posix shared memory, and this is the one that relies on the vpe-api
> region for bootstrapping and
> 2) an af_unix socket which is configurable in startup.conf, i.e., by doing
> something like socksvr { socket-name  }.
>
> This is somewhat further complicated by the fact that a binary api
> connection bootstrapped over a socket (option 2) can be switched to a
> shared memory transport. Although similar to option1, in this case, the
> memory is allocated and negotiated per binary api client using memfds and
> the socket. This is what VCL uses underneath if configured to use the
> binary api with the socket transport.
>
> Previously, VCL could also be configured to use option 1, but support for
> this has been recently dropped.
>
> Regards,
> Florin
>
> On Dec 3, 2020, at 7:22 AM, RaviKiran Veldanda 
> wrote:
>
> Hi Florin,
> Thanks for your response; however, I have a question. If I want to use
> another binary API region, does the initialization need to be done in our
> application or in the VPP code?
> When I check the VPP source code, the initialization is always with "vpe-api":
> vpe_api_init always calls vl_set_memory_region_name ("/vpe-api"); so I'm
> wondering, is there any other region present?
>
> FYI: We are using 20.05 stable version.
> Regards,
> Ravi.
>
>
>
>
>
> 
>
>




Re: [vpp-dev] Jenkins.fd.io response time is very slow

2020-12-07 Thread Dave Wallace
Vanessa put Jenkins into shutdown mode and restarted Jenkins to resolve 
the issue.


Please let me know if you are seeing any issues with FD.io CI jobs.

Thanks,
-daw-

On 12/7/2020 11:41 AM, Dave Wallace via lists.fd.io wrote:

Folks,

I have opened a case [0] with the LF Help Desk to address the slow 
response of updates to jenkins.fd.io requests which are taking almost 
a minute to get results.


Thanks,
-daw-
[0] 
https://jira.linuxfoundation.org/plugins/servlet/theme/portal/2/IT-21159










Re: [vpp-dev] vppcom and binary api #binapi #vapi #vppcom

2020-12-07 Thread RaviKiran Veldanda
Hi Florin/Team of experts,
I have one more question. The basic requirement we have is that
we have to send some data from our application to our own plugin.
While going through previous messages and documentation, we found we can use 
memif or quick sockets.
Our question is: do we have any other VPP-provided methods of communication 
between a plugin and an application? If yes, can you please provide a pointer?
That may resolve our issue.
//Ravi.




[vpp-dev] Jenkins.fd.io response time is very slow

2020-12-07 Thread Dave Wallace

Folks,

I have opened a case [0] with the LF Help Desk to address the slow 
response of updates to jenkins.fd.io requests which are taking almost a 
minute to get results.


Thanks,
-daw-
[0] https://jira.linuxfoundation.org/plugins/servlet/theme/portal/2/IT-21159




[vpp-dev] devicetest verify failures

2020-12-07 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io

When the new CSIT oper branch got created today,
VPP verify jobs started failing, even though each test passed.
That should be fixed [0] now [1].

We are working on verify job improvements on CSIT side
to prevent similar issues in the future.

Vratko.

[0] https://gerrit.fd.io/r/c/csit/+/30329
[1] https://gerrit.fd.io/r/c/csit/+/30299




Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Christian Hopps


> On Dec 7, 2020, at 9:57 AM, Damjan Marion  wrote:
> 
> 
> Anything in dmesg output on PF side?

VF setup script running

> [  168.765633] i40e :65:00.2: FW LLDP is enabled
> [  168.765635] i40e :65:00.2: Allocating 1 VFs.
> [  168.870431] pci :65:0a.0: [8086:154c] type 00 class 0x02
> [  168.870441] pci :65:0a.0: enabling Extended Tags
> [  168.870609] pci :65:0a.0: Adding to iommu group 101
> [  168.881032] iavf: Intel(R) Ethernet Adaptive Virtual Function Network 
> Driver - version 3.2.3-k
> [  168.881033] Copyright (c) 2013 - 2018 Intel Corporation.
> [  168.881136] iavf :65:0a.0: enabling device ( -> 0002)
> [  168.925644] iavf :65:0a.0: Device is still in reset (-16), retrying
> [  168.971405] i40e :65:00.2: Setting MAC 02:41:0b:0b:0b:0b on VF 0
> [  169.061729] i40e :65:00.2: Bring down and up the VF interface to make 
> this change effective.
> [  169.154400] i40e :65:00.2: VF 0 is now trusted
> [  169.332527] i40e :65:00.0: FW LLDP is enabled
> [  169.332528] i40e :65:00.0: Allocating 1 VFs.
> [  169.438431] pci :65:02.0: [8086:154c] type 00 class 0x02
> [  169.438441] pci :65:02.0: enabling Extended Tags
> [  169.438588] pci :65:02.0: Adding to iommu group 102
> [  169.438668] iavf :65:02.0: enabling device ( -> 0002)
> [  169.483272] iavf :65:02.0: Device is still in reset (-16), retrying
> [  169.539244] i40e :65:00.0: Setting MAC 02:41:0d:0d:0d:0b on VF 0
> [  169.629728] i40e :65:00.0: Bring down and up the VF interface to make 
> this change effective.
> [  169.722387] i40e :65:00.0: VF 0 is now trusted
> [  169.902916] i40e :65:00.3: FW LLDP is enabled
> [  169.902917] i40e :65:00.3: Allocating 1 VFs.
> [  170.010432] pci :65:0e.0: [8086:154c] type 00 class 0x02
> [  170.010441] pci :65:0e.0: enabling Extended Tags
> [  170.010573] pci :65:0e.0: Adding to iommu group 103
> [  170.010641] iavf :65:0e.0: enabling device ( -> 0002)
> [  170.055217] iavf :65:0e.0: Device is still in reset (-16), retrying
> [  170.110941] i40e :65:00.3: Setting MAC 02:42:0c:0c:0c:0c on VF 0
> [  170.201737] i40e :65:00.3: Bring down and up the VF interface to make 
> this change effective.
> [  170.294398] i40e :65:00.3: VF 0 is now trusted

VPP starts running

> [  178.922603] ixgbe :17:00.0: complete
> [  178.928006] i40e :65:00.0 enp101s0f0: NIC Link is Down
> [  179.186499] vfio-pci :17:00.0: enabling device (0142 -> 0143)
> [  180.105670] i40e :65:00.0 enp101s0f0: NIC Link is Up, 10 Gbps Full 
> Duplex, Flow Control: None
> [  180.247071] vfio-pci :65:0a.0: enabling device ( -> 0002)
> [  180.841280] i40e :65:00.2: VF 0 failed opcode 24, retval: -5
> [  181.721005] vfio-pci :65:0e.0: enabling device ( -> 0002)
> [  182.304338] i40e :65:00.3: VF 0 failed opcode 24, retval: -5

> What kernel and driver version do you use?

Host Config:

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.10
DISTRIB_CODENAME=groovy
DISTRIB_DESCRIPTION="Ubuntu 20.10"
$ uname -a
Linux labnh 5.8.0-31-generic #33-Ubuntu SMP Mon Nov 23 18:44:54 UTC 2020 x86_64 
x86_64 x86_64 GNU/Linux

Docker Config (compiled and run inside):

root@p1:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"

> 
> Have you tried latest PF driver from intel?

No; however, I am running such a new ubuntu (5.8 kernel) so I was hoping that 
was sufficient.

Thanks,
Chris.

> 
> —
> Damjan
> 
>> On 07.12.2020., at 15:42, Christian Hopps  wrote:
>> 
>> I'm hitting a problem with the native AVF driver. The issue does not seem to 
>> exist when using the DPDK driver. This is on stable/2009, with the DPDK bump 
>> to 2008 reverted (b/c 2008 doesn't work with mellanox anymore apparently).
>> 
>> I have an x710 card configured with a 4x10G breakout cable. The code that is 
>> failing is
>> 
>> if ((ad->feature_bitmap & VIRTCHNL_VF_OFFLOAD_RSS_PF) &&
>> (error = avf_op_config_rss_lut (vm, ad)))
>>   return error;
>> 
>> Here's the debug log:
>> 
>> 2020/12/07 14:19:25:639 debug  avf:65:0a.0: 
>> request_queues: num_queue_pairs 6
>> 2020/12/07 14:19:25:779 debug  avf:65:0a.0: version: 
>> major 1 minor 1
>> 2020/12/07 14:19:25:779 debug  avf:65:0a.0: 
>> get_vf_reqources: bitmap 0xb00a1
>> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
>> get_vf_reqources: num_vsis 1 num_queue_pairs 6 max_vectors 5 max_mtu 0 
>> vf_offload_flags 0xb000 rss_key_size 52 rss_lut_size 64
>> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
>> get_vf_reqources_vsi[0]: vsi_id 18 num_queue_pairs 6 vsi_type 6 qset_handle 
>> 6 default_mac_addr 02:41:0b:0b:0b:0b
>> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
>> disable_vlan_stripping
>> 2020/12/07 14:19:25:800 debug  avf:65:0a.0: 
>> 

Re: [vpp-dev] Problem with native avf

2020-12-07 Thread Damjan Marion

Anything in dmesg output on PF side? 

What kernel and driver version do you use?

Have you tried latest PF driver from intel?

— 
Damjan

> On 07.12.2020., at 15:42, Christian Hopps  wrote:
> 
> I'm hitting a problem with the native AVF driver. The issue does not seem to 
> exist when using the DPDK driver. This is on stable/2009, with the DPDK bump 
> to 2008 reverted (b/c 2008 doesn't work with mellanox anymore apparently).
> 
> I have an x710 card configured with a 4x10G breakout cable. The code that is 
> failing is
> 
>  if ((ad->feature_bitmap & VIRTCHNL_VF_OFFLOAD_RSS_PF) &&
>  (error = avf_op_config_rss_lut (vm, ad)))
>return error;
> 
> Here's the debug log:
> 
> 2020/12/07 14:19:25:639 debug  avf:65:0a.0: 
> request_queues: num_queue_pairs 6
> 2020/12/07 14:19:25:779 debug  avf:65:0a.0: version: 
> major 1 minor 1
> 2020/12/07 14:19:25:779 debug  avf:65:0a.0: 
> get_vf_reqources: bitmap 0xb00a1
> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
> get_vf_reqources: num_vsis 1 num_queue_pairs 6 max_vectors 5 max_mtu 0 
> vf_offload_flags 0xb000 rss_key_size 52 rss_lut_size 64
> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
> get_vf_reqources_vsi[0]: vsi_id 18 num_queue_pairs 6 vsi_type 6 qset_handle 6 
> default_mac_addr 02:41:0b:0b:0b:0b
> 2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
> disable_vlan_stripping
> 2020/12/07 14:19:25:800 debug  avf:65:0a.0: 
> config_rss_lut: vsi_id 18 rss_lut_size 64 lut 
> 0x
> 2020/12/07 14:19:25:906 erravf1313:13:13.0: error: 
> avf_send_to_pf: error [v_opcode = 24, v_retval -5]
> 
> If I comment out the 2 calls to VIRTCHNL_VF_OFFLOAD_RSS_PF, the device 
> initializes.
> 
> Is there something about the breakout cable configuration that needs special 
> attention from this code?
> 
> Thanks,
> Chris.
> 
> 
> 





[vpp-dev] Problem with native avf

2020-12-07 Thread Christian Hopps
I'm hitting a problem with the native AVF driver. The issue does not seem to 
exist when using the DPDK driver. This is on stable/2009, with the DPDK bump to 
2008 reverted (b/c 2008 doesn't work with mellanox anymore apparently).

I have an x710 card configured with a 4x10G breakout cable. The code that is 
failing is

  if ((ad->feature_bitmap & VIRTCHNL_VF_OFFLOAD_RSS_PF) &&
  (error = avf_op_config_rss_lut (vm, ad)))
return error;

Here's the debug log:

2020/12/07 14:19:25:639 debug  avf:65:0a.0: request_queues: 
num_queue_pairs 6
2020/12/07 14:19:25:779 debug  avf:65:0a.0: version: major 
1 minor 1
2020/12/07 14:19:25:779 debug  avf:65:0a.0: 
get_vf_reqources: bitmap 0xb00a1
2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
get_vf_reqources: num_vsis 1 num_queue_pairs 6 max_vectors 5 max_mtu 0 
vf_offload_flags 0xb000 rss_key_size 52 rss_lut_size 64
2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
get_vf_reqources_vsi[0]: vsi_id 18 num_queue_pairs 6 vsi_type 6 qset_handle 6 
default_mac_addr 02:41:0b:0b:0b:0b
2020/12/07 14:19:25:789 debug  avf:65:0a.0: 
disable_vlan_stripping
2020/12/07 14:19:25:800 debug  avf:65:0a.0: config_rss_lut: 
vsi_id 18 rss_lut_size 64 lut 
0x
2020/12/07 14:19:25:906 erravf1313:13:13.0: error: 
avf_send_to_pf: error [v_opcode = 24, v_retval -5]

If I comment out the 2 calls to VIRTCHNL_VF_OFFLOAD_RSS_PF, the device 
initializes.

Is there something about the breakout cable configuration that needs special 
attention from this code?

Thanks,
Chris.





Re: [vpp-dev] why not VPP wireguard add peer: input error ?

2020-12-07 Thread Benoit Ganne (bganne) via lists.fd.io
Hi,

The doc is actually wrong, you should use 'port' instead of 'dst-port':
vpp# wireguard peer add wg0 public-key 
3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg= endpoint 172.20.20.2 allowed-ip 
10.100.0.0/24 port 8001 persistent-keepalive 25

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of li.xia
> Sent: Monday, 7 December 2020 11:09
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] why not VPP wireguard add peer: input error ?
> 
> Hi, vpp-dev
> 
> 
> I ran a test of VPP's wireguard VPN with version 20.09, and I found the error below:
> "wireguard peer add: Input error"
> 
> my topological graph below
> 
> 
> 1. Ping from srv1 (or srv2) to the peer is OK, via 172.20.20.x in vppctl.
> 2. The VPP srv1 wireguard create command is like below:
> 
>   vpp# wireguard create listen-port 8001 private-key
> MA4By9Ymc38GoI1zaHq+Ruv7qN58ngAPVOKD5YiTMVo= src 172.20.20.1
>   vpp# show wireguard interface
>   [0] wg0 src:172.20.20.1 port:8001 private-
> key:MA4By9Ymc38GoI1zaHq+Ruv7qN58ngAPVOKD5YiTMVo=
> 300e01cbd626737f06a08d73687abe46ebfba8de7c9e000f54e283e58893315a public-
> key:3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg=
> df17a1f2208ff274934f41ca18f8d039bfe94172149dc0bc4def28c825387d58 mac-key:
> 075349f1b6956d834555c5384bbdbcdbaf6d2aba8f0e8aea249f9b84e1b15d20
>   vpp# wireguard peer add wg0 public-key
> 3xeh8iCP8nSTT0HKGPjQOb/pQXIUncC8Te8oyCU4fVg= endpoint 172.20.20.2 allowed-
> ip 10.100.0.0/24 dst-port 8001  persistent-keepalive 25
>   wireguard peer add: Input error
> 
> 
> 
> 3. On VPP srv2 the same error happened.
> 4. Both VPP instances run on Ubuntu 18.04 in VMware 15.
> 
> 
> Is there something wrong with my configuration?
> 
> 
> Additional VPP log:
> 
> 
> vpp# sh log
> 2020/12/07 01:32:07:366 notice plugin/loadLoaded plugin:
> abf_plugin.so (Access Control List (ACL) Based Forwarding)
> 2020/12/07 01:32:07:367 notice plugin/loadLoaded plugin:
> acl_plugin.so (Access Control Lists (ACL))
> 2020/12/07 01:32:07:367 notice plugin/loadLoaded plugin:
> adl_plugin.so (Allow/deny list plugin)
> 2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
> avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device Driver)
> 2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
> builtinurl_plugin.so (vpp built-in URL support)
> 2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
> cdp_plugin.so (Cisco Discovery Protocol (CDP))
> 2020/12/07 01:32:07:368 notice plugin/loadLoaded plugin:
> cnat_plugin.so (CNat Translate)
> 2020/12/07 01:32:07:375 notice plugin/loadLoaded plugin:
> crypto_ipsecmb_plugin.so (Intel IPSEC Multi-buffer Crypto Engine)
> 2020/12/07 01:32:07:375 notice plugin/loadLoaded plugin:
> crypto_native_plugin.so (Intel IA32 Software Crypto Engine)
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> crypto_openssl_plugin.so (OpenSSL Crypto Engine)
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> crypto_sw_scheduler_plugin.so (SW Scheduler Crypto Async Engine plugin)
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> ct6_plugin.so (IPv6 Connection Tracker)
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> det44_plugin.so (Deterministic NAT (CGN))
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> dhcp_plugin.so (Dynamic Host Configuration Protocol (DHCP))
> 2020/12/07 01:32:07:376 notice plugin/loadLoaded plugin:
> dns_plugin.so (Simple DNS name resolver)
> 2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
> dpdk_plugin.so (Data Plane Development Kit (DPDK))
> 2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
> dslite_plugin.so (Dual-Stack Lite)
> 2020/12/07 01:32:07:392 notice plugin/loadLoaded plugin:
> flowprobe_plugin.so (Flow per Packet)
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> gbp_plugin.so (Group Based Policy (GBP))
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> gtpu_plugin.so (GPRS Tunnelling Protocol, User Data (GTPv1-U))
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> hs_apps_plugin.so (Host Stack Applications)
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> http_static_plugin.so (HTTP Static Server)
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> igmp_plugin.so (Internet Group Management Protocol (IGMP))
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> ikev2_plugin.so (Internet Key Exchange (IKEv2) Protocol)
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> ila_plugin.so (Identifier Locator Addressing (ILA) for IPv6)
> 2020/12/07 01:32:07:393 notice plugin/loadLoaded plugin:
> ioam_plugin.so (Inbound Operations, Administration, and Maintenance (OAM))
> 2020/12/07 01:32:07:394 notice plugin/loadLoaded plugin:
> l2e_plugin.so (Layer 2 (L2) Emulation)
> 2020/12/07 01:32:07:394 notice 

Re: [vpp-dev] SVE/SVE2 based vectorization optimization

2020-12-07 Thread Lijian Zhang
Hi Damjan,
Sorry for being late. It took me some time to investigate writing SVE/SVE2 code 
for fixed vector register size.

I can totally understand your concerns about the separate code path caused by 
deploying scalable vector code, which is quite different from the existing SIMD 
code using NEON, AVX2, and AVX512.

The patch https://gerrit.fd.io/r/c/vpp/+/29943/2 totally rewrote the 
ethernet-input node, which makes the code seem hard to maintain and may be the 
cause of your concern.

Maybe I should limit the usage of SVE/SVE2 to small-scale code segments. Could 
you take a look at patches https://gerrit.fd.io/r/c/vpp/+/29942/2 and 
https://gerrit.fd.io/r/c/vpp/+/30326? Both deploy SVE in the function 
is_dmac_bad_x4().
The former uses scalable types and so works for any SVE vector register size, 
while the latter is written for the SVE 256-bit register size only.

Scalable coding works for all possible SVE vector register sizes, whereas in 
the fixed coding style we have to provide the code separately for each possible 
SVE register size.
Another benefit of scalable coding is that no tail loop is required, 
which saves CPU cycles.
Coding for a fixed SVE vector register size loses these two benefits.
Please let us know your decision/suggestion.
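
To make the difference concrete, below is a minimal, illustrative scalable-SVE 
loop (not taken from any of the patches above; the function name is made up). 
The predicate produced by svwhilelt masks off the tail, so no scalar tail loop 
is needed, and the same binary runs on any SVE vector length:

  #include <arm_sve.h>
  #include <stdint.h>

  /* Add two u32 arrays using a scalable, predicated loop. */
  void
  add_u32_sve (uint32_t * dst, const uint32_t * a, const uint32_t * b, int64_t n)
  {
    for (int64_t i = 0; i < n; i += svcntw ())
      {
        svbool_t pg = svwhilelt_b32_s64 (i, n);  /* lanes with i + lane < n stay active */
        svuint32_t va = svld1_u32 (pg, a + i);
        svuint32_t vb = svld1_u32 (pg, b + i);
        svst1_u32 (pg, dst + i, svadd_u32_x (pg, va, vb));
      }
  }

A fixed-size variant (for example 256-bit only) would instead hard-code eight 
u32 lanes per iteration and need a separate scalar tail loop, which is exactly 
the trade-off described above.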

People who have no access to SVE/SVE2 hardware can use software emulation, set 
up with the steps below.
[1] Install Arm QEMU/Docker on x86 servers to verify SVE/SVE2 code:
sudo apt-get install qemu binfmt-support qemu-user-static   # Install the qemu packages
sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes   # Execute the binfmt registering scripts
sudo docker run --rm -t arm64v8/ubuntu uname -m   # Test the emulation environment; should print aarch64
gcc-10 -march=armv8.3-a+crc+crypto+sve2 main.c   # Compile SVE2 code inside the emulated environment
Thanks.
From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
lists.fd.io
Sent: 17 November 2020 20:15
To: Lijian Zhang 
Cc: nd ; Nitin Saxena ; Govindarajan 
Mohandoss ; Honnappa Nagarahalli 
; Jieqiang Wang ; vpp-dev 

Subject: Re: [vpp-dev] SVE/SVE2 based vectorization optimization


Hi Lijian,

I looked at your patches and I’m quite concerned about this approach, as you 
basically wrote a completely different code path for the feature.
I don't see how we can maintain such code easily, especially because today we 
don't have ARM hardware which can run that code.
If we merge that code two things can happen:
a) without testing - that code will fall out of sync quickly
b) with testing - people will not be able to modify existing code without 
also updating the SVE code, and that may be a problem if they don't have access 
to the hardware

The majority of the code we have is always dealing with a fixed vector size.
Vector size is mainly a human decision in VPP code and it takes into account many 
factors including the size and locality of the data.
So it makes more sense to me that we provide SVE-based VEC256 and VEC512 
functions which will make existing code just work on arm, instead of trying 
to implement and maintain separate code paths…

—
Damjan



> On 16.11.2020., at 06:10, Lijian Zhang <lijian.zh...@arm.com> wrote:
>
> Hi Damjan,
> I applied SVE based vectorization in ethernet-input node functions.
> Could you please take time to review below patches?
>
> The patches are committed as the proposal for your comments.
> I have verified the functionality of the code on software emulation platform, 
> and will do performance benchmarking when CPUs with SVE feature are available.
>
> https://gerrit.fd.io/r/c/vpp/+/29939 vppinfra: apply SVE/SVE2 based 
> vectorization [NEW]
> https://gerrit.fd.io/r/c/vpp/+/29940 ethernet: determine next[] node using 
> SVE [NEW]
> https://gerrit.fd.io/r/c/vpp/+/29941 ethernet: secondary DMAC check using SVE 
> [NEW]
> https://gerrit.fd.io/r/c/vpp/+/29942 ethernet: DMAC check using SVE [NEW]
> https://gerrit.fd.io/r/c/vpp/+/29943 ethernet: DMAC/ethertype parse using SVE 
> [NEW]
> https://gerrit.fd.io/r/c/vpp/+/29944 vlib: SVE based vlib_buffer operations 
> [NEW]
>
> Thanks.
>
>> -Original Message-
>> From: Damjan Marion <dmar...@me.com>
>> Sent: 22 October 2020 20:33
>> To: Lijian Zhang <lijian.zh...@arm.com>
>> Cc: nd <n...@arm.com>; Nitin Saxena <nsax...@marvell.com>; Govindarajan
>> Mohandoss <govindarajan.mohand...@arm.com>; Honnappa Nagarahalli
>> <honnappa.nagaraha...@arm.com>; Jieqiang Wang
>> <jieqiang.w...@arm.com>; vpp-dev <vpp-dev@lists.fd.io>
>> Subject: Re: SVE/SVE2 based vectorization optimization
>>
>>
>> Dear Lijian,
>>
>> You took very uncommon example of vector usage in the VPP codebase.
>> Common usage is big packet processing loop which is dealing with 2, 4 or 8
>> packets in one iteration.
>>
>> I.e. How we will leverage use of SVE in src/vnet/ethernet/node.c ?
>>
>> Thanks,
>>
>> —
>> Damjan
>>
>>
>>
>>> On 22.10.2020., at 14:08, Lijian Zhang <lijian.zh...@arm.com> wrote:
>>>
>>> Hi Damjan,
>>> I