Re: [vpp-dev] LACP issues w/ rdma/ConnectX-6

2022-12-05 Thread Eyle Brinkhuis
Hi, thanks for your reply.

That's the weird thing: we have two identical hosts connected to the same 
switch (SN2700), with the same OS, same VPP version, same mlnx_ofed, and same 
everything except for the NIC. The box with a CX5 works like a charm; the box 
with the CX6 doesn't.. yet it does work when I create the bond in netplan.
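
For completeness, the kind of checks behind "same everything" (a sketch, not the literal commands we ran; the interface name is the one from the CX6 box):

ethtool -i ens3f0                      # driver and firmware version
modinfo mlx5_core | grep -i version    # mlnx_ofed / mlx5_core module version
vppctl show version
vppctl show pci                        # which driver each port is bound to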

Regards,

Eyle

> On 5 Dec 2022, at 15:07, najieb  wrote:
> 
> I once experienced LACP not coming up because the mode on VPP (lacp-static) did not 
> match the mode on the switch (lacp-dynamic). Try changing the LACP mode on your 
> switch. 
> 
> 





[vpp-dev] LACP issues w/ rdma/ConnectX-6

2022-12-05 Thread Eyle Brinkhuis
Hi Ben,

We have a few new boxes fitted with a ConnectX-6 (Mellanox ConnectX-6 Dx 
100GbE QSFP56 2-port PCIe 4 Ethernet Adapter), and we run into an issue with 
LACP (seems a popular topic these days.. :-)). We are not able to get LACP up 
when running this:


create int rdma host-if ens3f0 name rdma0
create int rdma host-if ens3f1 name rdma1
set interface state rdma1 up
set interface state rdma0 up
create bond mode lacp
bond add BondEthernet0 rdma0
bond add BondEthernet0 rdma1
set int state BondEthernet0 up


On the switch side (Mellanox SN2700) everything is the same as with our 
Mellanox CX5 NICs (also on RDMA, with the same LACP configuration as above). 
The CX5 setup all works a charm.
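
For reference, a trace like the one below can be captured along these lines (a minimal sketch via vppctl; the packet count is arbitrary):

vppctl clear trace
vppctl trace add rdma-input 50    # rdma-input is the input node seen in the trace
# ...wait for a few LACPDUs to arrive, then:
vppctl show trace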


For the CX6, we receive the LACP PDUs via RDMA, and they seem to be processed:


Packet 1


00:00:52:494905: rdma-input

  rdma: rdma0 (1) next-node bond-input

00:00:52:494920: bond-input

  src b8:59:9f:67:fa:ba, dst 01:80:c2:00:00:02, rdma0 -> rdma0

00:00:52:494926: ethernet-input

  SLOW_PROTOCOLS: b8:59:9f:67:fa:ba -> 01:80:c2:00:00:02

00:00:52:494930: lacp-input

  rdma0:

Length: 110

  LACPv1

  Actor Information TLV: length 20

System b8:59:9f:67:fa:80

System priority 32768

Key 13834

Port priority 32768

Port number 25

State 0x45

  LACP_STATE_LACP_ACTIVITY (0)

  LACP_STATE_AGGREGATION (2)

  LACP_STATE_DEFAULTED (6)

  Partner Information TLV: length 20

System 00:00:00:00:00:00

System priority 0

Key 0

Port priority 0

Port number 0

State 0x7c

  LACP_STATE_AGGREGATION (2)

  LACP_STATE_SYNCHRONIZATION (3)

  LACP_STATE_COLLECTIING (4)

  LACP_STATE_DISTRIBUTING (5)

  LACP_STATE_DEFAULTED (6)

  0x:  0101 0114 8000 b859 9f67 fa80 360a 8000

  0x0010:  0019 4500  0214    

  0x0020:     7c00  0310  

  0x0030:         

  0x0040:         

  0x0050:         

  0x0060:        

00:00:52:494936: error-drop

  rx:rdma0

00:00:52:494937: drop

  lacp-input: good lacp packets — consumed


vpp# sh lacp
                                                          actor state                      partner state
interface name    sw_if_index  bond interface    exp/def/dis/col/syn/agg/tim/act  exp/def/dis/col/syn/agg/tim/act
rdma0             1            BondEthernet0       0   1   0   0   1   1   1   1    0   0   0   0   0   0   0   0
  LAG ID: [(,02-fe-c6-4c-c3-62,0003,00ff,0001), (,00-00-00-00-00-00,0003,00ff,0001)]
  RX-state: DEFAULTED, TX-state: TRANSMIT, MUX-state: ATTACHED, PTX-state: PERIODIC_TX
rdma1             2            BondEthernet0       0   1   0   0   0   1   1   1    0   0   0   0   0   0   0   0
  LAG ID: [(,02-fe-c6-4c-c3-62,0003,00ff,0002), (,00-00-00-00-00-00,0003,00ff,0002)]
  RX-state: DEFAULTED, TX-state: TRANSMIT, MUX-state: DETACHED, PTX-state: PERIODIC_TX


vpp# sh bond details

BondEthernet0

  mode: lacp

  load balance: l2

  number of active members: 0

  number of members: 2

rdma0

rdma1

  device instance: 0

  interface id: 0

  sw_if_index: 3

  hw_if_index: 3



vpp# sh log

2022/12/05 12:43:14:840 notice plugin/load  Loaded plugin: abf_plugin.so (Access Control List (ACL) Based Forwarding)
2022/12/05 12:43:14:841 notice plugin/load  Loaded plugin: acl_plugin.so (Access Control Lists (ACL))
2022/12/05 12:43:14:842 notice plugin/load  Loaded plugin: adl_plugin.so (Allow/deny list plugin)
2022/12/05 12:43:14:842 notice plugin/load  Loaded plugin: af_xdp_plugin.so (AF_XDP Device Plugin)
2022/12/05 12:43:14:842 notice plugin/load  Loaded plugin: arping_plugin.so (Arping (arping))
2022/12/05 12:43:14:843 notice plugin/load  Loaded plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device Driver)
2022/12/05 12:43:14:843 notice plugin/load  Loaded plugin: builtinurl_plugin.so (vpp built-in URL support)
2022/12/05 12:43:14:843 notice plugin/load  Loaded plugin: cdp_plugin.so (Cisco Discovery Protocol (CDP))
2022/12/05 12:43:14:844 notice plugin/load  Loaded plugin: cnat_plugin.so (CNat Translate)
2022/12/05 12:43:14:860 notice plugin/load  Loaded plugin: crypto_ipsecmb_plugin.so (Intel IPSEC Multi-buffer Crypto Engine)
2022/12/05 12:43:14:860 notice plugin/load  Loaded plugin: crypto_native_plugin.so (Intel IA32 Software Crypto Engine)
2022/12/05 12:43:14:860 notice plugin/load  Loaded plugin: crypto_openssl_plugin.so (OpenSSL Crypto Engine)
2022/12/05 12:43:14:861 notice plugin/load  Loaded plugin: crypto_sw_scheduler_plugin.so (SW Scheduler Crypto Async Engine plugin)
2022/12/05 12:43:14:861 notice plugin/load  Loaded plugin: ct6_plugin.so (IPv6 Connection Tracker)
2022/12/05 12:43:14:861 notice plugin/load  Loaded plugin: det44_plugin.so 

Re: [vpp-dev] Feedback on a tool: vppcfg

2022-04-04 Thread Eyle Brinkhuis
Hoi Pim,

This solves a challenge we are having in our infrastructure. Will give this a 
spin shortly!

—
Eyle

> On 3 Apr 2022, at 18:26, Pim van Pelt  wrote:
> 
> Hoi,
> 
> That's so much more enthusiasm than I had anticipated, thank you very much 
> for your kind and encouraging words! I've spent some time writing a starting 
> set of documentation.
> User Guide: https://github.com/pimvanpelt/vppcfg/blob/main/docs/user-guide.md 
> 
> Config Guide: 
> https://github.com/pimvanpelt/vppcfg/blob/main/docs/config-guide.md 
> 
> 
> The config guide mentions a few caveats, nothing major, but just a few 
> considerations in case we do want to elevate this and use it more broadly 
> than just my basement ISP. While I wait around a bit for community feedback 
> on its use,  I've cleaned up the code a bit, so that it can serve as a 
> standalone compiled binary now:
> usage: vppcfg [-h] [-d] [-q] [-f] {check,dump,plan,apply} ...
> 
> positional arguments:
>   {check,dump,plan,apply}
>     check   check given YAML config for validity (no VPP)
>     dump    dump current running VPP configuration (VPP readonly)
>     plan    plan changes from current VPP dataplane to target config (VPP readonly)
>     apply   apply changes from current VPP dataplane to target config
> 
> optional arguments:
>   -h, --help    show this help message and exit
>   -d, --debug   enable debug logging, default False
>   -q, --quiet   be quiet (only warnings/errors), default False
>   -f, --force   force progress despite warnings, default False
> 
> Please see vppcfg  -h   for per-command arguments
> 
> groet,
> Pim
> 
> On Sun, Apr 3, 2022 at 8:46 AM Jerome Tollet (jtollet) wrote:
> Hi Pim,
> 
> Over the past few years, we had many discussions about how best can VPP be 
> configured by end users.
> 
> What is really nice about your proposal is that it's pragmatic and simple. 
> It's actually much simpler than Netconf/YANG (remember Honeycomb…) and it 
> probably covers many use cases.
> 
> I’ve not yet tried it but will certainly do it soon.
> 
> Thanks !
> 
> Jerome
> 
>  
> 
> From: vpp-dev@lists.fd.io on behalf of Pim van Pelt
> Date: Saturday, 2 April 2022 at 17:18
> To: vpp-dev <vpp-dev@lists.fd.io>
> Subject: [vpp-dev] Feedback on a tool: vppcfg
> 
> Hoi colleagues,
> 
>  
> 
> I know there exist several smaller and larger scale VPP configuration 
> harnesses out there, some more complex and feature complete than others. I 
> wanted to share my work on an approach based on a YAML configuration with 
> strict syntax and semantic validation, and a path planner that brings the 
> dataplane from any configuration state safely to any other configuration 
> state, as defined by these YAML files.
> 
>  
> 
> A bit of a storyline on the validator: 
> https://ipng.ch/s/articles/2022/03/27/vppcfg-1.html 
> 
> A bit of background on the DAG path planner: 
> https://ipng.ch/s/articles/2022/04/02/vppcfg-2.html 
> 
> Code with tests on https://github.com/pimvanpelt/vppcfg 
> 
>  
> 
> The config and planner supports interfaces, bondethernets, vxlan tunnels, 
> l2xc, bridgedomains and, quelle surprise, linux-cp configurations of all 
> sorts. If anybody feels like giving it a spin, I'd certainly appreciate 
> feedback and if you can manage to create two configuration states that the 
> planner cannot reconcile, I'd love to hear about those too.
> 
>  
> 
> For now, the path planner works by reading the API configuration state 
> exactly once (at startup), and then it figures out the CLI calls to print 
> without needing to consult VPP again. This is super useful as it’s a 
> non-intrusive way to inspect the changes before applying them, and it’s a 
> property I’d like to carry forward. However, I don’t necessarily think that 
> emitting the CLI statements is the best user experience, it’s more for the 
> purposes of analysis that they can be useful. What I really want to do is 
> emit API calls after the plan is created and reviewed/approved, directly 
> reprogramming the VPP dataplane. However, the VPP API set needed to do this 
> is not 100% baked yet. For example, I observed crashes when tinkering with 
> BVIs and Loopbacks (see my thread from last week, thanks for the response 
> Neale), and fixed a few obvious errors in the Linux CP API (gerrit) but there 
> are still a few more issues to work through before I can set the next step 
> with vppcfg.
> 
> If this tool proves to be useful to others, I'm happy to upstream it to 
> extras/ somewhere.
> 
>  
> 
> -- 
> 
> Pim 

Re: [vpp-dev] RDMA plugin in 21.10.1

2021-12-10 Thread Eyle Brinkhuis
Hi Ben,

Thanks. We managed to get it to work with the newer version:

vpp_21.10.1-2~g0a485f517~b13_amd64, 
vpp-plugin-core_21.10.1-2~g0a485f517~b13_amd64, 
python3-vpp-api_21.10.1-2~g0a485f517~b13_amd64, 
libvppinfra_21.10.1-2~g0a485f517~b13_amd64 (manually picked from package cloud).
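
For anyone else hitting this, a quick way to confirm the rdma plugin is present and actually loads (a sketch; the plugin directory is the one shown in the original message below):

ls /usr/lib/x86_64-linux-gnu/vpp_plugins/ | grep rdma
vppctl show plugins | grep -i rdma
vppctl show log | grep -i rdma        # any plugin load error shows up here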

Thanks for the quick reply!

Cheers,

Eyle

> On 10 Dec 2021, at 11:17, Benoit Ganne (bganne) via lists.fd.io wrote:
> 
> Hi Eyle,
> 
> There was a bug in the compilation of libibverb, which the rdma driver depends 
> upon; this was fixed with https://gerrit.fd.io/r/c/vpp/+/34515 but too late 
> for the official 21.10...
> My apologies.
> 
> Best
> ben
> 
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of Eyle
>> Brinkhuis
>> Sent: Friday, 10 December 2021 10:51
>> To: vpp-dev@lists.fd.io
>> Subject: [vpp-dev] RDMA plugin in 21.10.1
>> 
>> Hi all,
>> 
>> We just upgraded VPP to 21.10.1 from packages on a few test machines, but
>> are missing the RDMA plugin:
>> 
>> vpp# show plugins
>> Plugin path is: /usr/lib/x86_64-linux-
>> gnu/vpp_plugins:/usr/lib/vpp_plugins
>> 
>> 
>> Plugin   Version
>> Description
>>  1. ioam_plugin.so   21.10.1-release
>> Inbound Operations, Administration, and Maintenance (OAM)
>>  2. lldp_plugin.so   21.10.1-release
>> Link Layer Discovery Protocol (LLDP)
>>  3. mss_clamp_plugin.so  21.10.1-release
>> TCP MSS clamping plugin
>>  4. urpf_plugin.so   21.10.1-release
>> Unicast Reverse Path Forwarding (uRPF)
>>  5. tlspicotls_plugin.so 21.10.1-release
>> Transport Layer Security (TLS) Engine, Picotls Based
>>  6. l3xc_plugin.so   21.10.1-release
>> L3 Cross-Connect (L3XC)
>>  7. mdata_plugin.so  21.10.1-release
>> Buffer metadata change tracker.
>>  8. ping_plugin.so   21.10.1-release
>> Ping (ping)
>>  9. avf_plugin.so21.10.1-release
>> Intel Adaptive Virtual Function (AVF) Device Driver
>> 10. l2tp_plugin.so   21.10.1-release
>> Layer 2 Tunneling Protocol v3 (L2TP)
>> 11. pppoe_plugin.so  21.10.1-release
>> PPP over Ethernet (PPPoE)
>> 12. pnat_plugin.so   21.10.1-release
>> Policy 1:1 NAT
>> 13. crypto_native_plugin.so  21.10.1-release
>> Intel IA32 Software Crypto Engine
>> 14. srv6am_plugin.so 21.10.1-release
>> Masquerading Segment Routing for IPv6 (SRv6) Proxy
>> 15. l2e_plugin.so21.10.1-release
>> Layer 2 (L2) Emulation
>> 16. det44_plugin.so  21.10.1-release
>> Deterministic NAT (CGN)
>> 17. adl_plugin.so21.10.1-release
>> Allow/deny list plugin
>> 18. acl_plugin.so21.10.1-release
>> Access Control Lists (ACL)
>> 19. crypto_openssl_plugin.so 21.10.1-release
>> OpenSSL Crypto Engine
>> 20. dslite_plugin.so 21.10.1-release
>> Dual-Stack Lite
>> 21. tlsmbedtls_plugin.so 21.10.1-release
>> Transport Layer Security (TLS) Engine, Mbedtls Based
>> 22. ikev2_plugin.so  21.10.1-release
>> Internet Key Exchange (IKEv2) Protocol
>> 23. svs_plugin.so21.10.1-release
>> Source Virtual Routing and Fowarding (VRF) Select
>> 24. vrrp_plugin.so   21.10.1-release
>> VRRP v3 (RFC 5798)
>> 25. hs_apps_plugin.so21.10.1-release
>> Host Stack Applications
>> 26. nsim_plugin.so   21.10.1-release
>> Network Delay Simulator
>> 27. dns_plugin.so21.10.1-release
>> Simple DNS name resolver
>> 28. cnat_plugin.so   21.10.1-release
>> CNat Translate
>> 29. dhcp_plugin.so   21.10.1-release
>> Dynamic Host Configuration Protocol (DHCP)
>> 30. gbp_plugin.so21.10.1-release
>> Group Based Policy (GBP)
>> 31. igmp_plugin.so   21.10.1-release
>> Internet Group Management Protocol (IGMP)
>> 32. nat_plugin.so21.10.1-release
>> Network Address Translation (NAT)
>> 33. memif_plugin.so  21.10.1-release
>> Packet Memory Interface (memif) -- Experim

[vpp-dev] RDMA plugin in 21.10.1

2021-12-10 Thread Eyle Brinkhuis
Hi all,

We just upgraded VPP to 21.10.1 from packages on a few test machines, but are 
missing the RDMA plugin:

vpp# show plugins
 Plugin path is: /usr/lib/x86_64-linux-gnu/vpp_plugins:/usr/lib/vpp_plugins

 Plugin   Version  
Description
  1. ioam_plugin.so   21.10.1-release  
Inbound Operations, Administration, and Maintenance (OAM)
  2. lldp_plugin.so   21.10.1-release  
Link Layer Discovery Protocol (LLDP)
  3. mss_clamp_plugin.so  21.10.1-release  
TCP MSS clamping plugin
  4. urpf_plugin.so   21.10.1-release  
Unicast Reverse Path Forwarding (uRPF)
  5. tlspicotls_plugin.so 21.10.1-release  
Transport Layer Security (TLS) Engine, Picotls Based
  6. l3xc_plugin.so   21.10.1-release  
L3 Cross-Connect (L3XC)
  7. mdata_plugin.so  21.10.1-release  
Buffer metadata change tracker.
  8. ping_plugin.so   21.10.1-release  
Ping (ping)
  9. avf_plugin.so21.10.1-release  
Intel Adaptive Virtual Function (AVF) Device Driver
 10. l2tp_plugin.so   21.10.1-release  
Layer 2 Tunneling Protocol v3 (L2TP)
 11. pppoe_plugin.so  21.10.1-release  
PPP over Ethernet (PPPoE)
 12. pnat_plugin.so   21.10.1-release  
Policy 1:1 NAT
 13. crypto_native_plugin.so  21.10.1-release  
Intel IA32 Software Crypto Engine
 14. srv6am_plugin.so 21.10.1-release  
Masquerading Segment Routing for IPv6 (SRv6) Proxy
 15. l2e_plugin.so21.10.1-release  
Layer 2 (L2) Emulation
 16. det44_plugin.so  21.10.1-release  
Deterministic NAT (CGN)
 17. adl_plugin.so21.10.1-release  
Allow/deny list plugin
 18. acl_plugin.so21.10.1-release  
Access Control Lists (ACL)
 19. crypto_openssl_plugin.so 21.10.1-release  
OpenSSL Crypto Engine
 20. dslite_plugin.so 21.10.1-release  
Dual-Stack Lite
 21. tlsmbedtls_plugin.so 21.10.1-release  
Transport Layer Security (TLS) Engine, Mbedtls Based
 22. ikev2_plugin.so  21.10.1-release  
Internet Key Exchange (IKEv2) Protocol
 23. svs_plugin.so21.10.1-release  
Source Virtual Routing and Fowarding (VRF) Select
 24. vrrp_plugin.so   21.10.1-release  
VRRP v3 (RFC 5798)
 25. hs_apps_plugin.so21.10.1-release  
Host Stack Applications
 26. nsim_plugin.so   21.10.1-release  
Network Delay Simulator
 27. dns_plugin.so21.10.1-release  
Simple DNS name resolver
 28. cnat_plugin.so   21.10.1-release  
CNat Translate
 29. dhcp_plugin.so   21.10.1-release  
Dynamic Host Configuration Protocol (DHCP)
 30. gbp_plugin.so21.10.1-release  
Group Based Policy (GBP)
 31. igmp_plugin.so   21.10.1-release  
Internet Group Management Protocol (IGMP)
 32. nat_plugin.so21.10.1-release  
Network Address Translation (NAT)
 33. memif_plugin.so  21.10.1-release  
Packet Memory Interface (memif) -- Experimental
 34. crypto_ipsecmb_plugin.so 21.10.1-release  
Intel IPSEC Multi-buffer Crypto Engine
 35. nsh_plugin.so21.10.1-release  
Network Service Header (NSH)
 36. nat44_ei_plugin.so   21.10.1-release  
IPv4 Endpoint-Independent NAT (NAT44 EI)
 37. nat64_plugin.so  21.10.1-release  
NAT64
 38. lisp_plugin.so   21.10.1-release  
Locator ID Separation Protocol (LISP)
 39. abf_plugin.so21.10.1-release  
Access Control List (ACL) Based Forwarding
 40. ila_plugin.so21.10.1-release  
Identifier Locator Addressing (ILA) for IPv6
 41. srv6mobile_plugin.so 21.10.1-release  
SRv6 GTP Endpoint Functions
 42. tlsopenssl_plugin.so 21.10.1-release  

Re: [vpp-dev] Unexpected return from python3 api

2021-10-04 Thread Eyle Brinkhuis
Thanks Matthew,

That's a clear example of me needing to start the weekend early.

Regards,

Eyle

On 2 Oct 2021, at 17:34, Matthew Smith <mgsm...@netgate.com> wrote:


The value of retval from the 2nd reply can be looked up in src/vnet/api_errno.h:

_(VLAN_ALREADY_EXISTS, -56, "VLAN subif already exists")

The attempt to create a VLAN subif is failing because the one you're trying to 
create has already been created.


On Sat, Oct 2, 2021 at 3:23 AM Eyle Brinkhuis <eyle.brinkh...@surf.nl> wrote:
Hi,

While using a small piece of python code to create a vlan subinterface, we get 
some unexpected returns:

We run:
vpp.api.create_vlan_subif(sw_if_index=3, vlan_id=101);

The return for this varies, one time it is:

create_vlan_subif_reply(_0=121, context=4, retval=0, sw_if_index=5)

But it is not consistent, we get
create_vlan_subif_reply(_0=121, context=4, retval=-56, sw_if_index=4294967295)
Back as well, which is of no use to us. Anyone experienced the same?

We run vpp 21.06 (installed from package cloud)

Regards,

Eyle







[vpp-dev] Unexpected return from python3 api

2021-10-02 Thread Eyle Brinkhuis
Hi,

While using a small piece of python code to create a vlan subinterface, we get 
some unexpected returns:

We run:
vpp.api.create_vlan_subif(sw_if_index=3, vlan_id=101);

The return for this varies, one time it is:

create_vlan_subif_reply(_0=121, context=4, retval=0, sw_if_index=5)

But it is not consistent, we get 
create_vlan_subif_reply(_0=121, context=4, retval=-56, sw_if_index=4294967295)
Back as well, which is of no use to us. Anyone experienced the same?

We run vpp 21.06 (installed from package cloud)

Regards,

Eyle



Re: [vpp-dev] VPP crashes on vhost-user VM reboot

2021-04-14 Thread Eyle Brinkhuis
All,

Moving VPP to 21.01 seems to solve this issue for now.

Eyle

> On 9 Apr 2021, at 09:59, Eyle Brinkhuis  wrote:
> 
> Hi guys,
> 
> We are experiencing crashes left and right on VPP 20.05.1 that we installed 
> from packagecloud. VPP crashes when a VM with 3 interfaces (all three 
> connected to VPP on vhost-user) reboots or gets spawned. I was able to 
> capture a core dump, which is available here:
> https://surfdrive.surf.nl/files/index.php/s/qIR01agXQD4Jkha 
> <https://surfdrive.surf.nl/files/index.php/s/qIR01agXQD4Jkha>
> 
> Anyone want to take a look at this? I am planning on moving to 20.09 today, 
> and will see if that solves our problem but maybe it is something worth 
> looking at. I will report back with my experience on 20.09.
> 
> Regards,
> 
> Eyle
> 
> 
> 






[vpp-dev] VPP crashes on vhost-user VM reboot

2021-04-09 Thread Eyle Brinkhuis
Hi guys,

We are experiencing crashes left and right on VPP 20.05.1 that we installed 
from packagecloud. VPP crashes when a VM with 3 interfaces (all three connected 
to VPP on vhost-user) reboots or gets spawned. I was able to capture a core 
dump, which is available here:
https://surfdrive.surf.nl/files/index.php/s/qIR01agXQD4Jkha

Anyone want to take a look at this? I am planning on moving to 20.09 today, and 
will see if that solves our problem but maybe it is something worth looking at. 
I will report back with my experience on 20.09.

Regards,

Eyle




Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-10 Thread Eyle Brinkhuis
Ben, Steven,

Just tested it, VPP now stays up, which is a good thing. The VM eventually 
reports an error state, so that is fine. We know the limits now, how about 
raising them? :-)
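
For reference, staying within the current limit only means lowering the vhost-user queue count in the guest definition (a sketch; the full qemu command line is quoted further down this thread, and 8 is the ceiling mentioned below):

-netdev vhost-user,chardev=charnet0,queues=8,id=hostnet0
# the matching virtio-net-pci "vectors=" value likely needs lowering as well (assumption)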

Regards,

Eyle

On 9 Dec 2020, at 19:31, steven luong via lists.fd.io <sluong=cisco@lists.fd.io> wrote:

Right, it should not crash. With the patch, the VM just refuses to come up 
unless we raise the queue support.

Steven

On 12/9/20, 10:24 AM, "Benoit Ganne (bganne)" <bga...@cisco.com> wrote:

This argument in your qemu command line,
queues=16,
is over our current limit. We support up to 8. I can submit an improvement
patch. But I think it will be master only.

   Yes but we should not crash 
   I actually forgot some additional checks in my initial patch. I updated it 
https://gerrit.fd.io/r/c/vpp/+/30346
   Eyle, could you check if the crash still happens with queues=16?

   Best
   ben








Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread Eyle Brinkhuis
Hi Steven, This is the command line:

libvirt+ 1620511   1  0 17:19 ?00:00:00 /usr/bin/qemu-system-x86_64 
-name guest=instance-02be,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-96-instance-02be/master-key.aes
 -machine pc-i440fx-4.0,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 8192 
-overcommit mem-lock=off -smp 16,sockets=16,cores=1,threads=1 -object 
memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/96-instance-02be,share=yes,size=8589934592,host-nodes=0,policy=bind
 -numa node,nodeid=0,cpus=0-15,memdev=ram-node0 -uuid 
e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack 
Nova,version=20.3.0,serial=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,uuid=e2dcaeda-1b7c-4d4d-b860-b56d58cf1e86,family=Virtual
 Machine -no-user-config -nodefaults -chardev 
socket,id=charmonitor,fd=25,server,nowait -mon 
chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global 
kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object 
secret,id=virtio-disk0-secret0,data=6heG0DJExrHzsPjvdMMDZEgCRzMTVhEQNM1q+t/PeVI=,keyid=masterKey0,iv=q1A9BiAx0eW1MsIpYrU56A==,format=base64
 -drive 
file=rbd:cinder-ceph/volume-22c67810-cd55-4cc2-a830-1433488003eb:id=cinder-ceph:auth_supported=cephx\;none:mon_host=10.0.91.205\:6789\;10.0.91.206\:6789\;10.0.91.207\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=22c67810-cd55-4cc2-a830-1433488003eb
 -chardev 
socket,id=charnet0,path=/tmp/15873ca6-0488-4826-9f50-bab037271c93,server 
-netdev vhost-user,chardev=charnet0,queues=16,id=hostnet0 -device 
virtio-net-pci,mq=on,vectors=34,rx_queue_size=1024,tx_queue_size=1024,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:e4:df,bus=pci.0,addr=0x3
 -add-fd set=1,fd=28 -chardev 
pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device 
isa-serial,chardev=charserial0,id=serial0 -device 
usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.0.92.191:1 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
timestamp=on

It looks like it is only requesting 16 queues.

@Ben, I have put those in the same file share as well 
(https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb)

Regards,

Eyle

On 9 Dec 2020, at 18:00, Steven Luong (sluong) <slu...@cisco.com> wrote:

Eyle,

Can you also show me the qemu command line to bring up the VM? I think it is 
asking for more than 16 queues. VPP supports up to 16.

Steven

On 12/9/20, 8:22 AM, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) via lists.fd.io" <bganne=cisco@lists.fd.io> wrote:

   Hi Eyle, could you share the associated .deb files you built (esp. vpp, 
vpp-dbg, libvppinfra , vpp-plugin-core and vpp-plugin-dpdk)?
   I cannot exploit the core without those, as you rebuilt vpp.

   Best
   ben

-Original Message-
From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Sent: Wednesday, 9 December 2020 17:02
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi Ben,

I have built a new 20.05.1 version with this fix cherry-picked. It gets a
lot further now: VM is actually spawning and I can see the interface being
created inside VPP. However, a little while later, VPP crashes once again.
I have created a new core dump and api-post mortem, which can be found
here:

https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb

BTW, havent yet tried this with 20.09. Let me know if you want me to do
that first. Once again, thanks for your quick reply.

Regards,

Eyle


On 8 Dec 2020, at 19:14, Benoit Ganne (bganne) via lists.fd.io <bganne=cisco@lists.fd.io> wrote:

Hi Eyle,

Thanks for the core, I think I identified the issue.
Can you check if https://gerrit.fd.io/r/c/vpp/+/30346 fix the issue?
It should apply to 20.05 without conflicts.

Best
ben



-Original Message-
From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Sent: Wednesday, 2 December 2020 17:13
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: Vpp crashes with core dump vhost-user interface

Hi Ben, all,

I’m sorry, I forgot about adding a backtrace. I have now
posted it here:
https://surfdrive.surf.nl/files/ind

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-09 Thread Eyle Brinkhuis
Hi Ben,

I have built a new 20.05.1 version with this fix cherry-picked. It gets a lot 
further now: VM is actually spawning and I can see the interface being created 
inside VPP. However, a little while later, VPP crashes once again. I have 
created a new core dump and api-post mortem, which can be found here:

https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb

BTW, havent yet tried this with 20.09. Let me know if you want me to do that 
first. Once again, thanks for your quick reply.

Regards,

Eyle

On 8 Dec 2020, at 19:14, Benoit Ganne (bganne) via lists.fd.io <bganne=cisco@lists.fd.io> wrote:

Hi Eyle,

Thanks for the core, I think I identified the issue.
Can you check if https://gerrit.fd.io/r/c/vpp/+/30346 fix the issue? It should 
apply to 20.05 without conflicts.

Best
ben

-Original Message-
From: Eyle Brinkhuis <eyle.brinkh...@surf.nl>
Sent: Wednesday, 2 December 2020 17:13
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: Vpp crashes with core dump vhost-user interface

Hi Ben, all,

I’m sorry, I forgot about adding a backtrace. I have now posted it here:
https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb


I am not too familiar with the openstack integration, but now that
20.09 is out, can't you move to 20.09? At least in your lab to check
whether you still see this issue.

The last “guaranteed to work” version is 20.05.1 against networking-vpp. I
can still try though, in my testbed, but I’d like to keep to the known
working combinations as much as possible. Ill let you know if anything
comes up!

Thanks for the quick replies, both you and Steven.

Regards,

Eyle


On 2 Dec 2020, at 16:35, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Hi Eyle,

I am not too familiar with the openstack integration, but now that
20.09 is out, can't you move to 20.09? At least in your lab to check
whether you still see this issue.
Apart from that, we'd need to decipher the backtrace to be able to
help. The best should be to share a coredump as explained here:
https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#core-files

Best
ben



-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Eyle Brinkhuis
Sent: Wednesday, 2 December 2020 14:59
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi all,

In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-
vpp 20.05.1,
Openstack train) we are running into an issue. When we spawn a
VM (regular
ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP
backed interface,
our VPP instance dies:

Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]:
linux_epoll_file_update:120: epoll_ctl: Operation not
permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]:
linux_epoll_file_update:120: epoll_ctl: Operation not
permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]:
received signal SIGSEGV, PC 0x7fdf80653188, faulting address
0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]:
received signal
SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #0
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #1
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #2
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #3
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #4
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #5
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #5
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161]: #6
0x7fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
/usr/bin/vpp[1788161

Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread Eyle Brinkhuis
Hi Ben, all,

I’m sorry, I forgot about adding a backtrace. I have now posted it here:
https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb

I am not too familiar with the openstack integration, but now that 20.09 is 
out, can't you move to 20.09? At least in your lab to check whether you still 
see this issue.
The last “guaranteed to work” version is 20.05.1 against networking-vpp. I can 
still try though, in my testbed, but I’d like to keep to the known working 
combinations as much as possible. Ill let you know if anything comes up!

Thanks for the quick replies, both you and Steven.

Regards,

Eyle

On 2 Dec 2020, at 16:35, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Hi Eyle,

I am not too familiar with the openstack integration, but now that 20.09 is 
out, can't you move to 20.09? At least in your lab to check whether you still 
see this issue.
Apart from that, we'd need to decipher the backtrace to be able to help. The 
best should be to share a coredump as explained here: 
https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#core-files

Best
ben

-Original Message-
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Eyle Brinkhuis
Sent: Wednesday, 2 December 2020 14:59
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Vpp crashes with core dump vhost-user interface

Hi all,

In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-vpp 20.05.1,
Openstack train) we are running into an issue. When we spawn a VM (regular
ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP backed interface,
our VPP instance dies:

Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]:
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]:
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]:
received signal SIGSEGV, PC 0x7fdf80653188, faulting address
0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: received signal
SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #0
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #1
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #2
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #3
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #4
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #5
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #5
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #6
0x7fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #7
0x7fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #6
0x7fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #8
0x7fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #7
0x7fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #8
0x7fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Main process
exited, code=dumped, status=6/ABRT
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Failed with
result 'core-dump'.


While we are able to run 8 core VMs, we’d like to be able to create
beefier. VPP restarts, but never makes it to create the vhost-user
interface.. Anyone ran into the same issue?

Regards,

Eyle






[vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-02 Thread Eyle Brinkhuis
Hi all,

In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-vpp 20.05.1, 
Openstack train) we are running into an issue. When we spawn a VM (regular 
ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP backed interface, our 
VPP instance dies:

Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: 
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: 
linux_epoll_file_update:120: epoll_ctl: Operation not permitted (errno 1)
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: received 
signal SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: received signal 
SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #0  
0x7fdf806556d5 0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0  0x7fdf806556d5 
0x7fdf806556d5
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1  0x7fdf7feab8a0 
0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #1  
0x7fdf7feab8a0 0x7fdf7feab8a0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #2  
0x7fdf80653188 0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2  0x7fdf80653188 
0x7fdf80653188
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3  0x7fdf81f29e52 
0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #3  
0x7fdf81f29e52 0x7fdf81f29e52
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #4  
0x7fdf80653b79 0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4  0x7fdf80653b79 
0x7fdf80653b79
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #5  
0x7fdf805f1bdb 0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #5  0x7fdf805f1bdb 
0x7fdf805f1bdb
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #6  
0x7fdf805f18c0 0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #7  
0x7fdf80655076 0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #6  0x7fdf805f18c0 
0x7fdf805f18c0
Dec 02 13:39:39 compute03-asd002a vpp[1788161]: /usr/bin/vpp[1788161]: #8  
0x7fdf7fa3b3f4 0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #7  0x7fdf80655076 
0x7fdf80655076
Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #8  0x7fdf7fa3b3f4 
0x7fdf7fa3b3f4
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Main process exited, 
code=dumped, status=6/ABRT
Dec 02 13:39:39 compute03-asd002a systemd[1]: vpp.service: Failed with result 
'core-dump'.


While we are able to run 8 core VMs, we’d like to be able to create beefier. 
VPP restarts, but never makes it to create the vhost-user interface.. Anyone 
ran into the same issue?

Regards,

Eyle





[vpp-dev] Devstack & vpp

2019-12-02 Thread Eyle Brinkhuis
Hi Guys,

Don’t know if anyone has already experienced this, but it seems that something 
goes wrong with deploying VPP and networking-vpp in a clean devstack setup:
2019-12-02 17:25:34.194 | +functions-common:service_check:1622   for 
service in ${ENABLED_SERVICES//,/ }
2019-12-02 17:25:34.198 | +functions-common:service_check:1624   sudo 
systemctl is-enabled 
devstack@vpp-agent.service
2019-12-02 17:25:34.208 | enabled
2019-12-02 17:25:34.212 | +functions-common:service_check:1628   sudo 
systemctl status devstack@vpp-agent.service 
--no-pager
2019-12-02 17:25:34.222 | ● 
devstack@vpp-agent.service - Devstack 
devstack@vpp-agent.service
2019-12-02 17:25:34.222 |Loaded: loaded 
(/etc/systemd/system/devstack@vpp-agent.service;
 enabled; vendor preset: enabled)
2019-12-02 17:25:34.222 |Active: failed (Result: exit-code) since Mon 
2019-12-02 18:24:14 CET; 1min 19s ago
2019-12-02 17:25:34.222 |  Main PID: 32189 (code=exited, status=1/FAILURE)
2019-12-02 17:25:34.222 |
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: Traceback 
(most recent call last):
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File 
"/usr/local/bin/vpp-agent", line 6, in 
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: from 
networking_vpp.agent.server import main
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File 
"/opt/stack/networking-vpp/networking_vpp/agent/server.py", line 49, in 
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: from 
networking_vpp.agent import vpp
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]:   File 
"/opt/stack/networking-vpp/networking_vpp/agent/vpp.py", line 33, in 
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: import 
vpp_papi  # type: ignore
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 vpp-agent[32189]: 
ModuleNotFoundError: No module named 'vpp_papi'
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 systemd[1]: 
devstack@vpp-agent.service: Main process 
exited, code=exited, status=1/FAILURE
2019-12-02 17:25:34.222 | Dec 02 18:24:14 node2 systemd[1]: 
devstack@vpp-agent.service: Failed with 
result 'exit-code'.


This is on Ubuntu 18.04.3, while it pulls VPP 19.08.1 and the networking-vpp 
branch for 19.08.1.

Anyone here for a quick fix?
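
One thing worth checking (an assumption on my side, not a confirmed fix) is whether the VPP Python bindings are importable from the environment the agent runs in, for example:

sudo apt-get install python3-vpp-api                     # ships alongside the VPP packages
python3 -c "import vpp_papi; print(vpp_papi.__file__)"   # should print the module path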

Cheers,

Eyle



Re: [vpp-dev] ACL drops while pinging another interface

2019-09-06 Thread Eyle Brinkhuis
Hi Andrew,

Awesome, thanks so much for your explanation! I have fixed the problem. 
Apparently these rules come from OpenStack, which applies them on the ports, 
so adding the right address ranges to the port allows the traffic, and it now works.

I thought I had it all arranged by disabling port security in the OpenStack 
environment, but apparently port security is only truly disabled when the 
instance has no security group and no allowed-address pair assigned. I guess 
we'll have to sort out security somewhere else then.
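
For reference, a sketch of the port-side change described above (the port ID and address range are placeholders; exact flags depend on the OpenStack client version):

# allow the extra source addresses on the Neutron port ...
openstack port set --allowed-address ip-address=145.144.1.0/24 <port-uuid>
# ... or fully disable port security on it (security groups must be removed first):
openstack port set --no-security-group --disable-port-security <port-uuid>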

Thanks so much once again!

Regards,

Eyle

On 06/09/2019, 11:18, "Andrew   Yourtchenko"  wrote:

Ok so I will explain the logic a bit more and maybe this will solve the 
puzzle.

Assumption: that the trace in your first mail and the show output in
the last one correspond to the same state of the VPP and the stuff
attached to it (sw_if_index, etc.)

The purpose in life of the MACIP ACLs is to give a stateless way to
enforce the correspondence of IP and MAC, as well as to ensure the
hosts on a given VPP interface are using the MAC address they are
supposed to use.

the ingress packet in the frame 3 (the dropped one) in the trace
received on VirtualEthernet0/0/3 is:
  l2-input: sw_if_index 9 dst fa:16:3e:93:0c:50 src fa:16:3e:26:3e:0e

so we need to have a MACIP ACL that tells that
fa:16:3e:26:3e:0e is okay to hang off that interface.

We can indeed find that it is the macip acl #2:

MACIP acl_index: 2, count: 2 (true len 2) tag {} is free pool slot: 0

  ip4_table_index 17, ip6_table_index 17, l2_table_index 17

  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1

rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:26:3e:0e mask
ff:ff:ff:ff:ff:ff

rule 1: ipv4 action 1 ip 145.144.1.78/32 mac fa:16:3e:26:3e:0e
mask ff:ff:ff:ff:ff:ff

  applied on sw_if_index(s): 9

so, it has the good MAC address and it is applied, but this ACL says
"I can either allow the source IP of 0.0.0.0 or a source IP of
145.144.1.78 incoming on VirtualEthernet0/0/3 with the source of
fa:16:3e:26:3e:0e.

So from the trace, the packet is:

   lc_index 0 l3 ip4 145.144.1.53 -> 145.144.1.84 l4
lsb_of_sw_if_index 9 proto 1 l4_is_input 1 l4_slow_path 1 l4_flags
0x03 port 0 -> 0 tcp flags (invalid) 00 rsvd 0

The source of 145.144.1.53 is *not*  the 145.144.1.78 which MACIP ACL
permits to appear on that interface, so from this I can conclude it
all WorksAsConfigured(tm).

I think by default the OpenStack assumes the endpoint connected to VPP
is a host, which in your case it isn't. So you would need to relax the
whichever policies govern that. I am not sure where, since I deal with
the VPP side, all I see is the API calls :-) - maybe Naveen might give
a pointer or two here...

This all is fairly old code, so I think tweaking the config will work.

But, for curiosity/if my assumption in the beginning is wrong, or in
case some curious reader finds this mail later, I will go a bit
further and tell what the next steps would have been if we *did* find
that the MACIP ACL was configured correctly to permit the source
IP+MAC that we saw in frame 3 of the trace. For that we need to know
how MACIP ACL functions under the hood.

ACL plugin reuses the pre-existing core feature of VPP called "IPACL
with classifier tables".
A bit confusing, but that's why you see it in the trace.
Classifier tables are a pretty cool thing - you can match up to 5*u32
worth of contiguous bits with an arbitrary bitmask pretty much
anywhere in the packet, with a fixed offset. But the drawback is that
all the possible values matched need to be the *same* place and the
same bitmask. So to deal with that, the classifier tables can be
chained - and the lookup made sequentially.

If you do "show classify tables verbose" you will see the tables in
all the glory, including the contents. There one can verify at a lower
level that what the ACL plugin programs does make sense.

hope this helps!

--a
    

    


On 9/6/19, Eyle Brinkhuis  wrote:
> Okay.. so when I pull up the macip acl’s:
>
> vpp# show acl-plugin macip acl
> MACIP acl_index: 0, count: 2 (true len 2) tag {} is free pool slot: 0
>   ip4_table_index 5, ip6_table_index 5, l2_table_index 5
>   out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
> rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:79:87:05 mask
> ff:ff:ff:ff:ff:ff
> rule 1: ipv4 action 1 ip 145.144.1.5/32 mac fa:16:3e:79:87:05 mask
> ff:ff:ff:ff:ff:ff
>   applied on sw_if_index(s):
 

Re: [vpp-dev] ACL drops while pinging another interface

2019-09-06 Thread Eyle Brinkhuis
Okay.. so when I pull up the macip acl’s:

vpp# show acl-plugin macip acl
MACIP acl_index: 0, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 5, ip6_table_index 5, l2_table_index 5
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:79:87:05 mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.5/32 mac fa:16:3e:79:87:05 mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s):
MACIP acl_index: 1, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 11, ip6_table_index 11, l2_table_index 11
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:79:87:05 mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.5/32 mac fa:16:3e:79:87:05 mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s): 4
MACIP acl_index: 2, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 17, ip6_table_index 17, l2_table_index 17
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:26:3e:0e mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.78/32 mac fa:16:3e:26:3e:0e mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s): 9
MACIP acl_index: 3, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 23, ip6_table_index 23, l2_table_index 23
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:10:04:3e mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.29/32 mac fa:16:3e:10:04:3e mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s): 5
MACIP acl_index: 4, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 29, ip6_table_index 29, l2_table_index 29
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:7c:96:d0 mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.53/32 mac fa:16:3e:7c:96:d0 mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s): 7
MACIP acl_index: 6, count: 2 (true len 2) tag {} is free pool slot: 0
  ip4_table_index 41, ip6_table_index 41, l2_table_index 41
  out_ip4_table_index -1, out_ip6_table_index -1, out_l2_table_index -1
rule 0: ipv4 action 1 ip 0.0.0.0/32 mac fa:16:3e:93:0c:50 mask 
ff:ff:ff:ff:ff:ff
rule 1: ipv4 action 1 ip 145.144.1.84/32 mac fa:16:3e:93:0c:50 mask 
ff:ff:ff:ff:ff:ff
  applied on sw_if_index(s): 10



But then compare that to the show hardware:
vpp# sh hard
  NameIdx   Link  Hardware
VirtualEthernet0/0/0   3 up   VirtualEthernet0/0/0
  Link speed: unknown
  Ethernet address fa:16:3c:c9:a8:50
VirtualEthernet0/0/1   4 up   VirtualEthernet0/0/1
  Link speed: unknown
  Ethernet address fa:16:3c:70:15:b3
VirtualEthernet0/0/2   5 up   VirtualEthernet0/0/2
  Link speed: unknown
  Ethernet address fa:16:3c:05:66:7c
VirtualEthernet0/0/3   6 up   VirtualEthernet0/0/3
  Link speed: unknown
  Ethernet address fa:16:3c:f0:21:0a
VirtualEthernet0/0/4   7 up   VirtualEthernet0/0/4
  Link speed: unknown
  Ethernet address fa:16:3c:0f:9d:5d
local0 0down  local0
  Link speed: unknown
  local
rdma0  1 up   rdma0
  Link speed: 40 Gbps
  Ethernet address 02:fe:99:32:82:4f
  flags: admin-up promiscuous
rdma1  2 up   rdma1
  Link speed: 40 Gbps
  Ethernet address 02:fe:27:ea:09:82
  flags: admin-up

It looks like there doesn’t even exist an acl for VirtualEthernet0/0/3? Is that 
why it is dropped?

Eyle

From: Andrew  Yourtchenko 
Date: Thursday, 5 September 2019 at 19:20
To: "Naveen Joy (najoy)" 
Cc: Eyle Brinkhuis , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] ACL drops while pinging another interface

It hits the session, so it does pass the L3 acl. Just before ...
--a

On 5 Sep 2019, at 18:52, Naveen Joy (najoy) <na...@cisco.com> wrote:
From the trace, it appears like ICMP Echo reply is not permitted by the 
security-group applied on neutron’s port corresponding to VirtualEthernet0/0/3.
This could be causing the ICMP reply packet from the firewall to drop.

   lc_index 0 l3 ip4 145.144.1.53 -> 145.144.1.84 l4 lsb_of_sw_if_index 9 proto 
1 l4_is_input 1 l4_slow_path 1 l4_flags 0x03 port 0 -> 0 tcp flags (invalid) 00 
rsvd 0

00:53:47:316359: l2-input-feat-arc-end

  IN-FEAT-ARC: head 0 feature_bitmap 100525 ethertype 0 sw_if_index -1, 
next_index 17

00:53:47:316360: l2-input-acl

  INACL: sw_if_index 9, next_index 0, table 12, offset -1

00:53:47:316361: error-drop

  rx:VirtualEthernet0/0/3

-Naveen

From: vpp-dev@lists.fd.io on behalf of Andrew Yourtchenko <ayour...@gmail.com>
Date: Thursday, September 5, 2019 at 7:20 AM
To: Eyle Brinkhuis <eyle.brinkh...@surf

[vpp-dev] VPP on POWER9

2019-06-27 Thread Eyle Brinkhuis
Hi,

Just out of curiosity: has anyone here tried  VPP on POWER9? I’m curious..

Regards,

Eyle



Re: [vpp-dev] VPP & Mellanox

2019-05-16 Thread Eyle Brinkhuis
Sure! Just done that, see https://jira.fd.io/projects/VPP/issues/VPP-1679

Regards,

Eyle

On 16 May 2019, at 09:28, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Hi Eyle, could you create a ticket on Jira, similar to 
https://jira.fd.io/browse/VPP-1640 ?

Thx
Ben

-Original Message-
From: Eyle Brinkhuis <eyle.brinkh...@surfnet.nl>
Sent: Thursday, 16 May 2019 08:27
To: Benoit Ganne (bganne) <bga...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP & Mellanox

Yes, I installed vpp-selinux. Let me know if I can help with anything
regarding these problems, these are test-machines after all.

Regards,

Eyle

On 15 May 2019, at 13:58, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

I wonder if that is the problem with DPDK for the MLX cards as well.
Let me check on another node.
Well.. that’s that..

Ok good. No surprise: they both are based on rdma-core/libibverb. Did
you install vpp-selinux? If so maybe we are missing some rules in there,
but I'll have to leave that to more knowledgeable people...

ben




Re: [vpp-dev] VPP & Mellanox

2019-05-16 Thread Eyle Brinkhuis
Yes, I installed vpp-selinux. Let me know if I can help with anything regarding 
these problems, these are test-machines after all.

Regards,

Eyle

> On 15 May 2019, at 13:58, Benoit Ganne (bganne)  wrote:
> 
>>> I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
>>> check on another node.
>> Well.. that’s that..
> 
> Ok good. No surprise: they both are based on rdma-core/libibverb. Did you 
> install vpp-selinux? If so maybe we are missing some rules in there, but 
> I'll have to leave that to more knowledgeable people...
> 
> ben



Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Well.. that’s that..
vpp# show int
  Name   IdxState  MTU (L3/IP4/IP6/MPLS) 
Counter  Count
FiftySixGigabitEthernet1/0/0  1 down 9000/0/0/0
FiftySixGigabitEthernet1/0/1  2 down 9000/0/0/0
local00 down  0/0/0/0

Regards,

Eyle

> On 15 May 2019, at 11:54, Eyle Brinkhuis  wrote:
> 
> Hi Ben,
> 
> There we go..
> vpp# create int rdma host-if enp1s0f1 name rdma-0
> vpp# sh int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> local0                            0     down   0/0/0/0
> rdma-0                            1     down   9000/0/0/0
> vpp#
> 
> I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
> check on another node.
> 
> Cheers,
> 
> Eyle
> 
>> On 15 May 2019, at 11:52, Benoit Ganne (bganne) <bga...@cisco.com> wrote:
>> 
>>> Sure, by any means: https://pastebin.com/w6PAsUzN is the VPP log.
>>> See https://pastebin.com/uqT6C9Td for the DMESG output.
>> 
>> Nothing stands out from a quick glance :/
>> Just to make sure, could you disable selinux and retry?
>> ~# setenforce 0
>> 
>> Thx
>> ben
> 
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#13040): https://lists.fd.io/g/vpp-dev/message/13040
> Mute This Topic: https://lists.fd.io/mt/31576338/1681911
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [eyle.brinkh...@surfnet.nl]
> -=-=-=-=-=-=-=-=-=-=-=-



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13042): https://lists.fd.io/g/vpp-dev/message/13042
Mute This Topic: https://lists.fd.io/mt/31576338/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

There we go..
vpp# create int rdma host-if enp1s0f1 name rdma-0
vpp# sh int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
local0                            0     down   0/0/0/0
rdma-0                            1     down   9000/0/0/0
vpp#

I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
check on another node.

Cheers,

Eyle

On 15 May 2019, at 11:52, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Sure, by any means: https://pastebin.com/w6PAsUzN is the VPP log
See https://pastebin.com/uqT6C9Td for the DMESG output.

Nothing stands out from a quick glance :/
Just to make sure, could you disable selinux and retry?
~# setenforce 0

Thx
ben

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13040): https://lists.fd.io/g/vpp-dev/message/13040
Mute This Topic: https://lists.fd.io/mt/31576338/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

Sure, by any means: https://pastebin.com/w6PAsUzN is the VPP log.
See https://pastebin.com/uqT6C9Td for the DMESG output.

Thanks again!

Regards,

Eyle

> On 15 May 2019, at 11:12, Benoit Ganne (bganne)  wrote:
> 
> Hi Eyle,
> 
>> I guess you are looking for this part:
> [...]
>> If necessary, I can provide the full log.
> 
> Yes please, looks like I am missing the errno = 13 part in particular. Also, 
> could you share dmesg output too? As it is an issue between userspace and 
> kernelspace, kernel logs will help.
> The best would be to share the whole strace & dmesg output (eg. through 
> pastebin or equivalent).
> 
> Thx,
> ben
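
(For completeness, the kind of capture being asked for can be produced along these lines; the output paths are purely illustrative:

~# strace -f -o /tmp/vpp-strace.txt /usr/bin/vpp -c /etc/vpp/startup.conf
~# dmesg -T > /tmp/dmesg.txt

-f makes strace follow the threads VPP spawns, and -o keeps the trace out of the console so the whole file can be pasted.)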



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13038): https://lists.fd.io/g/vpp-dev/message/13038
Mute This Topic: https://lists.fd.io/mt/31576338/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

I guess you are looking for this part:

open("/home/centos/rdma.vpp", O_RDONLY) = 8
fstat(8, {st_mode=S_IFREG|0664, st_size=45, ...}) = 0
read(8, "create int rdma host-if enp1s0f1"..., 4096) = 45
readlink("/sys/class/net/enp1s0f1/device/driver/module", 
"../../../../module/mlx5_core", 63) = 28
readlink("/sys/class/net/enp1s0f1/device", "../../../:01:00.1", 63) = 21
getuid()= 0
geteuid()   = 0
open("/sys/class/infiniband_verbs/abi_version", O_RDONLY|O_CLOEXEC) = 9
read(9, "6\n", 8)   = 2
close(9)= 0
open("/sys/class/infiniband_verbs/abi_version", O_RDONLY|O_CLOEXEC) = 9
read(9, "6\n", 8)   = 2
close(9)= 0
geteuid()   = 0
openat(AT_FDCWD, "/sys/class/infiniband_verbs", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 9
getdents(9, /* 4 entries */, 32768) = 112
stat("/sys/class/infiniband_verbs/abi_version", {st_mode=S_IFREG|0444, 
st_size=4096, ...}) = 0
stat("/sys/class/infiniband_verbs/uverbs1", {st_mode=S_IFDIR|0755, st_size=0, 
...}) = 0
open("/sys/class/infiniband_verbs/uverbs1/ibdev", O_RDONLY|O_CLOEXEC) = 10
read(10, "mlx5_1\n", 64)= 7
close(10)   = 0
stat("/sys/class/infiniband/mlx5_1", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
stat("/dev/infiniband/uverbs1", {st_mode=S_IFCHR|0777, st_rdev=makedev(231, 
193), ...}) = 0
open("/sys/class/infiniband_verbs/uverbs1/abi_version", O_RDONLY|O_CLOEXEC) = 10
read(10, "1\n", 8)  = 2
close(10)   = 0
open("/sys/class/infiniband_verbs/uverbs1/device/modalias", O_RDONLY|O_CLOEXEC) 
= 10
read(10, "pci:v15B3d1013sv15B3"..., 512) = 54
close(10)   = 0
getdents(9, /* 0 entries */, 32768) = 0
close(9)= 0
open("/sys/class/infiniband/mlx5_1/node_type", O_RDONLY|O_CLOEXEC) = 9
read(9, "1: CA\n", 16)  = 6
close(9)= 0
readlink("/sys/class/infiniband_verbs/uverbs1/device", "../../../:01:00.1", 
63) = 21
open("/dev/infiniband/uverbs1", O_RDWR|O_CLOEXEC) = 9
mmap(NULL, 204800, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f4de4935000
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149b240) = -1 
ENOTTY (Inappropriate ioctl for device)
uname({sysname="Linux", 
nodename="node4.nfv.surfnet.nl", ...}) = 0
write(9, 
"\0\0\0\0\f\0\24\0p\263I\241M\177\0\0\20\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0"..., 48) 
= 48
brk(NULL)   = 0x18b7000
brk(0x18d8000)  = 0x18d8000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0) = 0x7f4de4a81000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x1000) = 0x7f4de4a8
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x2000) = 0x7f4de4a7f000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x3000) = 0x7f4de4a7e000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x4000) = 0x7f4de4a7d000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x5000) = 0x7f4de4a7c000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x6000) = 0x7f4de4a7b000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x7000) = 0x7f4de4a7a000
mmap(NULL, 4096, PROT_READ, MAP_SHARED, 9, 0x50) = 0x7f4de4a79000
mmap(NULL, 4096, PROT_READ, MAP_SHARED, 9, 0x70) = 0x7f4de4a78000
open("/proc/cpuinfo", O_RDONLY) = 11
fstat(11, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f4de4a77000
read(11, "processor\t: 0\nvendor_id\t: Genuin"..., 1024) = 1024
read(11, "hwp_epp spec_ctrl intel_stibp fl"..., 1024) = 1024
read(11, "sbase tsc_adjust bmi1 hle avx2 s"..., 1024) = 1024
read(11, " x2apic movbe popcnt tsc_deadlin"..., 1024) = 1024
read(11, "n pebs bts rep_good nopl xtopolo"..., 1024) = 1024
read(11, " pse tsc msr pae mce cx8 apic se"..., 1024) = 1024
read(11, "KB\nphysical id\t: 0\nsiblings\t: 8\n"..., 1024) = 1024
read(11, "d\t: GenuineIntel\ncpu family\t: 6\n"..., 1024) = 1024
read(11, "l_stibp flush_l1d\nbogomips\t: 720"..., 1024) = 1024
read(11, "hle avx2 smep bmi2 erms invpcid "..., 1024) = 312
read(11, "", 1024)  = 0
close(11)   = 0
munmap(0x7f4de4a77000, 4096)= 0
write(9, 
"\1\0\0\200\1\0&\0@\241I\241M\177\0\0\0\0\r\0\0\0\0\0\0\0\0\0\0\0\0\0", 32) = 32
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149a070) = -1 
ENOTTY (Inappropriate ioctl for device)
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149b260) = -1 
ENOTTY (Inappropriate ioctl for device)
write(9, "\2\0\0\0\6\0\n\0\340\261I\241M\177\0\0\1\0\0\0\0\0\0\0", 24) = 24
write(9, "\3\0\0\0\4\0\2\0\260\311I\241M\177\0\0", 16) = 16
write(9, 
"\22\0\0\0\20\0\4\0`\310I\241M\177\0\0\340\365\210\1\0\0\0\0\377\3\0\0\0\0\0\0"...,
 64) = 64
write(9, 

Re: [vpp-dev] VPP & Mellanox

2019-05-14 Thread Eyle Brinkhuis
Hi all,

Sorry for the late reply.

With testpmd the interfaces are usable. I have tried using the new RDMA driver, 
but run into permission issues there..

vpp# create int rdma host-if enp1s0f1 name rdma-0
create interface rdma: no RDMA devices available, errno = 13. Is the ib_uverbs 
module loaded?: Permission denied

ib_uverbs is loaded..
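
(For reference, errno 13 is EACCES, so the open() of the uverbs device node is being refused even though the module is loaded. A quick way to narrow it down, assuming a stock CentOS box with auditd running and the default /dev/infiniband layout:

~# ls -l /dev/infiniband/uverbs*              # ownership/permissions of the uverbs character devices
~# getenforce                                 # is SELinux enforcing?
~# ausearch -m avc -ts recent | grep uverbs   # any AVC denials against the uverbs nodes?
~# setenforce 0                               # temporarily permissive, to confirm SELinux is the culprit

If the error goes away in permissive mode, the vpp-selinux policy is missing a rule rather than the RDMA driver being broken.)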

Thanks,

Eyle

> On 10 May 2019, at 17:18, Benoit Ganne (bganne)  wrote:
> 
>> Current versions of Mellanox DPDK do not require OFED.
>> With a recent DPDK > 18.05 and recent kernel all you need is the rdma core
>> library.
> 
> Sure but VPP 18.04 was mentioned (based on DPDK 18.02 AFAIK) among the 
> versions, plus I am unsure about CentOS/RHEL rdma-core version, so w/o more 
> info...
> Anyway, my main point is to 1st check if testpmd is working.
> 
> ben



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13025): https://lists.fd.io/g/vpp-dev/message/13025
Mute This Topic: https://lists.fd.io/mt/31576338/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] VPP & Mellanox

2019-05-10 Thread Eyle Brinkhuis
Hi guys,

I’m having varied results with compiling VPP for use with Mellanox ConnectX 5 
drivers.

I have one Ubuntu 18.04 box where VPP 18.10 compiled without problems and where a 
Mellanox ConnectX-5 card runs fine. However:

I have several machines fitted with ConnectX-4 cards that I want to run VPP on in 
a CentOS environment. VPP with the Mellanox ConnectX-4/5 driver (mlx5) compiles 
fine, but the NICs are not usable in VPP. The cards are, however, visible when I 
run “show pci”:

vpp# show pci
Address    Sock VID:PID    Link Speed   Driver      Product Name                 Vital Product Data
:01:00.0        15b3:1013  8.0 GT/s x8  mlx5_core   CX414A - ConnectX-4 QSFP28   PN: MCX414A-BCAT
                                                                                 EC: A7
                                                                                 SN: MT1622X00459
                                                                                 V0: 0x 50 43 49 65 47 65 6e 33 ...
                                                                                 RV: 0x 40
:01:00.1        15b3:1013  8.0 GT/s x8  mlx5_core   CX414A - ConnectX-4 QSFP28   PN: MCX414A-BCAT
                                                                                 EC: A7
This happens whichever guide I follow. The working environment has been set 
up using http://www.jimmdenton.com/vpp-1810-mellanox/

The non-working setups follow the same guide, or even the guides posted 
on the Mellanox forum: 
https://community.mellanox.com/s/article/How-to-Build-VPP-FD-IO-18-07-18-10.

These problems occur on CentOS 7 and RHEL 7.6 with VPP 18.04, 18.10 and 19.01.
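
(For context, the guides above boil down to building VPP's bundled DPDK with the Mellanox PMD enabled; a rough sketch of the CentOS steps is below. The DPDK_MLX5_PMD knob and package names are from the 18.x-era build system, so double-check them against the release you are on:

~# yum install -y rdma-core-devel libibverbs-devel
~# git clone https://gerrit.fd.io/r/vpp && cd vpp
~# make install-dep
~# make dpdk-install-dev DPDK_MLX5_PMD=y
~# make build-release
)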

The debug logging shows nothing except for this:
/usr/bin/vpp[17639]: svm_map_region:633: segment chown [ok if client starts 
first]: Operation not permitted (errno 1)


Has anyone had the same problem, or can anyone share a solution? I’ve found a few 
people online who hit the same issue but did not find a solution.
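
(As an aside, that svm_map_region chown warning concerns the ownership of the shared-memory API segment rather than the NIC; the "[ok if client starts first]" part suggests it is often harmless. A minimal startup.conf fragment that sets the group explicitly, assuming the 'vpp' group the packages create, would be:

unix {
  gid vpp
}
api-segment {
  gid vpp
}
)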

Regards,

Eyle

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12978): https://lists.fd.io/g/vpp-dev/message/12978
Mute This Topic: https://lists.fd.io/mt/31576338/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Support for non TCP traffic

2019-03-18 Thread Eyle Brinkhuis
Hi all,

I couldn’t find an answer on the wiki for this case:

We are looking into building a VM platform (on OpenStack) with VPP. For one of 
our use cases we need to provide interconnectivity between VMs for an HA 
setup (in this case FortiGate VMs in an HA firewall setup). These firewalls make 
use of non-TCP packets with ethertypes 0x8890, 0x8891 and 0x8993. Can VPP work 
with these? If not, what would be the solution here?
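
(VPP's L2 forwarding does not switch on the ethertype, so plain bridging or a cross-connect between the VM-facing interfaces should carry such frames; a minimal sketch, with interface names purely illustrative:

vpp# set interface l2 bridge GigabitEthernet0/8/0 1
vpp# set interface l2 bridge GigabitEthernet0/9/0 1

or, for a strictly point-to-point case:

vpp# set interface l2 xconnect GigabitEthernet0/8/0 GigabitEthernet0/9/0
vpp# set interface l2 xconnect GigabitEthernet0/9/0 GigabitEthernet0/8/0
)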

Thanks,

Regards,

Eyle



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12568): https://lists.fd.io/g/vpp-dev/message/12568
Mute This Topic: https://lists.fd.io/mt/30471946/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Openstack networking-vpp

2019-02-14 Thread Eyle Brinkhuis
Hi all,

Sorry if this is not the right place to ask, but I’m interested in messing 
around with the OpenStack networking-vpp plugin. Is there any documentation 
other than the git repo available?

Thanks!

Regards,

Eyle Brinkhuis



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12253): https://lists.fd.io/g/vpp-dev/message/12253
Mute This Topic: https://lists.fd.io/mt/29837743/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-