Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)

Can you try this one:

https://gerrit.fd.io/r/#/c/6614/


It should fix the PF case.
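For anyone who wants to try the change locally, it can be fetched straight from Gerrit and built. A rough sketch, assuming patchset 1 of change 6614 and a stock VPP tree (adjust the patchset number to whatever Gerrit currently shows):

git clone https://gerrit.fd.io/r/vpp && cd vpp
# Gerrit review refs follow refs/changes/<last two digits>/<change number>/<patchset>
git fetch https://gerrit.fd.io/r/vpp refs/changes/14/6614/1 && git checkout FETCH_HEAD
make install-dep    # first build only
make build-release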


Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Mircea Orban
It would be the same output because it’s the same server:


-  0000:0b:00.0-3 are the four 10G PFs
-  0000:0b:02.0 to .4 are the 5 VFs for 0000:0b:00.0
-  0000:0b:06.0 to .4 are the 5 VFs for 0000:0b:00.1

Thanks,
Mircea
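For reference, a layout like that is what you get by creating five VFs on each of the first two PFs and binding the VFs to vfio-pci. A minimal sketch, assuming the PFs stay on the kernel i40e driver and dpdk-devbind.py is on the PATH (the exact commands used on this server are not shown in the thread):

modprobe vfio-pci
# create 5 VFs on each of the first two PFs
echo 5 > /sys/bus/pci/devices/0000:0b:00.0/sriov_numvfs
echo 5 > /sys/bus/pci/devices/0000:0b:00.1/sriov_numvfs
# hand the resulting VFs to vfio-pci so DPDK/VPP can claim them
dpdk-devbind.py --bind=vfio-pci 0000:0b:02.0 0000:0b:02.1 0000:0b:02.2 0000:0b:02.3 0000:0b:02.4
dpdk-devbind.py --bind=vfio-pci 0000:0b:06.0 0000:0b:06.1 0000:0b:06.2 0000:0b:06.3 0000:0b:06.4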



Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)

Thanks,

what about the PF case? Can you also grab the output for the PF case?


Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Mircea Orban
Here it is.

Thanks,
Mircea

vpp#
vpp# show pci
Address        Sock  VID:PID    Link Speed    Driver     Product Name            Vital Product Data
0000:09:00.0     0   15b3:1007  8.0 GT/s x8   mlx4_core
0000:0b:00.0     0   8086:1584  8.0 GT/s x8   i40e       XL710 40GbE Controller  RV: 0x 86
0000:0b:00.1     0   8086:1584  8.0 GT/s x8   i40e       XL710 40GbE Controller  RV: 0x 86
0000:0b:00.2     0   8086:1584  8.0 GT/s x8   i40e       XL710 40GbE Controller  RV: 0x 86
0000:0b:00.3     0   8086:1584  8.0 GT/s x8   i40e       XL710 40GbE Controller  RV: 0x 86
0000:0b:02.0     0   8086:154c  unknown       vfio-pci
0000:0b:02.1     0   8086:154c  unknown       vfio-pci
0000:0b:02.2     0   8086:154c  unknown       vfio-pci
0000:0b:02.3     0   8086:154c  unknown       vfio-pci
0000:0b:02.4     0   8086:154c  unknown       vfio-pci
0000:0b:06.0     0   8086:154c  unknown       vfio-pci
0000:0b:06.1     0   8086:154c  unknown       vfio-pci
0000:0b:06.2     0   8086:154c  unknown       vfio-pci
0000:0b:06.3     0   8086:154c  unknown       vfio-pci
0000:0b:06.4     0   8086:154c  unknown       vfio-pci
vpp#
vpp#
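If it helps, the same PF/VF layout can be cross-checked outside VPP with lspci and sysfs (8086:1584 is the XL710 PF and 8086:154c its VF, matching the table above):

# vendor:device IDs of the PFs and of all VFs
lspci -nn -s 0b:00.0
lspci -nn -d 8086:154c
# map a VF back to its parent PF
readlink -f /sys/bus/pci/devices/0000:0b:02.0/physfn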


Re: [vpp-dev] issues with running VPP on a Fortville NIC

2017-05-08 Thread Damjan Marion (damarion)




I think the problem here is very simple: nobody has added support for 4x10G mode :)

Can you send the output of “show pci”?

[vpp-dev] issues with running VPP on a Fortville NIC

2017-05-06 Thread Mircea Orban
I have a Fortville NIC (XL710-QDA1) with one QSFP+ port that supports two modes: 1x40G and 4x10G.

While everything seems to be fine in 1x40G mode, some issues occur when I run VPP in 4x10G mode:

-  When I use the PFs everything works, except that the link speed is not detected properly (VPP thinks these are 40G links).

-  With the VFs, VPP seems to be confused by the VF ID numbering scheme (I think). Only one of the whitelisted VFs is picked up (out of 6 configured), and when I try to bring it up VPP crashes (see attached log).

Please let me know if this can be fixed.

Thanks,
Mircea Orban
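The log below was produced with "vpp -c ./vpp/startup.i40.conf". That file is not part of the thread, but a dpdk stanza whitelisting six VFs would look roughly like the sketch below; the option values are assumptions, and the six dev entries simply mirror the six VFs the EAL probes further down in the log:

# six i40e VFs whitelisted (three per PF), matching the EAL probe lines in the log
cat > startup.i40.conf <<'EOF'
unix { interactive }
dpdk {
  uio-driver vfio-pci
  dev 0000:0b:02.0
  dev 0000:0b:02.1
  dev 0000:0b:02.2
  dev 0000:0b:06.0
  dev 0000:0b:06.1
  dev 0000:0b:06.2
}
EOF
vpp -c ./startup.i40.conf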
vpp -c ./vpp/startup.i40.conf 
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowperpkt_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: snat_plugin.so (Network Address Translation)
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/snat_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowperpkt_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
EAL: Detected 32 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:0b:02.0 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:0b:02.1 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:0b:02.2 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:0b:06.0 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:0b:06.1 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:0b:06.2 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
DPDK physical memory layout:
Segment 0: phys:0x3380, len:41943040, virt:0x7f6aff40, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x6700, len:69206016, virt:0x7f6afb00, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x6ba0, len:2097152, virt:0x7f6afac0, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x71c0, len:88080384, virt:0x7f6af560, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x7740, len:12582912, virt:0x7f6af480, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0xfbd00, len:2097152, virt:0x7f6af440, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 6: phys:0xfbd40, len:52428800, virt:0x7f6a8160, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
0: dpdk_ipsec_process:239: not enough Cryptodevs, default to OpenSSL IPsec
PMD: i40evf_dev_configure(): VF can't disable HW CRC Strip
0: dpdk_port_setup: rte_eth_dev_configure[0]: err -22
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# 
vpp# 
vpp# show int   
              Name               Idx       State          Counter          Count
FortyGigabitEthernetb/2/0          1        down
local0                             0        down
vpp# 
vpp# 
vpp# show hardware
              Name                Idx   Link  Hardware
FortyGigabitEthernetb/2/0          1    down  FortyGigabitEthernetb/2/0
  Ethernet address fa:14:ed:1a:10:64
  Intel X710/XL710 Family VF
carrier down 
rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
cpu socket 0

rx