On Wed, Dec 11, 2019 at 7:57 AM Christophe Grosse <christophe.gro...@6wind.com> wrote:
> Hello,
>
> I am using DPDK 18.11.5 from the dpdk-stable repository with a Broadcom
> network adapter in VF mode:
> Broadcom Limited BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet
> Controller (rev 01)
> # ./bnxtnvm listdev
> Broadcom P225p NetXtreme-E Dual-port 10Gb/25Gb Ethernet PCIe Adapter #1
> Device Interface Name : ntfp2
> MACAddress : 00-0A-F7-B6-E3-D1
> PCI Device Name : 0000:02:00.1
>
> # ./bnxtnvm -dev=ntfp2 devid
> PCI VendorID : 14e4
> PCI DeviceID : 16d7
> PCI Subsys VendorID : 14e4
> PCI Subsys DeviceID : 1402
> PCI Device Name : 0000:02:00:1
>
> # ./bnxtnvm -dev=ntfp2 pkgver
> Active Package version : 20.06.01.06
> Package version on NVM : 20.06.01.06
>
> I added a log inside the function __bnxt_hwrm_func_qcaps() and I noticed
> that the max_rx_em_flows value is inconsistent: it changes right after
> port init.
>
> max_mac_addrs is also inconsistent, because it has depended on
> max_rx_em_flows since this commit:
>
> http://scm.6wind.com/pub/dpdk.org/dpdk-stable/commit/?h=18.11&id=15f42f5e426d
>
> This is the log I get when I launch the ethtool example application.
> For port 1, max_rx_em_flows=51116 and then max_rx_em_flows=61292.
>
> # ethtool -l 0-1 -n 4
> EAL: Detected 12 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Some devices want iova as va but pa will be used because.. EAL:
> vfio-noiommu mode configured
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable clock cycles !
> EAL: PCI device 0000:00:03.0 on NUMA socket -1
> EAL: Invalid NUMA socket, default to 0
> EAL: probe driver: 1af4:1000 net_virtio
> EAL: PCI device 0000:00:04.0 on NUMA socket -1
> EAL: Invalid NUMA socket, default to 0
> EAL: probe driver: 14e4:16dc net_bnxt
> EAL: using IOMMU type 8 (No-IOMMU)
> bnxt_dev_init(): Broadcom NetXtreme driver bnxt
>
> bnxt_hwrm_ver_get(): 1.7.6:20.6.107
> bnxt_hwrm_ver_get(): Driver HWRM version: 1.9.2
> __bnxt_hwrm_func_qcaps(): !!!!!!!!!!!! bp->max_l2_ctx= 4 + 51116 = 51120
> bnxt_dev_init(): bnxt found at mem feb04000, node addr 0x1100800000M
> EAL: PCI device 0000:00:05.0 on NUMA socket -1
> EAL: Invalid NUMA socket, default to 0
> EAL: probe driver: 14e4:16dc net_bnxt
> bnxt_hwrm_ver_get(): 1.7.6:20.6.107
> bnxt_hwrm_ver_get(): Driver HWRM version: 1.9.2
> __bnxt_hwrm_func_qcaps(): !!!!!!!!!!!! bp->max_l2_ctx= 4 + 51116 = 51120
> bnxt_dev_init(): bnxt found at mem feb0c000, node addr 0x1100908000M
> Number of NICs: 2
> !!!!!!!!!!!! dev_info.max_mac_addrs= 51120
> !!!!!!!!!!!! dev_info.max_mac_addrs= 51120
>
> Init port 0..
> __bnxt_hwrm_func_qcaps(): !!!!!!!!!!!! bp->max_l2_ctx= 4 + 1 = 5
> !!!!!!!!!!!! dev_info.max_mac_addrs= 5
> bnxt_print_link_info(): Port 0 Link Down
> bnxt_print_link_info(): Port 0 Link Up - speed 10000 Mbps - full-duplex
>
> Init port 1..
> __bnxt_hwrm_func_qcaps(): !!!!!!!!!!!! bp->max_l2_ctx= 4 + 61292 = 61296
> !!!!!!!!!!!! dev_info.max_mac_addrs= 61296
> bnxt_print_link_info(): Port 1 Link Down
> bnxt_print_link_info(): Port 1 Link Up - speed 10000 Mbps - full-duplex
> EthApp>
>
> On master, I see that the new calculation for the number of L2 contexts
> has been restricted to the Whitney chip family.
> Does anyone know if the chip/FW I am using is compatible with this new
> calculation?
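For reference, what the application sees is simply whatever the PMD has filled into dev_info at that point, and it can be dumped at any stage with the generic ethdev call, nothing bnxt-specific. A minimal sketch against 18.11 (the helper name is only for illustration):

#include <stdio.h>
#include <rte_ethdev.h>

/* Dump the per-port MAC address limit currently advertised by the PMD. */
static void
dump_max_mac_addrs(void)
{
        uint16_t port_id;

        RTE_ETH_FOREACH_DEV(port_id) {
                struct rte_eth_dev_info dev_info;

                rte_eth_dev_info_get(port_id, &dev_info);
                printf("port %u: max_mac_addrs=%u\n",
                       port_id, dev_info.max_mac_addrs);
        }
}

Calling something like this before and after port init should show the same jump you are logging inside __bnxt_hwrm_func_qcaps().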
>
> By the way, I tried to upgrade to FW 214.0.253.5 and init is failing very
> early:
> bnxt_hwrm_func_resc_qcaps(): error 1:0:00000000:00f9

214.0.x is older FW as far as our releases are concerned. I think you will
need the following commit from upstream to get past the error you are
seeing:

Commit: 89a0deb866dc42ead92b79e6e7159622e1ab8490
net/bnxt: fix resource qcaps with older FW

But I believe you will need a newer firmware to get consistent values
across the ports. Let me see how to get that to you.

Thanks
Ajit

> bnxt_dev_init(): hwrm query capability failure rc: ffffffea
> EAL: Requested device 0000:00:04.0 cannot be used
>
>
> Best Regards
> Chris
>
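One more note for when the newer firmware is in place: the running firmware version can also be confirmed from within DPDK, via rte_eth_dev_fw_version_get(), instead of going through bnxtnvm. That is generic ethdev API, but it only reports something if the bnxt PMD in your tree implements the callback; otherwise it returns -ENOTSUP. A rough sketch:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the firmware version reported by the PMD, when the op is supported. */
static void
print_fw_version(uint16_t port_id)
{
        char fw_ver[64];

        if (rte_eth_dev_fw_version_get(port_id, fw_ver, sizeof(fw_ver)) == 0)
                printf("port %u: firmware %s\n", port_id, fw_ver);
        else
                printf("port %u: firmware version not reported by this PMD\n",
                       port_id);
}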