Hello,
Try without `--no-pci` in your testpmd command. That flag tells the EAL to skip the PCI bus scan entirely, so your `-w 82:00.0` whitelist entry is never probed and testpmd reports "No probed ethernet devices".
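
For reference, the same invocation with only that flag removed:

    ./bin/testpmd -l 1-3 -w 82:00.0 -- --total-num-mbufs 1025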
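
One more thing worth checking, since your config sets
CONFIG_RTE_BUILD_SHARED_LIB=y: in a shared-library build the PMDs are
loadable plugins rather than linked into the binary, so if the ports still
do not show up you may also need to load the mlx5 driver explicitly with
`-d` (the library names below assume the default 18.11 install layout):

    ./bin/testpmd -l 1-3 -w 82:00.0 -d librte_pmd_mlx5.so \
        -d librte_mempool_ring.so -- --total-num-mbufs 1025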

On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
<sindhura.ba...@certesnetworks.com> wrote:
>
> Hi,
>
>
> I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5
> ports. With a custom-built DPDK, testpmd is not able to detect the ports.
>
>
> OS & Kernel:
>
> Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 
> GNU/Linux
>
> The steps followed:
>
> Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall 
> --skip-distro-check --upstream-libs --dpdk)
> Downloaded the dpdk-18.11 source and built it after making the following
> changes in the config:
>
>            CONFIG_RTE_LIBRTE_MLX5_PMD=y
>            CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
>            CONFIG_RTE_BUILD_SHARED_LIB=y
>
> When I run testpmd, it does not recognize any Mellanox ports:
>
>
> #########
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 
> --no-pci -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, 
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, 
> socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, 
> MP allocation mode: native
>
>   io packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ##########
>
> root@debian-10:~# lspci | grep Mellanox
> 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 
> Ex]
> 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 
> Ex]
> root@debian-10:~# ibv_devinfo
> hca_id:    mlx5_0
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7952
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
> hca_id:    mlx5_1
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7953
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
>
> I'm not sure where I'm going wrong. Any hints will be much appreciated.
>
> Thanks,
> Sindhu
