Hi Adrien,

Actually, I tested using the original kernel module without binding. It works.
However, it only reaches 6 Mpps for 64B packets in pkt-gen, which seems very slow for a 40 Gbps NIC. Is that right?

On Mon, Jan 9, 2017 at 11:13 PM, Adrien Mazarguil <adrien.mazarg...@6wind.com> wrote:
> Hi Royce,
>
> On Mon, Jan 09, 2017 at 10:53:37PM +0800, Royce Niu wrote:
> > Dear all,
> >
> > I cannot use my Mellanox ConnectX-3 Pro after I bound it to the igb_uio driver.
> >
> > My DPDK application always shows the following:
> >
> > EAL: Detected 32 lcore(s)
> > EAL: Probing VFIO support...
> > PMD: bnxt_rte_pmd_init() called for (null)
> > EAL: PCI device 0000:02:00.0 on NUMA socket 0
> > EAL: probe driver: 8086:1521 rte_igb_pmd
> > EAL: PCI device 0000:02:00.1 on NUMA socket 0
> > EAL: probe driver: 8086:1521 rte_igb_pmd
> > EAL: PCI device 0000:02:00.2 on NUMA socket 0
> > EAL: probe driver: 8086:1521 rte_igb_pmd
> > EAL: PCI device 0000:02:00.3 on NUMA socket 0
> > EAL: probe driver: 8086:1521 rte_igb_pmd
> > EAL: PCI device 0000:81:00.0 on NUMA socket 1
> > EAL: probe driver: 15b3:1007 librte_pmd_mlx4
> > PMD: librte_pmd_mlx4: cannot access device, is mlx4_ib loaded?
> > EAL: Error - exiting with code: 1
> > Cause: Cannot create mbuf pool
> >
> > ---------------
> > I have added CONFIG_RTE_LIBRTE_MLX4_PMD=y in .config and
> > installed MLNX_OFED_LINUX-3.4-2.0.0.0.
>
> The mlx4 PMD does not operate through igb_uio (see the mlx4 documentation [1]);
> PCI devices must remain bound to their original kernel module (mlx4_core).
> However, you have to additionally load mlx4_ib, mlx4_en and ib_uverbs [2].
>
> [1] http://dpdk.org/doc/guides/nics/mlx4.html
> [2] http://dpdk.org/doc/guides/nics/mlx4.html#prerequisites
>
> --
> Adrien Mazarguil
> 6WIND

--
Regards,
Royce
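For context on the throughput question: at 64B, each frame occupies 64B + 20B of preamble and inter-frame gap on the wire, i.e. 672 bits, so 40 Gbps corresponds to roughly 40e9 / 672 ≈ 59.5 Mpps at line rate. 6 Mpps is therefore only about a tenth of what the link can carry.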
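As a rough sketch of the setup Adrien describes, assuming a stock MLNX_OFED install and a PMD built with CONFIG_RTE_LIBRTE_MLX4_PMD=y (module names taken from the mlx4 guide linked above; exact steps may vary with the OFED version):

  # Keep the ConnectX-3 bound to its kernel driver (mlx4_core); do not bind it to igb_uio.
  # Load the additional modules the mlx4 PMD relies on:
  modprobe mlx4_core
  modprobe mlx4_en
  modprobe mlx4_ib
  modprobe ib_uverbs

  # Check that they are loaded before starting the DPDK application:
  lsmod | grep -E 'mlx4|ib_uverbs'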