I should also add that I am using the flow director and the rte_flow API to send ingress traffic to the different queues. You can see my port_init function here: <https://coliru.stacked-crooked.com/a/6988edaa96d15952> (for DPDK 18.11). I also tried to change the number of TX/RX descriptors from 32768 to 16384, with no success.

---
Amedeo
On Thu, Jan 17, 2019 at 12:07 AM Amedeo Sapio <amedeo.sa...@gmail.com> wrote:
> Hi all,
>
> I am developing a DPDK program using a Mellanox ConnectX-5 100G.
> My program starts N workers (one per core), and each worker deals with its
> own dedicated TX and RX queue, therefore I need to set up N TX and N RX
> queues.
>
> For each RX queue I create a mbuf pool with:
> n = 262144
> cache size = 512
> priv_size = 0
> data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE
>
> For N <= 4 everything works fine, but with N = 8, 'rte_eth_dev_start'
> returns "Unknown error -12" and the following errors:
> net_mlx5: port 0 Tx queue 0 QP creation failure
> net_mlx5: port 0 Tx queue allocation failed: Cannot allocate memory
>
> I tried:
> - incrementing the number of hugepages (up to 64x1G)
> - changing the pool size in different ways
> - both DPDK 18.05 and 18.11
>
> but with no success. Do you have any suggestions on what the issue could be?
>
> Thanks for your help,
> ---
> Amedeo