Hi

We are trying to read from a 100G Mellanox ConnectX-5 NIC without drops at the NIC.
All threads are core-pinned and the CPUs are isolated.
We use DPDK 19.11.
I tried to apply all the configuration described in
https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf

We see strange behavior: one thread can receive 20 Gbps / 12 Mpps and free the
mbufs without drops, but when we pass these mbufs to another thread that only
frees them, there are drops, even when we try to work with more threads.

When running 1 thread that only reads from the port (no multi-queue) and frees
the mbufs in the same thread, there are no drops with traffic up to 21 Gbps / 12.4 Mpps.
When running 6 threads that only read from the port (with multi-queue) and free
the mbufs in the same threads, there are no drops with traffic up to 21 Gbps / 12.4 Mpps.

When running 1 to 6 threads that only read from the port and pass the mbufs to
another 6 threads that only read from a ring and free the mbufs, there are
drops at the NIC (imissed counter) with traffic above 10 Gbps / 5.2 Mpps.
(Here the receive threads were pinned to CPUs 1-6 and the additional threads
to CPUs 7-12, each thread on its own CPU.)
Each receive thread sends to one thread that frees the buffers; see the sketch below.
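
To make the failing setup concrete, here is a minimal sketch of the two loops,
assuming a burst size of 512 and one ring per RX/worker pair (the names and
sizes are illustrative, not our exact code):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* RX thread: read a burst from its queue and push the mbufs to its
 * worker's ring. */
static int rx_thread(struct rte_ring *ring, uint16_t port, uint16_t queue)
{
        struct rte_mbuf *bufs[512];

        for (;;) {
                uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, 512);
                if (nb_rx == 0)
                        continue;
                unsigned int sent = rte_ring_enqueue_burst(ring,
                                (void **)bufs, nb_rx, NULL);
                /* Anything that does not fit in the ring must be freed,
                 * otherwise the mempool drains and imissed grows. */
                while (sent < nb_rx)
                        rte_pktmbuf_free(bufs[sent++]);
        }
        return 0;
}

/* Worker thread: pull mbufs from the ring and only free them. */
static int free_thread(struct rte_ring *ring)
{
        struct rte_mbuf *bufs[512];

        for (;;) {
                unsigned int nb = rte_ring_dequeue_burst(ring,
                                (void **)bufs, 512, NULL);
                for (unsigned int i = 0; i < nb; i++)
                        rte_pktmbuf_free(bufs[i]);
        }
        return 0;
}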

Configurations:

We use rings of size 32768 between the threads. Rings are initialized as
SP/SC, and writes are done in bursts of 512 with rte_ring_enqueue_burst.
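
Ring creation, roughly (the ring name and NUMA socket argument here are illustrative):

/* Single-producer/single-consumer ring between one RX thread and one worker. */
struct rte_ring *ring = rte_ring_create("rx_to_free_0", 32768,
                rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);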
The port is initialized with rte_eth_rx_queue_setup and nb_rx_desc=8192:

rx_conf.rx_thresh.pthresh = DPDK_NIC_RX_PTHRESH; // ring prefetch threshold
rx_conf.rx_thresh.hthresh = DPDK_NIC_RX_HTHRESH; // ring host threshold
rx_conf.rx_thresh.wthresh = DPDK_NIC_RX_WTHRESH; // ring writeback threshold
rx_conf.rx_free_thresh = DPDK_NIC_RX_FREE_THRESH;

RSS hash functions: ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP
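
Condensed, the port-level initialization looks roughly like this (queue count,
mempool creation, and error handling omitted; DPDK_NIC_RX_* are our build-time
constants and the variable names are placeholders):

struct rte_eth_conf port_conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
                .rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
        },
};

rte_eth_dev_configure(port, nb_rx_queues, 0 /* no TX queues used */, &port_conf);
for (uint16_t q = 0; q < nb_rx_queues; q++)
        rte_eth_rx_queue_setup(port, q, 8192, rte_socket_id(),
                        &rx_conf, mbuf_pool);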


We tried running both with and without hyperthreading.

****************************************

Network devices using kernel driver
===================================
0000:37:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2f0 drv=mlx5_core unused=igb_uio
0000:37:00.1 'MT27800 Family [ConnectX-5] 1017' if=ens2f1 drv=mlx5_core unused=igb_uio

****************************************

ethtool -i ens2f0
driver: mlx5_core
version: 5.3-1.0.0
firmware-version: 16.30.1004 (HPE0000000009)
expansion-rom-version:
bus-info: 0000:37:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

****************************************

uname -a
Linux localhost.localdomain 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

****************************************

lscpu | grep -e Socket -e Core -e Thread
Thread(s) per core:    1
Core(s) per socket:    24
Socket(s):             2

****************************************
cat /sys/devices/system/node/node0/cpulist
0-23
****************************************
From /proc/cpuinfo

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 85
model name      : Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
stepping        : 7
microcode       : 0x5003003
cpu MHz         : 2200.000

****************************************

python /home/cpu_layout.py
======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================

cores =  [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 16, 17, 18, 19, 20, 21, 25, 26, 27, 28, 29, 24]
sockets =  [0, 1]

        Socket 0    Socket 1
        --------    --------
Core 0  [0]         [24]
Core 1  [1]         [25]
Core 2  [2]         [26]
Core 3  [3]         [27]
Core 4  [4]         [28]
Core 5  [5]         [29]
Core 6  [6]         [30]
Core 8  [7]
Core 9  [8]         [31]
Core 10 [9]         [32]
Core 11 [10]        [33]
Core 12 [11]        [34]
Core 13 [12]        [35]
Core 16 [13]        [36]
Core 17 [14]        [37]
Core 18 [15]        [38]
Core 19 [16]        [39]
Core 20 [17]        [40]
Core 21 [18]        [41]
Core 25 [19]        [43]
Core 26 [20]        [44]
Core 27 [21]        [45]
Core 28 [22]        [46]
Core 29 [23]        [47]
Core 24             [42]
