Hi Stephen,
I created debug dpdk libs and built our application.
I ensured all 16 rings in the driver are created SP/SC, and at runtime the 
ring flags have the value 3 (RING_F_SP_ENQ | RING_F_SC_DEQ), confirming they are.

The perf output continues to show the symbols common_ring_mp_enqueue() and 
rte_atomic32_cmpset().  My understanding is that SP/SC rings do not use:
1.  atomic operations
2.  spin-waits
3.  rte_wait_until_equal_32()
and should therefore have lower overhead: no cache-line bouncing, no memory 
stalls, and no spinning from item 3.  A minimal sketch of the create-and-verify 
pattern I mean is below.
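
For reference, this is only a sketch of the pattern; the ring name and size are 
placeholders, not our actual driver values:

#include <assert.h>
#include <rte_ring.h>

/* Placeholder example: create one SP/SC ring and check its flags at runtime.
 * RING_F_SP_ENQ (0x1) | RING_F_SC_DEQ (0x2) gives the flags value of 3. */
static struct rte_ring *
create_sp_sc_ring(int socket_id)
{
    struct rte_ring *r = rte_ring_create("tx_ring_0", 4096, socket_id,
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);

    if (r != NULL)
        assert(r->flags == (RING_F_SP_ENQ | RING_F_SC_DEQ));
    return r;
}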


Perf report snippet:
+   57.25%  DPDK_TX_1  test  [.] common_ring_mp_enqueue
+   25.51%  DPDK_TX_1  test  [.] rte_atomic32_cmpset
+    9.13%  DPDK_TX_1  test  [.] i40e_xmit_pkts
+    6.50%  DPDK_TX_1  test  [.] rte_pause
     0.21%  DPDK_TX_1  test  [.] rte_mempool_ops_enqueue_bulk.isra.0
     0.20%  DPDK_TX_1  test  [.] dpdk_tx_thread


Any suggestions are very much appreciated.

Thanks,
Ed

-----Original Message-----
From: Lombardo, Ed 
Sent: Monday, July 7, 2025 12:27 PM
To: Stephen Hemminger <step...@networkplumber.org>
Cc: Ivan Malov <ivan.ma...@arknetworks.am>; users <users@dpdk.org>
Subject: RE: dpdk Tx falling short

Hi Stephen,
I ran a perf diff on two perf records, and it reveals the real problem with the 
tx thread when transmitting packets.

The comparison is traffic received on ifn3 and transmitted on ifn4 versus 
traffic received on ifn3 and ifn5 and transmitted on ifn4 and ifn6.
When transmitting packets on one port the performance is better; however, when 
transmitting on two ports the performance across the two drops dramatically.

There is a 55.29% increase in CPU time spent in common_ring_mp_enqueue and 
54.18% less time in i40e_xmit_pkts (was E810, also tried X710).
common_ring_mp_enqueue is the multi-producer enqueue; does the enqueue of the 
mbuf pointers passed in to rte_eth_tx_burst() have to be multi-producer?

Is there a way to change dpdk to use single-producer?
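
One thing I could try, if it applies here, is creating the mbuf pool with the 
single-producer/single-consumer mempool handler so the frees from the PMD take 
the SP enqueue path instead.  A minimal sketch, assuming only one lcore ever 
allocates from the pool and one lcore frees back to it (the pool name and sizes 
are placeholders):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder sketch: build the mbuf pool on the "ring_sp_sc" mempool handler
 * instead of the default multi-producer/multi-consumer one.  This is only
 * safe if a single lcore allocates mbufs and a single lcore frees them. */
static struct rte_mempool *
create_sp_sc_mbuf_pool(int socket_id)
{
    return rte_pktmbuf_pool_create_by_ops("mbuf_pool_sp_sc",
                                          8192,  /* number of mbufs (placeholder) */
                                          256,   /* per-lcore cache size          */
                                          0,     /* private area size             */
                                          RTE_MBUF_DEFAULT_BUF_SIZE,
                                          socket_id,
                                          "ring_sp_sc");
}

If more than one core can free mbufs concurrently this would not be safe, and a 
larger per-lcore mempool cache might be the lower-risk way to keep most puts 
off the shared ring.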

# Event 'cycles'
#
# Baseline  Delta Abs  Shared Object      Symbol
# ........  .........  .................  ......................................
#
    36.37%    +55.29%  test               [.] common_ring_mp_enqueue
    62.36%    -54.18%  test               [.] i40e_xmit_pkts
     1.10%     -0.94%  test               [.] dpdk_tx_thread
     0.01%     -0.01%  [kernel.kallsyms]  [k] native_sched_clock
               +0.00%  [kernel.kallsyms]  [k] fill_pmd
               +0.00%  [kernel.kallsyms]  [k] perf_sample_event_took
     0.00%     +0.00%  [kernel.kallsyms]  [k] __flush_smp_call_function_queue
     0.02%             [kernel.kallsyms]  [k] __intel_pmu_enable_all.constprop.0
     0.02%             [kernel.kallsyms]  [k] native_irq_return_iret
     0.02%             [kernel.kallsyms]  [k] native_tss_update_io_bitmap
     0.01%             [kernel.kallsyms]  [k] ktime_get
     0.01%             [kernel.kallsyms]  [k] perf_adjust_freq_unthr_context
     0.01%             [kernel.kallsyms]  [k] __update_blocked_fair
     0.01%             [kernel.kallsyms]  [k] perf_adjust_freq_unthr_events

Thanks,
Ed

-----Original Message-----
From: Lombardo, Ed 
Sent: Sunday, July 6, 2025 1:45 PM
To: Stephen Hemminger <step...@networkplumber.org>
Cc: Ivan Malov <ivan.ma...@arknetworks.am>; users <users@dpdk.org>
Subject: RE: dpdk Tx falling short

Hi Stephen,
If using dpdk rings comes with this penalty, then what should I use?  Is there 
an alternative to rings?  We do not want to use shared memory and do buffer copies.

Thanks,
Ed

-----Original Message-----
From: Stephen Hemminger <step...@networkplumber.org> 
Sent: Sunday, July 6, 2025 12:03 PM
To: Lombardo, Ed <ed.lomba...@netscout.com>
Cc: Ivan Malov <ivan.ma...@arknetworks.am>; users <users@dpdk.org>
Subject: Re: dpdk Tx falling short

On Sun, 6 Jul 2025 00:03:16 +0000
"Lombardo, Ed" <ed.lomba...@netscout.com> wrote:

> Hi Stephen,
> Here are my comments on the list of obvious causes of cache misses you mentioned.
> 
> Obvious cache misses.
>  - passing packets to worker with ring - we use lots of rings to pass mbuf 
> pointers.  If I skip the rte_eth_tx_burst() and just free the mbufs in bulk, 
> the tx ring does not fill up.
>  - using spinlocks (cost 16ns) - The driver does not use spinlocks, other 
> than what dpdk uses.
>  - fetching TSC - We don't do this; we let the Rx offload timestamp the packets.
>  - syscalls? - No syscalls are done in our driver fast path.
> 
> You mention "passing packets to worker with ring"; do you mean that using 
> rings to pass mbuf pointers causes cache misses and should be avoided?

Rings do cause data to be modified by one core and examined by another, so they 
are a cache miss.
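
As a minimal illustration of that hand-off cost (hypothetical names and burst 
size, not code from this thread): the consumer's dequeue pulls in ring slots 
and mbuf pointers that the producer core just wrote, so at least doing the 
hand-off in bursts amortizes the cache-line transfers over many packets.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define TX_BURST 32

/* Hypothetical tx-side drain: dequeue a burst of mbuf pointers handed over by
 * another core and transmit them.  The dequeue is where the cache lines
 * written by the producer core migrate to this core. */
static void
tx_drain(struct rte_ring *r, uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[TX_BURST];
    unsigned int n, sent;

    n = rte_ring_dequeue_burst(r, (void **)pkts, TX_BURST, NULL);
    if (n == 0)
        return;

    sent = rte_eth_tx_burst(port_id, queue_id, pkts, n);
    /* Free anything the NIC queue did not accept. */
    while (sent < n)
        rte_pktmbuf_free(pkts[sent++]);
}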
