Looking at the code for rte_eth_tx_buffer_flush(): in the error case, where some
buffers were not sent, it has a default callback which returns those buffers to
the mempool, and in the non-error case as well, once the call is done the
buffers end up back in the mempool ...

This is what we have relied on forever, otherwise we would not be able to
utilize it at all ... in the transmit case we never explicitly free the buffers,
and we are able to run the product through hundreds of millions of packet
transmits; it is only under special circumstances that we run into the said
issue ...
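
For reference, here is a rough sketch (not our exact code; the port/queue ids
and burst size below are placeholders) of how we set up and use the tx
buffering API, relying on the default drop callback installed by
rte_eth_tx_buffer_init():

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_malloc.h>
    #include <rte_mbuf.h>

    #define TX_PORT  0   /* placeholder port id */
    #define TX_QUEUE 0   /* placeholder queue id */
    #define TX_BURST 32  /* placeholder buffer size */

    static struct rte_eth_dev_tx_buffer *txbuf;

    static int
    setup_tx_buffer(void)
    {
        txbuf = rte_zmalloc_socket("tx_buffer",
                RTE_ETH_TX_BUFFER_SIZE(TX_BURST), 0,
                rte_eth_dev_socket_id(TX_PORT));
        if (txbuf == NULL)
            return -1;
        /* No rte_eth_tx_buffer_set_err_callback() call, so the default
         * callback stays in place and frees any mbufs the PMD refuses. */
        return rte_eth_tx_buffer_init(txbuf, TX_BURST);
    }

    static void
    send_one(struct rte_mbuf *m)
    {
        /* Buffer the packet; a full buffer triggers an implicit flush. */
        rte_eth_tx_buffer(TX_PORT, TX_QUEUE, txbuf, m);
        /* Push out whatever is still buffered. */
        rte_eth_tx_buffer_flush(TX_PORT, TX_QUEUE, txbuf);
    }

We never call rte_pktmbuf_free() on these mbufs ourselves once they have been
handed to rte_eth_tx_buffer().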

I hope I am not misunderstanding something.


regards

On Tuesday, November 26, 2024 at 04:51:09 PM PST, Stephen Hemminger 
<step...@networkplumber.org> wrote: 

On Tue, 26 Nov 2024 23:50:25 +0000 (UTC)

amit sehas <cu...@yahoo.com> wrote:

> Dumping the stats every 10 minutes suggests that there is no slow leak of 
> buffers. The problem arises when the system is under stress and starts 
> performing extra disk i/o. In this situation DPDK accumulates the buffers and 
> does not return them to the mempool right away, thereby accumulating all 
> the 4k buffers allocated to the queue.
> 
> rte_eth_tx_buffer_flush() should be flushing the buffers and returning them 
> to the mempool ... is there any additional API that can make sure that this 
> happens?


If you read the code in rte_ethdev.h, rte_eth_tx_buffer_flush() just sends the
packets that the application has aggregated via rte_eth_tx_buffer().

It does nothing vis-a-vis mempools or causing the driver (PMD) to complete
transmits.
There are some tunables, such as tx_free_thresh, which control when the driver
should start freeing sent mbufs.
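
For example, something along these lines at queue setup time (the queue and
descriptor counts below are made up, and the exact semantics and valid range
of tx_free_thresh are driver-dependent, so check your PMD's documentation):

    #include <rte_ethdev.h>

    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Start from the PMD's defaults, then adjust the threshold that
         * controls when completed mbufs are freed back to the mempool. */
        txconf = dev_info.default_txconf;
        txconf.tx_free_thresh = 64;

        return rte_eth_tx_queue_setup(port_id, queue_id, 1024,
                rte_eth_dev_socket_id(port_id), &txconf);
    }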

Have you isolated the CPUs used by the DPDK threads?
Is the application stalling because it starts swapping? You may have to call
mlockall() to keep the application's pages from swapping out.
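
For example (standard POSIX call, typically done once at startup; it needs
sufficient privileges or RLIMIT_MEMLOCK headroom):

    #include <stdio.h>
    #include <sys/mman.h>

    static int
    lock_process_memory(void)
    {
        /* Lock all current and future pages so the application's memory
         * cannot be swapped out. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return -1;
        }
        return 0;
    }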
