On Tue, Apr 21, 2026 at 11:34:46AM +0100, Bruce Richardson wrote:
> On Sat, Apr 18, 2026 at 09:56:38AM +0000, Morten Brørup wrote:
> > Freeing mbufs directly into the mempool meant that mbuf instrumentation,
> > including mbuf history marking, was omitted.
> > The mbufs are now freed via the rte_mbuf_raw_free_bulk() function instead.
> >
> > Added a static_assert to ensure that type casting the array of struct
> > ci_tx_entry_vec to an array of rte_mbuf pointers remains sound.
> >
> > Performance note:
> > The (n & 31) condition was not removed.
> > For the default tx_rs_thresh value (32), the condition will be true.
> > And due to inlining, the rte_mbuf_raw_free_bulk() ends up in an
> > rte_memcpy(), where the optimizer takes advantage of knowing that the
> > lower bits are not set.
> > This should compensate somewhat for removing the handcoded optimization of
> > copying in chunks of 32 mbufs.
> >
> > Signed-off-by: Morten Brørup <[email protected]>
> > ---
>
> Ran a very quick perf test using a couple of 100G ports, no regression
> seen with this patch, maybe even a slight perf bump. Therefore:
>
> Acked-by: Bruce Richardson <[email protected]>
> Tested-by: Bruce Richardson <[email protected]>

Applied to dpdk-next-net-intel.

Thanks,
/Bruce

