> -----Original Message-----
> From: Morten Brørup <m...@smartsharesystems.com>
> Sent: Wednesday, March 8, 2023 4:26 AM
> To: Kamalakshitha Aligeri <kamalakshitha.alig...@arm.com>; Zhang, Yuying
> <yuying.zh...@intel.com>; Xing, Beilei <beilei.x...@intel.com>; Rong, Leyi
> <leyi.r...@intel.com>; ruifeng.w...@arm.com; feifei.wa...@arm.com
> Cc: n...@arm.com; dev@dpdk.org
> Subject: RE: [PATCH] net/i40e: avx512 fast-free path bug fix
>
> > From: Kamalakshitha Aligeri [mailto:kamalakshitha.alig...@arm.com]
> > Sent: Tuesday, 7 March 2023 20.32
> >
> > In the i40e_tx_free_bufs_avx512 fast-free path, the non-fast-free path
> > was being executed whenever the cache was NULL. Fix the bug by calling
> > the rte_mempool_generic_put API, which handles the cache == NULL case.
> >
> > Fixes: 5171b4ee6b6b ("net/i40e: optimize Tx by using AVX512")
> > Cc: leyi.r...@intel.com
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Kamalakshitha Aligeri <kamalakshitha.alig...@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.w...@arm.com>
> > Reviewed-by: Feifei Wang <feifei.wa...@arm.com>
> > ---
> > .mailmap | 1 +
> > drivers/net/i40e/i40e_rxtx_vec_avx512.c | 12 ++++--------
> > 2 files changed, 5 insertions(+), 8 deletions(-)
> >
> > diff --git a/.mailmap b/.mailmap
> > index a9f4f28fba..2581d0efe7 100644
> > --- a/.mailmap
> > +++ b/.mailmap
> > @@ -677,6 +677,7 @@ Kai Ji <kai...@intel.com>
> > Kaiwen Deng <kaiwenx.d...@intel.com>
> > Kalesh AP <kalesh-anakkur.pura...@broadcom.com>
> > Kamalakannan R <kamalakanna...@intel.com>
> > +Kamalakshitha Aligeri <kamalakshitha.alig...@arm.com>
> > Kamil Bednarczyk <kamil.bednarc...@intel.com>
> > Kamil Chalupnik <kamilx.chalup...@intel.com>
> > Kamil Rytarowski <kamil.rytarow...@caviumnetworks.com>
> > diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
> > index d3c7bfd121..ad0893324d 100644
> > --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
> > +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
> > @@ -783,16 +783,13 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
> > struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
> >
> > - if (!cache || cache->len == 0)
> > - goto normal;
> > -
> > - cache_objs = &cache->objs[cache->len];
> > -
> > - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> > - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n);
> > + if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) {
> > + rte_mempool_generic_put(mp, (void *)txep, n, cache);
> > goto done;
> > }
> >
> > + cache_objs = &cache->objs[cache->len];
> > +
> > /* The cache follows the following algorithm
> > * 1. Add the objects to the cache
> > * 2. Anything greater than the cache min value (if it
> > @@ -824,7 +821,6 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
> > goto done;
> > }
> >
> > -normal:
> > m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
> > if (likely(m)) {
> > free[0] = m;
> > --
> > 2.25.1
> >
>
> This patch improves the copy-paste code that we are ultimately aiming to
> replace with proper use of the mempool API.
>
> But until then, it is still an improvement.
>
> Acked-by: Morten Brørup <m...@smartsharesystems.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
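
For context, the dispatch that rte_mempool_generic_put performs (put into the per-lcore cache when one exists and the burst fits, otherwise enqueue straight to the backend) can be sketched with a toy model. Every name, type, and size below is an illustrative stand-in, not a real DPDK structure:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the mempool put-path dispatch; TOY_CACHE_MAX plays
 * the role of RTE_MEMPOOL_CACHE_MAX_SIZE. */
#define TOY_CACHE_MAX 8

struct toy_cache {
    size_t len;
    void *objs[TOY_CACHE_MAX * 2];
};

/* Counts objects sent down the slow path, standing in for
 * rte_mempool_ops_enqueue_bulk() pushing to the backend ring. */
static size_t backend_count;

static void toy_backend_enqueue(void **objs, size_t n)
{
    (void)objs;
    backend_count += n;
}

/* Objects go to the cache when one exists and the burst fits;
 * otherwise straight to the backend. This covers the cache == NULL
 * case that the old fast-free code mishandled by falling through
 * to the non-fast-free path. */
static void toy_generic_put(struct toy_cache *cache, void **objs, size_t n)
{
    if (cache == NULL || n > TOY_CACHE_MAX) {
        toy_backend_enqueue(objs, n);
        return;
    }
    for (size_t i = 0; i < n; i++)
        cache->objs[cache->len++] = objs[i];
}
```

With cache == NULL every burst takes the backend path, which is the behavior the patch obtains by passing the (possibly NULL) cache pointer to rte_mempool_generic_put instead of branching on it by hand.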