Acked-by: Viacheslav Ovsiienko <[email protected]>

> -----Original Message-----
> From: Gavin Hu <[email protected]>
> Sent: Friday, December 6, 2024 2:58 AM
> To: [email protected]
> Cc: [email protected]; Dariusz Sosnowski <[email protected]>; Slava
> Ovsiienko <[email protected]>; Bing Zhao <[email protected]>; Ori
> Kam <[email protected]>; Suanming Mou <[email protected]>; Matan
> Azrad <[email protected]>; Alexander Kozyrev <[email protected]>
> Subject: [PATCH] net/mlx5: do not poll CQEs when no available elts
> 
> In certain situations, the receive queue (rxq) fails to replenish its
> internal ring with memory buffers (mbufs) from the pool. This can happen
> when the pool
> has a limited number of mbufs allocated, and the user application holds
> incoming packets for an extended period, resulting in a delayed release of
> mbufs. Consequently, the pool becomes depleted, preventing the rxq from
> replenishing from it.
> 
> There was a bug in the behavior of the vectorized rxq_cq_process_v routine,
> which handled completion queue entries (CQEs) in batches of four. This
> routine consistently accessed four mbufs from the internal queue ring,
> regardless of whether they had been replenished. As a result, it could access
> mbufs that no longer belonged to the poll mode driver (PMD).
> 
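To make the hazard concrete for readers less familiar with the vectorized
path, here is a hypothetical scalar reduction of it (names and values are
invented for illustration; the real code loads four mbuf pointers per
vector iteration inside rxq_cq_process_v):

    #include <stdint.h>
    #include <stdio.h>

    #define DESCS_PER_LOOP 4 /* stand-in for MLX5_VPMD_DESCS_PER_LOOP */

    int main(void)
    {
        uint16_t replenished = 3; /* ring refill stalled at 3 mbufs */
        uint16_t cqes_ready = 4;  /* HW completed a full batch of CQEs */
        uint16_t i;

        /* Old behavior: the whole batch of 4 ring entries is consumed
         * because enough CQEs arrived, regardless of how many mbufs
         * were actually replenished. */
        for (i = 0; i < cqes_ready; i++)
            printf("elts[%u]: %s\n", (unsigned int)i,
                   i < replenished ? "valid mbuf" :
                   "STALE - no longer owned by the PMD");
        return 0;
    }
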
> The fix involves checking if there are four replenished mbufs available before
> allowing rxq_cq_process_v to handle the batch. Once replenishment succeeds
> during the polling process, the routine will resume its operation.
> 
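To make the arithmetic of the clamp concrete, a self-contained sketch
(the local ALIGN_FLOOR macro mirrors what DPDK's RTE_ALIGN_FLOOR does for
power-of-two alignments; the index values are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Same rounding down as RTE_ALIGN_FLOOR for power-of-two alignments. */
    #define ALIGN_FLOOR(v, a) ((uint16_t)((v) & ~((uint16_t)((a) - 1))))
    #define DESCS_PER_LOOP 4

    int main(void)
    {
        uint16_t rq_ci = 107; /* descriptors replenished so far */
        uint16_t rq_pi = 100; /* descriptors already processed */
        /* 7 mbufs available; the uint16_t subtraction also copes
         * with index wraparound. */
        uint16_t avail = (uint16_t)(rq_ci - rq_pi);
        uint16_t pkts_n = 32; /* caller's burst size */
        uint16_t bound = ALIGN_FLOOR(avail, DESCS_PER_LOOP);

        if (pkts_n > bound)
            pkts_n = bound;
        /* Prints 4: one full batch is handled in this poll; the
         * remaining 3 descriptors wait until replenishment catches up. */
        printf("pkts_n = %u\n", (unsigned int)pkts_n);
        return 0;
    }
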
> Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")
> Cc: [email protected]
> 
> Reported-by: Changqi Dingluo <[email protected]>
> Signed-off-by: Gavin Hu <[email protected]>
> ---
>  drivers/net/mlx5/mlx5_rxtx_vec.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
> index 1872bf310c..1b701801c5 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec.c
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
> @@ -325,6 +325,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
>       /* Not to cross queue end. */
>       pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
>       pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
> +     /* Not to move past the allocated mbufs. */
> +     pkts_n = RTE_MIN(pkts_n, RTE_ALIGN_FLOOR(rxq->rq_ci - rxq->rq_pi,
> +                                              MLX5_VPMD_DESCS_PER_LOOP));
>       if (!pkts_n) {
>               *no_cq = !rcvd_pkt;
>               return rcvd_pkt;
> --
> 2.18.2
