On Wed, Jul 12, 2017 at 03:09:54PM -0700, Yongseok Koh wrote:
> On a host with a 128B cacheline size, some devices insert 64B of padding
> in each completion entry to avoid partial cacheline writes by HW. But, as
> the padding precedes the completion data, casting a completion entry to
> an array of compressed mini-completions must start from the middle of the
> entry.
> 
> Signed-off-by: Yongseok Koh <ys...@mellanox.com>
> Acked-by: Shahaf Shuler <shah...@mellanox.com>
> ---
>  drivers/net/mlx5/mlx5_rxtx.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> index ab6df19eb..29ce91b05 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -1556,7 +1556,7 @@ mlx5_rx_poll_len(struct rxq *rxq, volatile struct mlx5_cqe *cqe,
>       if (zip->ai) {
>               volatile struct mlx5_mini_cqe8 (*mc)[8] =
>                       (volatile struct mlx5_mini_cqe8 (*)[8])
> -                     (uintptr_t)(&(*rxq->cqes)[zip->ca & cqe_cnt]);
> +                     (uintptr_t)(&(*rxq->cqes)[zip->ca & cqe_cnt].pkt_info);
>  
>               len = ntohl((*mc)[zip->ai & 7].byte_cnt);
>               *rss_hash = ntohl((*mc)[zip->ai & 7].rx_hash_result);
> @@ -1604,7 +1604,7 @@ mlx5_rx_poll_len(struct rxq *rxq, volatile struct mlx5_cqe *cqe,
>                       volatile struct mlx5_mini_cqe8 (*mc)[8] =
>                               (volatile struct mlx5_mini_cqe8 (*)[8])
>                               (uintptr_t)(&(*rxq->cqes)[rxq->cq_ci &
> -                                                       cqe_cnt]);
> +                                                       cqe_cnt].pkt_info);
>  
>                       /* Fix endianness. */
>                       zip->cqe_cnt = ntohl(cqe->byte_cnt);
> -- 
> 2.11.0
> 
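
For readers less familiar with the CQE layout, here is a minimal sketch of
what the cast change relies on. The structs, field sizes and the CACHELINE
macro below are simplified stand-ins, not the real mlx5 definitions: on a
128B-cacheline host the entry starts with 64B of padding, so the eight
mini-CQEs of a compressed session begin at pkt_info rather than at the
start of the entry, which is exactly what casting from .pkt_info gives.

#include <stdint.h>
#include <stdio.h>

#define CACHELINE 128 /* assumed host cacheline size for this sketch */

struct mini_cqe8 {            /* stand-in for struct mlx5_mini_cqe8 */
	uint32_t byte_cnt;
	uint32_t rx_hash_result;
};

struct cqe {                  /* stand-in for struct mlx5_cqe */
#if (CACHELINE == 128)
	uint8_t padding[64];  /* keeps HW from doing partial cacheline writes */
#endif
	uint8_t pkt_info;     /* first byte of the completion data */
	uint8_t rest[63];
};

int main(void)
{
	struct cqe c;

	/* Casting from the start of the entry points at the padding. */
	struct mini_cqe8 (*from_start)[8] =
		(struct mini_cqe8 (*)[8])(uintptr_t)&c;
	/* Casting from pkt_info skips the padding, as the patch does. */
	struct mini_cqe8 (*from_data)[8] =
		(struct mini_cqe8 (*)[8])(uintptr_t)&c.pkt_info;

	printf("mini-CQE array starts %zu bytes into the entry\n",
	       (size_t)((uintptr_t)from_data - (uintptr_t)from_start));
	return 0;
}

On such a host this prints 64, i.e. the eight 8-byte mini-CQEs occupy the
second half of the 128B entry.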


Acked-by: Nelio Laranjeiro <nelio.laranje...@6wind.com>

-- 
Nélio Laranjeiro
6WIND
