From: Eran Ben Elisha <era...@mellanox.com>

Function mlx5e_dealloc_rx_wqe uses the page pointer value as an
indication of a valid DMA mapping. If the mapping failed, the page was
released but the dangling pointer was kept. Store the page pointer only
after the DMA mapping succeeds, to avoid an invalid page DMA unmap.
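
For context, an illustrative sketch (not the exact driver code; the
helper name below is made up for this example) of why the stale pointer
is dangerous: the free path takes a non-NULL dma_info->page to mean
"this fragment holds a valid DMA mapping".

	/* Illustrative only: a release path that trusts dma_info->page. */
	static void example_page_release(struct mlx5e_rq *rq,
					 struct mlx5e_dma_info *dma_info)
	{
		if (!dma_info->page)	/* nothing was mapped */
			return;

		/* Before this fix, a failed dma_map_page() left
		 * dma_info->page pointing at an already-freed page, so
		 * the unmap below used an invalid dma_info->addr and the
		 * page got freed a second time.
		 */
		dma_unmap_page(rq->pdev, dma_info->addr,
			       RQ_PAGE_SIZE(rq), rq->buff.map_dir);
		put_page(dma_info->page);
		dma_info->page = NULL;
	}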

Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
Signed-off-by: Eran Ben Elisha <era...@mellanox.com>
Signed-off-by: Saeed Mahameed <sae...@mellanox.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 325b2c8c1c6d..7344433259fc 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -222,13 +222,13 @@ static inline int mlx5e_page_alloc_mapped(struct mlx5e_rq *rq,
        if (unlikely(!page))
                return -ENOMEM;
 
-       dma_info->page = page;
        dma_info->addr = dma_map_page(rq->pdev, page, 0,
                                      RQ_PAGE_SIZE(rq), rq->buff.map_dir);
        if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
                put_page(page);
                return -ENOMEM;
        }
+       dma_info->page = page;
 
        return 0;
 }
-- 
2.13.0
