From: Emil Tantilov <emil.s.tanti...@intel.com>

Add a check for build_skb enabled rings in ixgbe_dma_sync_frag(). In that
case &skb_shinfo(skb)->frags[0] may not always be set, which can lead to
a crash. Instead, derive the page offset from skb->data.
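
For clarity, the masking used to derive the offset simply takes the low
bits of the skb->data address, i.e. the offset of the data within its
page. A minimal illustration (values are hypothetical, assuming 4K pages
where ~PAGE_MASK == 0xfffUL), not part of the patch:

	unsigned long addr   = 0xffff880012345abcUL; /* hypothetical skb->data */
	unsigned long offset = addr & ~PAGE_MASK;    /* 0xabc: offset into the page */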

Fixes: 42073d91a214 ("ixgbe: Have the CPU take ownership of the buffers sooner")
CC: stable <sta...@vger.kernel.org>
Reported-by: Ambarish Soman <aso...@redhat.com>
Suggested-by: Alexander Duyck <alexander.h.du...@intel.com>
Signed-off-by: Emil Tantilov <emil.s.tanti...@intel.com>
Tested-by: Andrew Bowers <andrewx.bow...@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirs...@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 0da5aa2c8aba..9fc063af233c 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1888,6 +1888,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
                                     ixgbe_rx_pg_size(rx_ring),
                                     DMA_FROM_DEVICE,
                                     IXGBE_RX_DMA_ATTR);
+       } else if (ring_uses_build_skb(rx_ring)) {
+               unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
+
+               dma_sync_single_range_for_cpu(rx_ring->dev,
+                                             IXGBE_CB(skb)->dma,
+                                             offset,
+                                             skb_headlen(skb),
+                                             DMA_FROM_DEVICE);
        } else {
                struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
 
-- 
2.14.3
