The MANA driver has two distinct DMA paths for RX buffers:

1. Without PP_FLAG_DMA_MAP: the driver maps full pages manually,
   creating page-aligned mappings where the DMA offset is always
   zero.

2. With PP_FLAG_DMA_MAP: page_pool uses sub-page fragments, where
   multiple RX buffers share a single page. The pool maps the whole
   page once, and subsequent allocations use offsets into that region
   (see the sketch after this list).
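To make the sharing in path 2 concrete, here is a rough sketch of
fragment allocation. It is not taken from the driver; `pool` and
`rxbuf_len` are placeholder names:

	/* Illustrative only, not part of this patch. With
	 * PP_FLAG_DMA_MAP, page_pool maps each page once and
	 * page_pool_alloc_frag() carves sub-page fragments out of it.
	 */
	unsigned int offset;
	struct page *page;
	dma_addr_t da;

	page = page_pool_alloc_frag(pool, &offset, rxbuf_len, GFP_ATOMIC);
	if (!page)
		return NULL;

	/* All fragments of one page share a single DMA mapping; only
	 * the offset differs. Without PP_FLAG_DMA_MAP the driver
	 * instead calls dma_map_page() on a full page itself, so the
	 * offset is always zero.
	 */
	da = page_pool_get_dma_addr(page) + offset;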
Path 2 is problematic in two scenarios where DMA must go through
SWIOTLB bounce buffers:

- Confidential VMs (AMD SEV-SNP, Intel TDX): guest memory is
  encrypted and the NIC cannot access it directly due to the lack of
  TDISP support, so all DMA must use SWIOTLB bounce buffers.

- Force-bounce mode (swiotlb=force): all DMA is routed through
  bounce buffers regardless of whether the system is a CVM.

In both cases, sub-page RX buffer fragments allocated via page_pool
may not be compatible with bounce buffering, leading to failures in
the RX path. Each scenario maps onto an existing kernel predicate,
as sketched after this paragraph.
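Purely for reference (the helper name below is made up; the two
predicates are the ones this patch relies on):

	#include <linux/cc_platform.h>
	#include <linux/swiotlb.h>

	/* Hypothetical helper: true when DMA for this device goes
	 * through SWIOTLB bounce buffers for either reason above.
	 */
	static bool dma_uses_bounce_buffers(struct device *dev)
	{
		/* CVM case: guest memory is encrypted, so DMA buffers
		 * are bounced through shared SWIOTLB slots.
		 */
		if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
			return true;

		/* swiotlb=force case: bouncing is forced for the
		 * device even without memory encryption.
		 */
		return is_swiotlb_force_bounce(dev);
	}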
Add a check using is_swiotlb_force_bounce() in
mana_use_single_rxbuf_per_page() to detect when force-bounce is
active for the device, and in that case fall back to a single RX
buffer per page.

Note: this issue likely affects any NIC driver that uses page_pool
with sub-page fragment allocation under SWIOTLB, so a more general
fix at the page_pool level may be desirable; one possible shape for
such a check is sketched below. Feedback on the preferred approach
is welcome.
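As a strawman for that page_pool-level direction (the function name
is invented; pool->p.dev is the device the pool was created with):

	/* Sketch only: a pool-level guard that refuses sub-page
	 * fragments when the pool's device is forced through SWIOTLB,
	 * making callers fall back to one RX buffer per page.
	 */
	static bool page_pool_frags_allowed(const struct page_pool *pool)
	{
		if (pool->p.dev && is_swiotlb_force_bounce(pool->p.dev))
			return false;

		return true;
	}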
Signed-off-by: Dipayaan Roy <[email protected]>
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 2d44eaf932a8..841421baf0de 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -12,6 +12,7 @@
 #include <linux/pci.h>
 #include <linux/export.h>
 #include <linux/skbuff.h>
+#include <linux/swiotlb.h>
 #include <linux/cc_platform.h>
 
 #include <net/checksum.h>
@@ -748,10 +749,15 @@ static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
 static bool mana_use_single_rxbuf_per_page(struct mana_port_context *apc, u32 mtu)
 {
+	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
+
 	/* On confidential VMs with guest memory encryption, all DMA goes
 	 * through SWIOTLB bounce buffers. Sub-page RX fragments may not
 	 * be properly bounce-buffered, so use fullpage buffers.
 	 */
+	if (is_swiotlb_force_bounce(gc->dev))
+		return true;
+
 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
 		return true;
-- 
2.43.0