On Confidential VMs (CVMs) such as AMD SEV-SNP or Intel TDX, the guest
operating system's memory is encrypted. Current hardware lacks support
for TDISP (TEE Device Interface Security Protocol), so the NIC cannot
directly access this encrypted VM memory. Consequently, all DMA
operations must go through SWIOTLB bounce buffers.
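For illustration only (not part of this patch; dev and page are
placeholder names), the full-page, offset-0 mapping that SWIOTLB can
bounce safely looks roughly like this with the generic DMA API:

	void *va = page_address(page);
	dma_addr_t da;

	/* Maps exactly one PAGE_SIZE, page-aligned, at offset 0, so
	 * SWIOTLB can bounce the whole block.
	 */
	da = dma_map_single(dev, va, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, da))
		return -ENOMEM;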
In the MANA driver, there are currently two distinct paths for DMA
mapping:

1. Without PP_FLAG_DMA_MAP: The driver manually maps full pages for
   each packet. This creates standalone, page-aligned mappings where
   the offset is always zero.

2. With PP_FLAG_DMA_MAP: The driver optimizes memory use by letting
   page_pool hand out sub-page fragments (e.g., multiple RX buffers
   sharing a single page). page_pool maps the entire page once, and
   subsequent RX buffer allocations use offsets into this pre-mapped
   area.

When page_pool allocates sub-page RX buffer fragments, the bounce
buffer granularity may not align with these smaller fragment sizes,
leading to failures in the MANA driver RX path (an illustrative sketch
of this fragment allocation follows the patch).

Refactor the RX buffer decision into mana_use_single_rxbuf_per_page().
When cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) is true, the driver is
forced to use a single RX buffer per page. This ensures:

- Each RX buffer is exactly one PAGE_SIZE.
- The DMA offset is always 0.
- SWIOTLB maps full, page-aligned blocks.

Reviewed-by: Haiyang Zhang <[email protected]>
Signed-off-by: Dipayaan Roy <[email protected]>
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 21 +++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index a654b3699c4c..2d44eaf932a8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -12,6 +12,7 @@
 #include <linux/pci.h>
 #include <linux/export.h>
 #include <linux/skbuff.h>
+#include <linux/cc_platform.h>
 
 #include <net/checksum.h>
 #include <net/ip6_checksum.h>
@@ -744,6 +745,23 @@ static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
 	return va;
 }
 
+static bool
+mana_use_single_rxbuf_per_page(struct mana_port_context *apc, u32 mtu)
+{
+	/* On confidential VMs with guest memory encryption, all DMA goes
+	 * through SWIOTLB bounce buffers. Sub-page RX fragments may not
+	 * be properly bounce-buffered, so use full-page buffers.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+		return true;
+
+	/* For xdp and jumbo frames make sure only one packet fits per page. */
+	if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc))
+		return true;
+
+	return false;
+}
+
 /* Get RX buffer's data size, alloc size, XDP headroom based on MTU */
 static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
 			       int mtu, u32 *datasize, u32 *alloc_size,
@@ -754,8 +772,7 @@ static void mana_get_rxbuf_cfg(struct mana_port_context *apc,
 	/* Calculate datasize first (consistent across all cases) */
 	*datasize = mtu + ETH_HLEN;
 
-	/* For xdp and jumbo frames make sure only one packet fits per page */
-	if (mtu + MANA_RXBUF_PAD > PAGE_SIZE / 2 || mana_xdp_get(apc)) {
+	if (mana_use_single_rxbuf_per_page(apc, mtu)) {
 		if (mana_xdp_get(apc)) {
 			*headroom = XDP_PACKET_HEADROOM;
 			*alloc_size = PAGE_SIZE;
-- 
2.43.0
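Illustrative sketch of the PP_FLAG_DMA_MAP fragment path described
above. This is not the MANA driver's actual configuration: dev is a
placeholder device, and pool_size and the 2048-byte fragment size are
made-up values. Error handling is omitted.

	struct page_pool_params pprm = {
		.flags		= PP_FLAG_DMA_MAP,	/* page_pool maps each page once */
		.order		= 0,
		.pool_size	= 256,			/* illustrative value */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,			/* placeholder device */
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pprm);
	unsigned int offset;
	struct page *page;
	dma_addr_t da;

	/* Each fragment shares a pre-mapped page; the returned offset is
	 * typically non-zero, which is what clashes with SWIOTLB bounce
	 * buffer granularity on CVMs.
	 */
	page = page_pool_alloc_frag(pool, &offset, 2048, GFP_ATOMIC);
	if (page)
		da = page_pool_get_dma_addr(page) + offset;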

