On Mon, Mar 23, 2026 at 12:54:03PM -0400, Omar Elghoul wrote:
> On 3/23/26 11:52 AM, Michael S. Tsirkin wrote:
> > On Mon, Mar 23, 2026 at 11:01:31AM -0400, Omar Elghoul wrote:
> > > [...]
> > Well... I am not sure how I missed it. Obvious in hindsight:
> >
> > static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
> > void *buf, unsigned int len, void **ctx,
> > unsigned int *xdp_xmit,
> > struct virtnet_rq_stats *stats)
> > {
> > struct net_device *dev = vi->dev;
> > struct sk_buff *skb;
> > u8 flags;
> > if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
> > pr_debug("%s: short packet %i\n", dev->name, len);
> > DEV_STATS_INC(dev, rx_length_errors);
> > virtnet_rq_free_buf(vi, rq, buf);
> > return;
> > }
> > /* About the flags below:
> > * 1. Save the flags early, as the XDP program might overwrite them.
> > * These flags ensure packets marked as VIRTIO_NET_HDR_F_DATA_VALID
> > * stay valid after XDP processing.
> > * 2. XDP doesn't work with partially checksummed packets (refer to
> > * virtnet_xdp_set()), so packets marked as
> > * VIRTIO_NET_HDR_F_NEEDS_CSUM get dropped during XDP processing.
> > */
> > if (vi->mergeable_rx_bufs) {
> > flags = ((struct virtio_net_common_hdr *)buf)->hdr.flags;
> > skb = receive_mergeable(dev, vi, rq, buf, ctx, len,
> > xdp_xmit, stats);
> > } else if (vi->big_packets) {
> > void *p = page_address((struct page *)buf);
> > flags = ((struct virtio_net_common_hdr *)p)->hdr.flags;
> > skb = receive_big(dev, vi, rq, buf, len, stats);
> > } else {
> > flags = ((struct virtio_net_common_hdr *)buf)->hdr.flags;
> > skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit,
> > stats);
> > }
> >
> >
> > So we are reading the header before the DMA sync, which happens
> > within receive_mergeable and friends:
> Thank you for your analysis and explanation.
> >
> > static struct sk_buff *receive_mergeable(struct net_device *dev,
> > struct virtnet_info *vi,
> > struct receive_queue *rq,
> > void *buf,
> > void *ctx,
> > unsigned int len,
> > unsigned int *xdp_xmit,
> > struct virtnet_rq_stats *stats)
> > {
> > struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> > int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > struct page *page = virt_to_head_page(buf);
> > int offset = buf - page_address(page);
> > struct sk_buff *head_skb, *curr_skb;
> > unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> > unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> > head_skb = NULL;
> > if (rq->use_page_pool_dma)
> > page_pool_dma_sync_for_cpu(rq->page_pool, page, offset, len);
> >
> >
> > Just as a test, the patch below should fix it (compile-tested only),
> > but the real fix is more complex, since we need to be careful to
> > avoid doing the expensive sync twice.
>
> I applied your patch and tested it on my system. With this change, I could
> not reproduce the same error anymore. I would be happy to test a proper fix
> once you have one.
>
> >
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 97035b49bae7..57b4f5954bed 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -931,9 +931,19 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> > static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
> > {
> > + void *buf;
> > +
> > BUG_ON(!rq->page_pool);
> > - return virtqueue_get_buf_ctx(rq->vq, len, ctx);
> > + buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
> > + if (buf && rq->use_page_pool_dma && *len) {
> > + struct page *page = virt_to_head_page(buf);
> > + int offset = buf - page_address(page);
> > +
> > + page_pool_dma_sync_for_cpu(rq->page_pool, page, offset, *len);
> > + }
> > +
> > + return buf;
> > }
> > static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
> >
> >
> >
> >
just sent