On Mon, Feb 16, 2026 at 11:48:27AM +0100, Alexander Lobakin wrote:
> From: Alexander Lobakin <[email protected]>
> Date: Mon, 16 Feb 2026 11:46:05 +0100
> 
> > From: Zaremba, Larysa <[email protected]>
> > Date: Thu, 12 Feb 2026 19:33:22 +0100
> > 
> >> The only user of the frag_size field in XDP RxQ info is
> >> bpf_xdp_frags_increase_tail(). It clearly expects the whole buffer size
> >> instead of the DMA write size. Different assumptions in the idpf driver
> >> configuration lead to negative tailroom.
> >>
> >> To make things worse, buffer sizes are not actually uniform in idpf when
> >> splitq is enabled, as there are several buffer queues, so rxq->rx_buf_size
> >> is meaningless in this case.
> >>
> >> Use rxq->truesize as the frag_size for singleq and the truesize of the
> >> first bufq in AF_XDP ZC, as there is only one. Disable growing tail for
> >> regular splitq.
> >>
> >> Fixes: ac8a861f632e ("idpf: prepare structures to support XDP")
> >> Reviewed-by: Aleksandr Loktionov <[email protected]>
> >> Signed-off-by: Larysa Zaremba <[email protected]>
> >> ---
> >>  drivers/net/ethernet/intel/idpf/xdp.c | 8 +++++++-
> >>  drivers/net/ethernet/intel/idpf/xsk.c | 1 +
> >>  2 files changed, 8 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c
> >> index 958d16f87424..a152c9a26976 100644
> >> --- a/drivers/net/ethernet/intel/idpf/xdp.c
> >> +++ b/drivers/net/ethernet/intel/idpf/xdp.c
> >> @@ -46,11 +46,17 @@ static int __idpf_xdp_rxq_info_init(struct idpf_rx_queue *rxq, void *arg)
> >>  {
> >>    const struct idpf_vport *vport = rxq->q_vector->vport;
> >>    bool split = idpf_is_queue_model_split(vport->rxq_model);
> >> +  u32 frag_size = 0;
> >>    int err;
> >>  
> >> +  if (idpf_queue_has(XSK, rxq) && split)
> >> +          frag_size = rxq->bufq_sets[0].bufq.truesize;
> >> +  else if (!split)
> >> +          frag_size = rxq->truesize;
> > 
> > XDP and XSK are supported only in splitq mode, so you can remove
> > the second condition and change the first one to just `has(XSK)`.
> >

But the function itself handles the singleq case, so I do not see why
frag_size should be treated differently there.
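
After all, bpf_xdp_frags_increase_tail() does the same tailroom math
regardless of the queue model. Roughly (a simplified sketch of the check in
net/core/filter.c from memory, not the verbatim code):

	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags - 1];

	/* frag_size == 0 disables tail growing altogether */
	if (!rxq->frag_size || rxq->frag_size > xdp->frame_sz)
		return -EOPNOTSUPP;

	/* The subtraction goes negative when frag_size is only the DMA
	 * write size instead of the whole buffer size, so growing the
	 * tail always fails.
	 */
	if (unlikely(offset > (int)(rxq->frag_size - skb_frag_size(frag) -
				    skb_frag_off(frag))))
		return -EOPNOTSUPP;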

Not that I am against removing this logic, it would look neater without
these conditions.
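
If we do drop them, it would be something like this (an untested sketch,
per your suggestion):

	u32 frag_size = 0;

	/* XDP/XSK is splitq-only per the above, so only the XSK check
	 * remains; frag_size stays 0 (tail growing disabled) for
	 * regular splitq.
	 */
	if (idpf_queue_has(XSK, rxq))
		frag_size = rxq->bufq_sets[0].bufq.truesize;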

> >> +
> >>    err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx,
> >>                             rxq->q_vector->napi.napi_id,
> >> -                           rxq->rx_buf_size);
> >> +                           frag_size);
> >>    if (err)
> >>            return err;
> >>  
> >> diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/intel/idpf/xsk.c
> >> index fd2cc43ab43c..febe1073b9b4 100644
> >> --- a/drivers/net/ethernet/intel/idpf/xsk.c
> >> +++ b/drivers/net/ethernet/intel/idpf/xsk.c
> >> @@ -401,6 +401,7 @@ int idpf_xskfq_init(struct idpf_buf_queue *bufq)
> >>    bufq->pending = fq.pending;
> >>    bufq->thresh = fq.thresh;
> >>    bufq->rx_buf_size = fq.buf_len;
> >> +  bufq->truesize = xsk_pool_get_rx_frag_step(fq.pool);
> 
> Better to do that in libeth_xdp rather than here?
>

Something like this?

diff --git a/drivers/net/ethernet/intel/idpf/xsk.c b/drivers/net/ethernet/intel/idpf/xsk.c
index febe1073b9b4..de87455b92d7 100644
--- a/drivers/net/ethernet/intel/idpf/xsk.c
+++ b/drivers/net/ethernet/intel/idpf/xsk.c
@@ -401,7 +401,7 @@ int idpf_xskfq_init(struct idpf_buf_queue *bufq)
        bufq->pending = fq.pending;
        bufq->thresh = fq.thresh;
        bufq->rx_buf_size = fq.buf_len;
-       bufq->truesize = xsk_pool_get_rx_frag_step(fq.pool);
+       bufq->truesize = fq.chunk_align;

        if (!idpf_xskfq_refill(bufq))
                netdev_err(bufq->pool->netdev,
diff --git a/drivers/net/ethernet/intel/libeth/xsk.c b/drivers/net/ethernet/intel/libeth/xsk.c
index 846e902e31b6..5b298558ecfd 100644
--- a/drivers/net/ethernet/intel/libeth/xsk.c
+++ b/drivers/net/ethernet/intel/libeth/xsk.c
@@ -167,6 +167,7 @@ int libeth_xskfq_create(struct libeth_xskfq *fq)
        fq->pending = fq->count;
        fq->thresh = libeth_xdp_queue_threshold(fq->count);
        fq->buf_len = xsk_pool_get_rx_frame_size(fq->pool);
+       fq->chunk_align = xsk_pool_get_rx_frag_step(fq->pool);

        return 0;
 }
diff --git a/include/net/libeth/xsk.h b/include/net/libeth/xsk.h
index 481a7b28e6f2..a3ea90d30d17 100644
--- a/include/net/libeth/xsk.h
+++ b/include/net/libeth/xsk.h
@@ -598,6 +598,7 @@ __libeth_xsk_run_pass(struct libeth_xdp_buff *xdp,
  * @thresh: threshold below which the queue is refilled
  * @buf_len: HW-writeable length per each buffer
  * @nid: ID of the closest NUMA node with memory
+ * @chunk_align: step between consecutive buffers, 0 if none exists
  */
 struct libeth_xskfq {
        struct_group_tagged(libeth_xskfq_fp, fp,
@@ -615,6 +616,8 @@ struct libeth_xskfq {

        u32                     buf_len;
        int                     nid;
+
+       u32                     chunk_align;
 };

 int libeth_xskfq_create(struct libeth_xskfq *fq);


> >>  
> >>    if (!idpf_xskfq_refill(bufq))
> >>            netdev_err(bufq->pool->netdev,
> 
> Thanks,
> Olek
