On 11/25/2025 9:35 AM, Alexander Lobakin wrote:
Since recently, page_pool_create() accepts an optional stack index of
the Rx queue which the pool will be created for. It can then be
used on the control path for things like memory providers.

Add the same field to libeth_fq and pass the index from all the
drivers using libeth for managing Rx, to simplify implementing MP
support later.

idpf has one libeth_fq per buffer/fill queue and each Rx queue has
two fill queues, but since fill queues can never be shared, we can
store the corresponding Rx queue index there during initialization
and pass it to libeth.

Reviewed-by: Jacob Keller <[email protected]>
Reviewed-by: Aleksandr Loktionov <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
  drivers/net/ethernet/intel/idpf/idpf_txrx.h |  2 ++
  include/net/libeth/rx.h                     |  2 ++
  drivers/net/ethernet/intel/iavf/iavf_txrx.c |  1 +
  drivers/net/ethernet/intel/ice/ice_base.c   |  2 ++
  drivers/net/ethernet/intel/idpf/idpf_txrx.c | 13 +++++++++++++
  drivers/net/ethernet/intel/libeth/rx.c      |  1 +
  6 files changed, 21 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 75b977094741..1f368c4e0a76 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -744,6 +744,7 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
   * @q_id: Queue id
   * @size: Length of descriptor ring in bytes
   * @dma: Physical address of ring
+ * @rxq_idx: stack index of the corresponding Rx queue
   * @q_vector: Backreference to associated vector
   * @rx_buffer_low_watermark: RX buffer low watermark
   * @rx_hbuf_size: Header buffer size
@@ -788,6 +789,7 @@ struct idpf_buf_queue {
        dma_addr_t dma;
        struct idpf_q_vector *q_vector;
+       u16 rxq_ixd;

I believe this is supposed to be rxq_idx?

Thanks,
Tony

        u16 rx_buffer_low_watermark;
        u16 rx_hbuf_size;
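
For readers not following libeth closely, here is a rough sketch of the
driver side of this change, i.e. how a libeth-based driver would hand the
Rx queue's stack index down when creating a fill queue, so that libeth can
forward it to the page_pool on the control path. This is not part of the
patch: the helper, its parameters, and the exact name of the new libeth_fq
member (shown here as ->rxq_idx, matching the idpf kernel-doc above) are
assumptions based on the commit message and the quoted diff.

static int example_create_fill_queue(struct libeth_fq *fq,
				     struct napi_struct *napi,
				     u32 desc_count, u16 rxq_idx)
{
	/* Standard libeth fill-queue parameters */
	fq->count   = desc_count;
	fq->nid     = NUMA_NO_NODE;

	/* New with this series (name assumed): stack index of the Rx
	 * queue this fill queue feeds, forwarded by libeth to the
	 * page_pool so memory providers can later be attached to the
	 * right queue.
	 */
	fq->rxq_idx = rxq_idx;

	return libeth_rx_fq_create(fq, napi);
}

In idpf's case, since a fill queue is never shared between Rx queues, the
index can simply be stashed in the buffer queue at init time and passed to
a helper like the one above.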
