> -----Original Message-----
> From: Intel-wired-lan <[email protected]> On Behalf
> Of Michal Swiatkowski
> Sent: Tuesday, October 28, 2025 8:07 AM
> To: [email protected]
> Cc: [email protected]; [email protected]; Lobakin, Aleksander
> <[email protected]>; Kitszel, Przemyslaw
> <[email protected]>; Keller, Jacob E
> <[email protected]>; Michal Swiatkowski
> <[email protected]>
> Subject: [Intel-wired-lan] [PATCH iwl-next v2] ice: use
> netif_get_num_default_rss_queues()
> 
> On some high-core-count systems (like AMD EPYC Bergamo or Intel
> Clearwater Forest), loading the ice driver with default values can
> lead to queue/IRQ exhaustion, leaving no additional resources for
> SR-IOV.
> 
> In most cases there is no performance benefit to using more than
> half of num_online_cpus() queues. Limit the default to that by
> using the generic netif_get_num_default_rss_queues().
> 
> The number of queues can still be raised up to num_online_cpus()
> using ethtool:
> $ ethtool -L ethX combined max_cpu
> 
It might be nicer to use $(nproc) here:
 $ ethtool -L ethX combined $(nproc)
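
For context, netif_get_num_default_rss_queues() on recent kernels counts
one CPU per physical core (skipping SMT siblings) and then takes roughly
half of that. A simplified sketch of the net/core/dev.c logic, from
memory (the real code also special-cases kdump kernels):

int netif_get_num_default_rss_queues(void)
{
	cpumask_var_t cpus;
	int cpu, count = 0;

	if (!zalloc_cpumask_var(&cpus, GFP_KERNEL))
		return 1;

	cpumask_copy(cpus, cpu_online_mask);
	for_each_cpu(cpu, cpus) {
		++count;
		/* count each physical core once, drop its SMT siblings */
		cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
	}
	free_cpumask_var(cpus);

	/* default to about half the physical cores */
	return count > 2 ? DIV_ROUND_UP(count, 2) : count;
}

So on an SMT system $(nproc) can be up to ~4x this new default.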

> This change affects only the default queue count.
> 
> Signed-off-by: Michal Swiatkowski <[email protected]>
> ---
> v1 --> v2:
>  * Follow Olek's comment and switch from custom limiting to the generic
>    netif_...() function.
>  * Add more info in commit message (Paul)
>  * Drop RB tags, as it is a different patch now
> ---
>  drivers/net/ethernet/intel/ice/ice_irq.c |  5 +++--
>  drivers/net/ethernet/intel/ice/ice_lib.c | 12 ++++++++----
>  2 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
> index 30801fd375f0..1d9b2d646474 100644
> --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> @@ -106,9 +106,10 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
>  #define ICE_RDMA_AEQ_MSIX 1
>  static int ice_get_default_msix_amount(struct ice_pf *pf)
>  {
> -     return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> +     return ICE_MIN_LAN_OICR_MSIX + netif_get_num_default_rss_queues() +
>              (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
> -            (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0);
> +            (ice_is_rdma_ena(pf) ? netif_get_num_default_rss_queues() +
> +                                   ICE_RDMA_AEQ_MSIX : 0);
>  }
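
One more thought: netif_get_num_default_rss_queues() rebuilds a cpumask
and walks the online CPUs on every call, so it might be worth caching
the result in a local variable here. Untested sketch (num_def_rss is
just a local name for illustration):

static int ice_get_default_msix_amount(struct ice_pf *pf)
{
	int num_def_rss = netif_get_num_default_rss_queues();

	return ICE_MIN_LAN_OICR_MSIX + num_def_rss +
	       (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
	       (ice_is_rdma_ena(pf) ? num_def_rss + ICE_RDMA_AEQ_MSIX : 0);
}

Not a blocker.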
> 
>  /**
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index bac481e8140d..e366d089bef9 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -159,12 +159,14 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
> 
>  static u16 ice_get_rxq_count(struct ice_pf *pf)
>  {
> -     return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> +     return min(ice_get_avail_rxq_count(pf),
> +                netif_get_num_default_rss_queues());
>  }
min(a, b) takes the type of its arguments, which here will be int
because netif_get_num_default_rss_queues() returns int. The result is
then implicitly truncated to u16 on return.
What do you think about making this explicit with min_t() to avoid
type surprises?
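I.e. something like this (untested):

static u16 ice_get_rxq_count(struct ice_pf *pf)
{
	return min_t(u16, ice_get_avail_rxq_count(pf),
		     netif_get_num_default_rss_queues());
}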

> 
>  static u16 ice_get_txq_count(struct ice_pf *pf)
>  {
> -     return min(ice_get_avail_txq_count(pf), num_online_cpus());
> +     return min(ice_get_avail_txq_count(pf),
> +                netif_get_num_default_rss_queues());
>  }

Same min_t() here?
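E.g. (untested):

static u16 ice_get_txq_count(struct ice_pf *pf)
{
	return min_t(u16, ice_get_avail_txq_count(pf),
		     netif_get_num_default_rss_queues());
}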

Otherwise, fine for me.

Reviewed-by: Aleksandr Loktionov <[email protected]>
