Hello Stephen,
On Thu, 7 May 2026 at 04:51, Stephen Hemminger
<[email protected]> wrote:
>
> On Wed, 6 May 2026 14:35:48 +0200
> David Marchand <[email protected]> wrote:
>
> > Since the commit 88ac4396ad29 ("ethdev: add VMDq support"),
> > VMDq has been imposing a maximum number of mac addresses in the
> > mac_addr_add/del API.
> >
> > Nowadays, new Intel drivers do not support the feature, and few
> > other drivers implement it.
> >
> > This series proposes to flag drivers that support the feature, and
> > remove the limit of number of mac addresses for others.
> >
> > Next step could be to remove the VMDq pool notion from the generic API.
> > However, I have some concerns about this, as changing the quite stable
> > mac_addr_add/del API now seems like a lot of noise for not much benefit.
> >
>
>
> AI review (manual not automated) saw these:
>
> On Wed, 6 May 2026 14:35:49 +0200
> David Marchand <[email protected]> wrote:
>
> [PATCH v2 1/5] ethdev: skip VMDq pools unless configured
> --------------------------------------------------------
>
> Warning: duplicate-add no longer returns 0 in non-VMDq mode
>
>     } else if (vmdq) {
>             pool_mask = dev->data->mac_pool_sel[index];
>             /* Check if both MAC address and pool is already there, and do nothing */
>             if (pool_mask & RTE_BIT64(pool))
>                     return 0;
>     }
>
> When index >= 0 (MAC already present) and VMDq is not configured,
> the early "already there" return that existed before is now skipped
> entirely. The driver's mac_addr_add op gets re-invoked and
> mac_pool_sel is no longer updated for !vmdq, so subsequent calls
> keep re-invoking it. Pre-patch behaviour was idempotent (return 0).
>
> Suggested fix:
>
>     } else if (vmdq) {
>             pool_mask = dev->data->mac_pool_sel[index];
>             if (pool_mask & RTE_BIT64(pool))
>                     return 0;
>     } else {
>             return 0; /* MAC already installed, only one pool */
>     }
Yes, good catch.
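For the record, a minimal standalone model of the fixed control flow; the
struct and function names below are illustrative stand-ins, not the actual
ethdev internals:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of one MAC table slot: "used" means the address is
 * installed, pool_mask tracks which VMDq pools reference it. */
struct mac_slot {
	bool used;
	uint64_t pool_mask;
};

/* Returns 1 if the driver op would be invoked, 0 if the call is a no-op.
 * "slot_used" stands in for the index >= 0 lookup in the real code. */
static int
mac_add(struct mac_slot *s, bool vmdq, uint32_t pool)
{
	if (s->used) {
		if (vmdq) {
			/* MAC present: only invoke the driver for a new pool. */
			if (s->pool_mask & (UINT64_C(1) << pool))
				return 0;
		} else {
			/* MAC already installed, only one pool: idempotent. */
			return 0;
		}
	}
	s->used = true;
	if (vmdq)
		s->pool_mask |= UINT64_C(1) << pool;
	return 1;
}
```

With the extra else branch, duplicate adds stay idempotent in non-VMDq
mode, matching the pre-patch behaviour.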
>
>
> [PATCH v2 2/5] ethdev: announce VMDq capability
> -----------------------------------------------
>
> Warning: capability advertised even when max_vmdq_pools may be 0
>
> The capability bit is set unconditionally in several drivers where
> VMDq availability is conditional on hardware variant or runtime
> configuration:
>
> - net/intel/e1000/igb_ethdev.c: max_vmdq_pools = 0 for e1000_82575,
> e1000_i354 (no setting), e1000_i210, e1000_i211.
> - net/intel/i40e/i40e_ethdev.c: max_vmdq_pools is gated on
> (pf->flags & I40E_FLAG_VMDQ).
> - net/bnxt/bnxt_ethdev.c: max_vmdq_pools is set to 0 when there
> are not enough resources (see "Not enough resources to support
> VMDq" path).
>
> With this patch a user can pass mq_mode |= RTE_ETH_MQ_RX_VMDQ_FLAG
> through rte_eth_dev_configure() because the new capability check
> passes, but the driver has no pools to honour the request.
>
> Suggested fix: gate the capa bit on max_vmdq_pools > 0, or set it
> per-MAC-type / per-flag in each driver.
I realise now that this new device capability would also break VMDq
with failsafe on top of VMDq-capable ports.
In the end, as suggested here, max_vmdq_pools already acts as a gate,
so no new capability is needed.
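To make that concrete, a sketch of a configure-time gate keyed on
max_vmdq_pools instead of a capability bit; the flag value and function
name here are made up for illustration:

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative stand-in for RTE_ETH_MQ_RX_VMDQ_FLAG. */
#define MQ_RX_VMDQ_FLAG UINT32_C(0x4)

/* Reject a VMDq mq_mode when the device reports no VMDq pools; the
 * reported pool count itself is the capability gate. */
static int
check_vmdq_mq_mode(uint32_t mq_mode, uint16_t max_vmdq_pools)
{
	if ((mq_mode & MQ_RX_VMDQ_FLAG) && max_vmdq_pools == 0)
		return -EINVAL;
	return 0;
}
```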
>
> Warning: VMDq capability advertised on VFs that do not configure VMDq
>
> ixgbevf_dev_info_get() and txgbevf_dev_info_get() now advertise
> RTE_ETH_DEV_CAPA_VMDQ, but ixgbevf_dev_configure() and
> txgbevf_dev_configure() only handle RTE_ETH_MQ_RX_RSS_FLAG. The
> feature matrices doc/guides/nics/features/ixgbe_vf.ini and
> txgbe_vf.ini do not list "VMDq = Y". A VF is itself a member of a
> PF VMDq pool; advertising the capa from the VF is misleading.
>
> Warning: ipn3ke advertises VMDq=Y in its features file but is not
> updated by the patch
>
> doc/guides/nics/features/ipn3ke.ini has "VMDq = Y" but the patch
> does not add RTE_ETH_DEV_CAPA_VMDQ to ipn3ke. Either update the
> driver or correct the feature matrix.
>
> Warning: missing release note
>
> The new RTE_ETH_DEV_CAPA_VMDQ public bit and the new rejection in
> rte_eth_dev_configure() (returning -EINVAL when an application sets
> a VMDq mq_mode on a non-VMDq device) are user-visible API and
> behavioural changes. release_26_07.rst should mention them under
> "API Changes".
>
> [PATCH v2 3/5] ethdev: hide VMDq internal sizes
> -----------------------------------------------
>
> Warning: public defines removed without deprecation cycle
>
> RTE_ETH_NUM_RECEIVE_MAC_ADDR and RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
> were declared in the application-facing rte_ethdev.h. Moving them
> to ethdev_driver.h removes them from the public API. Per the DPDK
> API/ABI policy this normally needs a prior deprecation notice; at
> minimum it warrants a release_26_07.rst "API Changes" entry.
Mixed feelings about it.
I can add a note, but I think it is noise.
>
>
> [PATCH v2 4/5] net/iavf: accept up to 32k unicast MAC addresses
> ---------------------------------------------------------------
>
> Warning: dead store in iavf_add_del_uc_addr_bulk()
>
>     for (uint32_t i = 0; i < nb_addrs; i++) {
>             size_t buf_len = sizeof(struct virtchnl_ether_addr_list) +
>                     sizeof(struct virtchnl_ether_addr) * list->num_elements;
>             ...
>             buf_len = sizeof(struct virtchnl_ether_addr_list) +
>                     sizeof(struct virtchnl_ether_addr) * list->num_elements;
>
> The first initialiser is unconditionally overwritten by the second
> assignment before any read. Drop the initialiser (or move the
> declaration to the use site).
Good catch. Rebase damage on my side: I dropped some
experiments/changes but forgot to update the code afterwards.
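The corrected shape, trimmed down to the size computation with buf_len
declared at its use site; the structures are reduced to stand-ins for
illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for the virtchnl structures, just to show the
 * shape of the fix. */
struct virtchnl_ether_addr {
	uint8_t addr[6];
};
struct virtchnl_ether_addr_list {
	uint16_t num_elements;
	struct virtchnl_ether_addr list[];
};

/* One computation per batch, at the point of use: no dead store left
 * for static analysers to flag. */
static size_t
batch_buf_len(uint16_t num_elements)
{
	return sizeof(struct virtchnl_ether_addr_list) +
	       sizeof(struct virtchnl_ether_addr) * num_elements;
}
```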
>
> Warning: missing release note for max_mac_addrs jump
>
> max_mac_addrs goes from 64 to 32768 and dev_data->mac_addrs is now
> allocated at RTE_ETHER_ADDR_LEN * 32768 = 192 KiB per VF port. This
> is a notable behaviour change for iavf users and worth a
> release_26_07.rst entry.
>
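The allocation arithmetic in that last warning checks out; a quick
sanity sketch with the constants inlined (macro names are illustrative):

```c
#include <stdint.h>

#define ETHER_ADDR_LEN 6	/* RTE_ETHER_ADDR_LEN */
#define IAVF_MAX_MACS 32768	/* new iavf max_mac_addrs */

/* Per-port size of dev_data->mac_addrs after the bump. */
static uint32_t
mac_table_bytes(void)
{
	return (uint32_t)ETHER_ADDR_LEN * IAVF_MAX_MACS;
}
```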
--
David Marchand