Hi,

> -----Original Message-----
> From: Thomas Monjalon <tho...@monjalon.net>
> Sent: Friday, February 17, 2023 1:46 AM
> To: Jiawei(Jonny) Wang <jiaw...@nvidia.com>
> Cc: Slava Ovsiienko <viachesl...@nvidia.com>; Ori Kam <or...@nvidia.com>;
> andrew.rybche...@oktetlabs.ru; Aman Singh <aman.deep.si...@intel.com>;
> Yuying Zhang <yuying.zh...@intel.com>; Ferruh Yigit <ferruh.yi...@amd.com>;
> dev@dpdk.org; Raslan Darawsheh <rasl...@nvidia.com>
> Subject: Re: [PATCH v5 2/2] ethdev: add Aggregated affinity match item
> 
> For the title, I suggest
> ethdev: add flow matching of aggregated port
> 
> 14/02/2023 16:48, Jiawei Wang:
> > When multiple ports are aggregated into a single DPDK port
> > (for example: Linux bonding, DPDK bonding, failsafe, etc.), we want
> > to know which port is used for Rx and Tx.
> >
> > This patch allows mapping an Rx queue to an aggregated port by using
> > a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.
> >
> > When the aggregated affinity is used as a matching item in a flow
> > rule, and the same affinity value is set by calling
> > rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from the
> > same port as the one they were received on.
> > The affinity numbering starts from 1, so trying to match on
> > aggr_affinity 0 will result in an error.
> >
> > Add the testpmd command line to match the new item:
> >     flow create 0 ingress group 0 pattern aggr_affinity affinity is 1 /
> >     end actions queue index 0 / end
> >
> > The above command creates a flow rule on a single DPDK port that
> > matches packets from the first physical port and redirects them
> > to Rx queue 0.
> >
> > Signed-off-by: Jiawei Wang <jiaw...@nvidia.com>
> 
> Acked-by: Thomas Monjalon <tho...@monjalon.net>
> 

OK, I will update the title in the next patch, thanks for the Ack.
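
For reference, below is a minimal C sketch of the equivalent setup through
the rte_flow API. The helper name and the hard-coded port/queue/affinity
values are illustrative only (taken from the testpmd example above), and it
assumes a DPDK build that includes both patches of this series:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Illustrative helper: match packets received via the first
     * aggregated port (affinity 1) and steer them to Rx queue 0,
     * then map Tx queue 0 to the same affinity so replies leave
     * through the same physical port. */
    static struct rte_flow *
    setup_aggr_affinity(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_aggr_affinity spec = { .affinity = 1 };
        struct rte_flow_item pattern[] = {
            {
                .type = RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY,
                .spec = &spec,
            },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        /* Affinity numbering starts from 1; matching on 0 is an error. */
        if (rte_eth_dev_map_aggr_tx_affinity(port_id, 0, 1) != 0)
            return NULL;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }

Note that the two calls are independent: the flow rule only steers Rx
traffic, while the map call controls which physical port Tx queue 0 uses.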
