Hi Dariusz,

Indeed, we are using ConnectX-6 Dx cards, which, according to your explanation,
is why such matching does not work. Thanks for your valuable clarification.

Best regards,
Tao Li

From: Dariusz Sosnowski <dsosnow...@nvidia.com>
Date: Monday, 9 June 2025 at 12:40
To: Li, Tao <tao.l...@sap.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: Re: [net/mlx5] Failed to install async pattern template matching both 
src and dst IPv6 addresses (DPDK 24.11.2)
Hi,

On Tue, May 06, 2025 at 07:05:13AM +0000, Li, Tao wrote:
> Hi All,
>
> I am experimenting with the async template APIs to install flow rules that
> perform matching on IPv6 packets containing a TCP payload, using DPDK
> 24.11.2. However, I found that creating a pattern template that tries to
> match both the source and destination IPv6 addresses results in an error.
> In the experiment, the following testpmd commands were used.
>
> <Install async rules>
> port stop 2
> port stop 1
> port stop 0
> flow configure 2 queues_number 1 queues_size 10 counters_number 0 
> aging_counters_number 0 meters_number 0 flags 0
> flow configure 0 queues_number 1 queues_size 10 counters_number 0 
> aging_counters_number 0 meters_number 0 flags 0
> flow configure 1 queues_number 1 queues_size 10 counters_number 0 
> aging_counters_number 0 meters_number 0 flags 0
> port start all
>
> # command leading to the error; matching only the src or only the dst address works
> flow pattern_template 0 create transfer relaxed no pattern_template_id 10
> template represented_port ethdev_port_id is 1 / eth type is 0x86dd / ipv6 dst
> is ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff src is
> ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff proto is 0x0006 / tcp src is 0xffff
> / end
>
> flow actions_template 0 create transfer actions_template_id 10 template
> represented_port / end mask represented_port / end
>
> flow template_table 0 create group 0 priority 0 transfer wire_orig
> table_id 5 rules_number 8 pattern_template 10 actions_template 10
>
> flow queue 0 create 0 template_table 5 pattern_template 0 actions_template 0
> postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd
> / ipv6 dst is 2001:0db8:beef:0001::1 src is fdbe:ef00:dead:beef::2 proto is
> 0x0006 / tcp src is 0x07d2 / end actions represented_port ethdev_port_id 2 /
> end
> flow push 0 queue 0
> </Install async rules>
>
> The error emitted from the driver is:
> <emitted error>
> mlx5_net: [mlx5dr_matcher_create]: Failed to initialise matcher: 7
> Pattern template #10 destroyed
> port_flow_complain(): Caught PMD error type 1 (cause unspecified): failed to 
> validate pattern template: Argument list too long
> </emitted error>
>
> One possible workaround is to use two connected groups of flow rules to
> match the IPv6 source and destination addresses separately, which of course
> makes the template and flow rule creation more complicated. Thus, I would
> like to ask: is it the intended behavior that matching both the source and
> destination IPv6 addresses in a single pattern template is NOT supported, as
> observed in the above experiment?

What kind of NIC are you using?

With NVIDIA NICs, matching on both IPv6 addresses in a single flow rule
is supported only on BlueField-3 or newer NICs.
On older NICs, the workaround you mentioned (with 2 separate groups) must
be applied.
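
For illustration, an untested sketch of that two-group split in testpmd syntax
could look roughly like the following (the template IDs, table IDs, group
number and mask values are placeholders, and the exact item list in the
non-root group may need adjusting for your setup). Group 0 matches only the
IPv6 destination address and jumps to group 1, which matches the IPv6 source
address plus TCP and forwards to the target port:

# Group 0: match the IPv6 destination address only, then jump to group 1.
flow pattern_template 0 create transfer relaxed no pattern_template_id 11 template represented_port ethdev_port_id is 1 / eth type is 0x86dd / ipv6 dst is ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff / end
flow actions_template 0 create transfer actions_template_id 11 template jump group 1 / end mask jump group 0xffffffff / end
flow template_table 0 create group 0 priority 0 transfer wire_orig table_id 6 rules_number 8 pattern_template 11 actions_template 11

# Group 1: match the IPv6 source address and TCP, then forward.
flow pattern_template 0 create transfer relaxed no pattern_template_id 12 template ipv6 src is ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff proto is 0x0006 / tcp src is 0xffff / end
flow actions_template 0 create transfer actions_template_id 12 template represented_port / end mask represented_port / end
flow template_table 0 create group 1 priority 0 transfer table_id 7 rules_number 8 pattern_template 12 actions_template 12

# Enqueue one rule per group, then push.
flow queue 0 create 0 template_table 6 pattern_template 0 actions_template 0 postpone no pattern represented_port ethdev_port_id is 0 / eth type is 0x86dd / ipv6 dst is 2001:0db8:beef:0001::1 / end actions jump group 1 / end
flow queue 0 create 0 template_table 7 pattern_template 0 actions_template 0 postpone no pattern ipv6 src is fdbe:ef00:dead:beef::2 proto is 0x0006 / tcp src is 0x07d2 / end actions represented_port ethdev_port_id 2 / end
flow push 0 queue 0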

Best regards,
Dariusz Sosnowski
