Hi, Jan

To use the port_id action (I see it is in your sample action list), the flow
should be applied to the FDB domain, i.e. the "transfer" attribute should be
specified:

flow validate 0 ingress transfer...
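
For illustration, here is a minimal sketch of the same rule through the rte_flow
C API; the function name, the mirror target port id (1), the empty pattern and
the drop fate action are illustrative assumptions taken from the testpmd
commands quoted below:

  #include <rte_flow.h>

  /* Validate an ingress FDB rule that mirrors every packet to port 1
   * (assumed mirror target) and drops the original. */
  static int
  validate_mirror_flow(uint16_t port_id, struct rte_flow_error *err)
  {
          /* "transfer" places the rule into the FDB (E-Switch) domain,
           * which the PORT_ID action requires on mlx5. */
          struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };

          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };

          /* Sub-actions applied to the sampled (mirrored) copy;
           * this is what "set sample_actions 0 ..." builds in testpmd. */
          struct rte_flow_action_port_id mirror_port = { .id = 1 };
          struct rte_flow_action sample_sub[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &mirror_port },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          /* ratio = 1 samples every packet, i.e. full mirroring. */
          struct rte_flow_action_sample sample_conf = {
                  .ratio = 1,
                  .actions = sample_sub,
          };

          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample_conf },
                  { .type = RTE_FLOW_ACTION_TYPE_DROP },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_validate(port_id, &attr, pattern, actions, err);
  }

The testpmd sample_actions list corresponds to the sub-action array passed via
struct rte_flow_action_sample::actions.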

With best regards, Slava

> -----Original Message-----
> From: Jan Viktorin <vikto...@cesnet.cz>
> Sent: Monday, March 1, 2021 14:21
> To: Asaf Penso <as...@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <or...@nvidia.com>; Jiawei(Jonny) Wang
> <jiaw...@nvidia.com>; Slava Ovsiienko <viachesl...@nvidia.com>
> Subject: Re: [dpdk-dev] Duplicating traffic with RTE Flow
> 
> Hello Asaf,
> 
> it has been a while since we were last in touch regarding this topic. Finally,
> I am again trying to get this feature working. I've seen that sampling is
> already upstreamed, which is great. However, I am not very successful with it.
> There is nearly no documentation, just [1]; I found no examples, just commit logs...
> 
> I tried:
> 
>  > set sample_actions 0 port_id id 1 / end
>  > flow validate 0 ingress pattern end actions sample ratio 1 index 0 / drop / end
>  port_flow_complain(): Caught PMD error type 1 (cause unspecified): port id action is valid in transfer mode only: Operation not supported
>  > flow validate 0 ingress transfer pattern end actions sample ratio 1 index 0 / drop / end
>  port_flow_complain(): Caught PMD error type 1 (cause unspecified): (no stated reason): Operation not supported
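> 
> (For reference, the port_flow_complain() lines above are testpmd printing the
> struct rte_flow_error that rte_flow_validate() fills in on failure; roughly, as
> a sketch where attr/pattern/actions stand for the rule built by the command:
> 
>   struct rte_flow_error err = { 0 };
>   if (rte_flow_validate(0 /* port */, &attr, pattern, actions, &err) != 0)
>       printf("Caught PMD error type %d: %s\n", (int)err.type,
>              err.message != NULL ? err.message : "(no stated reason)");
> 
> so the interesting part is usually err.message.)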
> 
> Using CentOS 7, DPDK 20.11.0, OFED-5.2-1.0.4.
> NICs: MT2892 Family [ConnectX-6 Dx] 101d (fw 22.28.1002), MT27800 Family
> [ConnectX-5] 1017 (fw 16.27.2008).
> 
> My primary goal is to be able to deliver exactly the same packets both to the
> DPDK application and to the Linux kernel. Doing this at the RTE Flow level
> would be great for performance and transparency.
> 
> Jan
> 
> [1] https://doc.dpdk.org/guides/prog_guide/rte_flow.html#action-sample
> 
> On Fri, 18 Sep 2020 14:23:42 +0000
> Asaf Penso <as...@nvidia.com> wrote:
> 
> > Hello Jan,
> >
> > You can have a look at series [1], where we propose to add APIs to
> > DPDK 20.11 for both mirroring and sampling of packets, with additional
> > actions applied to the duplicated traffic.
> >
> > [1]
> > http://patches.dpdk.org/project/dpdk/list/?series=12045
> >
> > Regards,
> > Asaf Penso
> >
> > >-----Original Message-----
> > >From: dev <dev-boun...@dpdk.org> On Behalf Of Jan Viktorin
> > >Sent: Friday, September 18, 2020 3:56 PM
> > >To: dev@dpdk.org
> > >Subject: [dpdk-dev] Duplicating traffic with RTE Flow
> > >
> > >Hello all,
> > >
> > >we are looking for a way to duplicate ingress traffic in hardware.
> > >
> > >There is an example in [1] that suggests inserting two fate actions into
> > >the RTE Flow actions array, like:
> > >
> > >  flow create 0 ingress pattern end \
> > >      actions queue index 0 / void / queue index 1 / end
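> > >
> > >In C API terms that would be a single actions array carrying two fate
> > >actions, roughly (a sketch):
> > >
> > >  struct rte_flow_action_queue q0 = { .index = 0 };
> > >  struct rte_flow_action_queue q1 = { .index = 1 };
> > >  struct rte_flow_action actions[] = {
> > >          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q0 },
> > >          { .type = RTE_FLOW_ACTION_TYPE_VOID },
> > >          { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q1 },
> > >          { .type = RTE_FLOW_ACTION_TYPE_END },
> > >  };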
> > >
> > >But our experience is that PMDs reject two fate actions (tried with
> > >mlx5). Another similar approach would be to deliver every single
> > >packet to two virtual functions:
> > >
> > >  flow create 0 ingress pattern end \
> > >     actions vf index 0 / vf index 1 / end
> > >
> > >A third possibility was to use passthru:
> > >
> > >  flow create 0 ingress pattern end \
> > >      actions passthru / vf index 0 / end
> > >  flow create 0 ingress pattern end \
> > >      actions vf index 1 / end
> > >
> > >Again, this was tried on mlx5, and it does not support passthru.
> > >
> > >The last idea was to use isolate with passthru (to deliver both to the DPDK
> > >application and to the kernel), but again there was no support for passthru
> > >on mlx5...
> > >
> > >  flow isolate 0 true
> > >  flow create 0 ingress pattern end actions passthru / rss end / end
> > >
> > >Is there any other possibility, or a PMD+NIC combination that is known to
> > >solve such an issue?
> > >
> > >Thanks
> > >Jan Viktorin
> > >
> > >[1]
> > >https://doc.dpdk.org/guides/prog_guide/rte_flow.html#table-rte-flow-redirect-queue-5-3
