[vpp-dev] Is it possible to configure a plugin to run after acl plugin?

2023-01-06 Thread hyongsop
Hi, We have a set of plugins running in the 'device-input' feature arc.  We also have a set of ACL rules applied on the inbound side of an interface.  It turns out we need to run a plugin so that it processes packets only after the ACL rules have been applied to the inbound traffic.  Is the
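For context, ordering between features in VPP is declared when the feature is registered, via runs_after / runs_before constraints on its arc. A minimal sketch of that mechanism follows; the node name "my-plugin-input", the choice of the "ip4-unicast" arc, and the ACL input node name "acl-plugin-in-ip4-fa" are assumptions here, so check the feature listing on your build (e.g. "show features" in the VPP CLI) for the exact names:

    /* Sketch only: declare that our feature node must run after the ACL
     * plugin's inbound node on the same feature arc. */
    VNET_FEATURE_INIT (my_plugin_input, static) =
    {
      .arc_name = "ip4-unicast",
      .node_name = "my-plugin-input",
      .runs_after = VNET_FEATURES ("acl-plugin-in-ip4-fa"),
    };

Note that ordering constraints only work within a single arc, so if the ACL rules are enforced on a later arc than 'device-input', the plugin's node would likely need to be registered on that arc.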

Re: [vpp-dev] LACP bonding not working with RDMA driver

2023-01-06 Thread Benoit Ganne (bganne) via lists.fd.io
> I could get LACP bonding working with the RDMA driver now. I was going through the RDMA plugin code and found there is a concept of "mode" in RDMA interfaces. When I modified the interface command like below, I could see the LACP bonding working: vpp# create int rdma host-if eth1 mode ibv

Re: [vpp-dev] Using TrafficGen with rdma driver

2023-01-06 Thread rtox
Hi Ben, I do not intend to use SRIOV VFs. Is it possible to use the rdma plugin directly with the PF NICs? With my shared config the NICs appear to come up in VPP (show interface) without any warnings etc.  Btw. testing on vanilla VPP 22.10, Ubuntu 20.04 Server. Thanks

Re: [vpp-dev] issue: ConnectX-5 interface state cannot come up

2023-01-06 Thread Chinmaya Aggarwal
Hi Li, The topic you pointed to was raised by me. I could get through this issue by using MLX5 with the RDMA plugin instead of DPDK. With RDMA it is working fine for me. You can try using RDMA if it suits your use case. Thanks and Regards, Chinmaya Agarwal.

Re: [vpp-dev] LACP bonding not working with RDMA driver

2023-01-06 Thread Chinmaya Aggarwal
Hi, I could get LACP bonding working with the RDMA driver now. I was going through the RDMA plugin code and found there is a concept of "mode" in RDMA interfaces. When I modified the interface command like below, I could see the LACP bonding working: vpp# create int rdma host-if eth1 mode ibv vpp
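For readers following along, a sketch of the full CLI sequence this implies (the host interface names eth1/eth2, the chosen rdma interface names, and the default bond name BondEthernet0 are assumptions; the key detail from this thread is "mode ibv"):

    vpp# create interface rdma host-if eth1 name rdma-eth1 mode ibv
    vpp# create interface rdma host-if eth2 name rdma-eth2 mode ibv
    vpp# create bond mode lacp load-balance l34
    vpp# bond add BondEthernet0 rdma-eth1
    vpp# bond add BondEthernet0 rdma-eth2
    vpp# set interface state rdma-eth1 up
    vpp# set interface state rdma-eth2 up
    vpp# set interface state BondEthernet0 up
    vpp# show lacp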

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2023-01-06 Thread Zhang, Fan
Hi Benoit, What I state below is all based on our understanding of FVL/CVL, not MLX NICs. It is not the HW queue, as the queue size can be bigger than 256. It is an interim buffer (please forgive me, I forgot the official term for it) that the NIC fills with descriptors and the CPU fetches

Re: [vpp-dev] Using TrafficGen with rdma driver

2023-01-06 Thread Benoit Ganne (bganne) via lists.fd.io
Do you use SRIOV VFs? If so, make sure spoof-checking etc. is off. See https://s3-docs.fd.io/vpp/23.02/developer/devicedrivers/rdma.html for more details. Best ben
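For reference, turning spoof checking off on a VF is done on the host with iproute2; a sketch, where the PF name enp94s0f0 and VF index 0 are placeholders (see the rdma doc linked above for the authoritative steps):

    # on the host, for each VF handed to VPP
    ip link set dev enp94s0f0 vf 0 spoofchk off
    ip link set dev enp94s0f0 vf 0 trust on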

Re: [vpp-dev] Slow VPP performance vs. DPDK l2fwd / l3fwd

2023-01-06 Thread Benoit Ganne (bganne) via lists.fd.io
Interesting! Thanks Fan for bringing that up. So if I understand correctly, with the previous DPDK behavior we could have, say, 128 packets in the rxq, VPP would request 256, get 32, and then request 224 (256-32) again, etc. While VPP requests more packets, the NIC has the opportunity to add packets
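To illustrate the behavior being described, here is a sketch (not actual VPP or DPDK plugin code) of the "keep asking for the remainder" pattern using the standard rte_eth_rx_burst() call; the function and variable names are made up for the example:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Sketch: request up to n_wanted packets, retrying for the remainder
     * until the NIC stops returning anything. While the CPU keeps asking,
     * the NIC may complete more descriptors, which is the effect discussed
     * above. */
    static uint32_t
    rx_fill_vector (uint16_t port_id, uint16_t queue_id,
                    struct rte_mbuf **mbufs, uint32_t n_wanted)
    {
      uint32_t n_rx = 0;
      while (n_rx < n_wanted)
        {
          uint16_t n = rte_eth_rx_burst (port_id, queue_id,
                                         mbufs + n_rx, n_wanted - n_rx);
          if (n == 0)
            break;            /* nothing more available right now */
          n_rx += n;          /* got some; ask again for the rest */
        }
      return n_rx;
    }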

[vpp-dev] Using TrafficGen with rdma driver

2023-01-06 Thread rtox
Hey VPP community, anyone out there using a TRex setup with the rdma driver? The setup works just fine on legacy DPDK drivers, as documented here: https://fd.io/docs/vpp/v2101/usecases/simpleperf/trex.html After switching over to rdma (as advised for Mellanox cards) I get the TRex warning that "Failed