Hello,

> In our intentions:
> Our process --> Memif --> VPP (TCP to eth) --> ??? custom plug-in --> Our radio stack (including driver)
>
> I walked through the VPP docs, but it is not clear to me if there is
> already something that can let us interface with the bottom of the stack.
From what I understand you'd like your plugin:
1) to receive all packets coming from "our process"
2) to send those packets directly to a specific driver?

For (1) you have several options, but I suspect the easiest is to hook your plugin onto the device-input feature arc. This arc sees all packets coming from input nodes, including the memif input node. See
https://s3-docs.fd.io/vpp/23.02/developer/corearchitecture/featurearcs.html

For (2), you could integrate your radio stack as a VPP device driver and hook your plugin to it as a next node.

The VPP l2patch feature could be a good example of both:
https://git.fd.io/vpp/tree/src/vnet/l2/l2_patch.c

> 2. How to configure VPP for multiple instances? (Any doc on it
> specifically?)

What do you mean by multi-instance? You can run multiple workers (threads):
https://s3-docs.fd.io/vpp/23.02/configuration/reference.html#the-cpu-section
If you want to run multiple VPP processes in parallel, you can definitely do so, but you'll have to pay attention to core pinning.

> 3. How does the RSS mechanism work?

RSS depends upon the hardware and driver, but usually a hash is computed over the packet 5-tuple (often using Toeplitz) and this hash indexes a receive queue:
- all packets of a single flow (e.g. a TCP connection) end up in the same rx queue (you really want that)
- it is statistical: with lots of flows, flows are balanced more or less evenly between queues, but you'll get collisions and imbalance, especially with a low number of flows

> In some tries I've seen performance over a 100 Gbps NIC with
> DPDK vary from 15 to 36 Gbps (iperf3):
> same conditions, but some tests showed increased performance.
> Any configuration tweak?

How many flows (TCP connections) do you use? If only a few, you're going to have lots of RSS collisions, depending upon the ports selected.

Best
ben
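PS: to make (1) a bit more concrete, here is roughly what the feature-arc registration looks like inside a plugin. This is a sketch only, not buildable outside a VPP plugin tree; "my-plugin-input" and the symbol names are placeholders you'd replace with your own (compare with the real registrations in l2_patch.c):

```c
/* Sketch: register a node on the "device-input" feature arc so it sees
 * every packet delivered by input nodes (memif included).
 * "my-plugin-input" is a placeholder node name. */
VNET_FEATURE_INIT (my_plugin_input, static) = {
  .arc_name = "device-input",
  .node_name = "my-plugin-input",
  .runs_before = VNET_FEATURES ("ethernet-input"),
};
```

You then enable the feature per interface, e.g. vnet_feature_enable_disable ("device-input", "my-plugin-input", sw_if_index, 1 /* enable */, 0, 0); — again, l2_patch.c shows the full pattern including the next-node wiring for (2).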
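PS: for the workers question, this lives in the cpu section of startup.conf. A minimal example (the core numbers are just an illustration, adapt them to your machine's topology):

```
cpu {
  main-core 1
  corelist-workers 2-5
}
```

If you go the multi-process route instead, give each VPP instance its own startup.conf with disjoint core lists, otherwise the instances will fight over the same cores.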
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22182): https://lists.fd.io/g/vpp-dev/message/22182
Mute This Topic: https://lists.fd.io/mt/95024801/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-