Hi all,

I'm a beginner with DPDK. My team and I developed a DPDK app with the
following pipeline:

NIC RX -> RX Thread -> Worker Thread -> TX Thread -> NIC TX.

Within the RX thread, we parse some headers. Within the worker thread,
we use the hierarchical scheduler. To sum up, we want to replace the
hierarchical scheduler with the traffic manager (rte_tm).
However, it seems the TM can only be set up on a NIC port. This is not
what we want, because we still do some packet processing within the TX
thread after scheduling.
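
For context: as far as we understand, every rte_tm call takes an ethdev
port id, which is why the hierarchy ends up bound to a port. A minimal
sketch of what we mean (the check_tm helper below is only for
illustration, it is not from our app):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_tm.h>

/* The TM hierarchy hangs off an ethdev port id, so it can only live
 * where there is a port: a physical NIC, or a vdev such as the SoftNIC. */
static int
check_tm(uint16_t port_id)
{
    struct rte_tm_capabilities cap;
    struct rte_tm_error err = { 0 };

    memset(&cap, 0, sizeof(cap));
    if (rte_tm_capabilities_get(port_id, &cap, &err) != 0) {
        printf("port %u: no TM support (%s)\n", port_id,
               err.message ? err.message : "unknown");
        return -1;
    }
    printf("port %u: TM with %u levels, up to %u nodes\n",
           port_id, cap.n_levels_max, cap.n_nodes_max);
    return 0;
}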


Thus, we thought about the SoftNIC PMD as a solution to our problem.
Would it be possible to build a pipeline like this?

NIC RX -> RX Thread -> SoftNIC with TM -> Worker Thread -> TX Thread -> NIC TX.
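
Our rough idea, assuming the SoftNIC shows up as a regular ethdev port,
is that the surrounding threads would simply burst packets into and out
of it, with one core driving the SoftNIC data plane. A sketch under
that assumption (port ids, queue ids and helper names are made up):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_eth_softnic.h>   /* rte_pmd_softnic_run() */

#define BURST 32

/* RX thread side: hand the parsed packets to the SoftNIC port,
 * where the TM hierarchy would do the scheduling. */
static void
rx_to_softnic(uint16_t softnic_port, struct rte_mbuf **pkts, uint16_t n)
{
    uint16_t sent = rte_eth_tx_burst(softnic_port, 0, pkts, n);

    while (sent < n)
        rte_pktmbuf_free(pkts[sent++]);   /* drop what didn't fit */
}

/* Worker thread side: drive the SoftNIC data plane from this core,
 * then pull back the scheduled packets. */
static uint16_t
softnic_to_worker(uint16_t softnic_port, struct rte_mbuf **pkts)
{
    rte_pmd_softnic_run(softnic_port);
    return rte_eth_rx_burst(softnic_port, 0, pkts, BURST);
}

If this is not how the SoftNIC is meant to be driven from an
application, corrections are welcome.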

It looks like the "firmware.cli" script and the packet framework give us
some freedom to build such a pipeline.
First of all, I tried to test the SoftNIC with the following command
from the doc:

./testpmd -c 0x3 \
    --vdev 'net_softnic0,firmware=<script path>/firmware.cli,cpu_id=0,conn_port=8086' \
    -- -i --forward-mode=softnic --portmask=0x2

Below are my network devices, as reported by dpdk-devbind.py:

./usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:19:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci unused=i40e
0000:19:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci unused=i40e

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eno3 drv=igb unused=vfio-pci *Active*

Other Network devices
=====================
0000:01:00.1 'I350 Gigabit Network Connection 1521' unused=igb,vfio-pci

Then, the testpmd command and its log:

sudo ./dpdk-testpmd \
    --vdev 'net_softnic0,firmware=./firmware.cli,cpu_id=0,conn_port=8087' \
    -- -i --forward-mode=softnic --portmask=0x2
[sudo] password for user:
EAL: Detected CPU lcores: 32
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:19:00.0 (socket 0)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:19:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Invalid softnic packet forwarding mode
previous number of forwarding ports 3 - changed to number of configured ports 1
testpmd: create a new mbuf pool <mb_pool_0>: n=395456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=395456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: E4:43:4B:04:D1:4E
Configuring Port 1 (socket 0)
Port 1: E4:43:4B:04:D1:50
Configuring Port 2 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation

link LINK0 dev 0000:19:00.0

pipeline RX period 10 offset_port_id 0
pipeline RX port in bsz 32 link LINK0 rxq 0
pipeline RX port out bsz 32 swq RXQ0
pipeline RX table match stub
pipeline RX port in 0 table 0
pipeline RX table 0 rule add match default action fwd port 0

pipeline TX period 10 offset_port_id 0
pipeline TX port in bsz 32 swq TXQ0
pipeline TX port out bsz 32 link LINK0 txq 0
pipeline TX table match stub
pipeline TX port in 0 table 0
pipeline TX table 0 rule add match default action fwd port 0

thread 1 pipeline RX enable
Command "thread pipeline enable" failed.
thread 1 pipeline TX enable
Command "thread pipeline enable" failed.
Port 2: 00:00:00:00:00:00
Checking link statuses...
Done
testpmd>
Port 0: link state change event

Port 1: link state change event


Can anyone please help us with this?

Regards,

Max.
