Re: [dpdk-users] DPDK: MPLS packet processing
> -----Original Message-----
> From: Thomas Monjalon
> Sent: Monday, January 18, 2021 12:08 PM
> To: raktim bhatt; Raslan Darawsheh
> Cc: users@dpdk.org; Slava Ovsiienko; Asaf Penso
> Subject: Re: [dpdk-users] DPDK: MPLS packet processing
>
> 18/01/2021 09:46, Raslan Darawsheh:
> > From: raktim bhatt
> > >
> > > Hi All,
> > >
> > > I am trying to build a multi-RX-queue DPDK program, using RSS to
> > > split the incoming traffic into RX queues on a single port. A
> > > Mellanox ConnectX-5 and DPDK version 19.11 are used for this
> > > purpose. It works fine when I use IP-over-Ethernet packets as
> > > input. However, when the packet contains IP over MPLS over
> > > Ethernet, RSS does not seem to work. As a result, packets
> > > belonging to different flows (with different src & dst IPs and
> > > ports under the MPLS header) are all sent to the same RX queue.
> > >
> > > My queries are:
> > >
> > > 1. Are there any parameters/techniques in DPDK to distribute MPLS
> > > packets to multiple RX queues?
> >
> > I've tried it on my setup with testpmd:
> >
> > ./build/app/dpdk-testpmd -n 4 -w :08:00.0 -- --mbcache=512 -i \
> >     --nb-cores=27 --rxq=4 --txq=4 --rss-ip
> > testpmd> set verbose 1
> > testpmd> start
> >
> > then tried to send two MPLS packets with different src IPs:
> >
> > packet1 = Ether()/MPLS()/IP(src='1.1.1.1')
> > packet2 = Ether()/MPLS()/IP(src='1.1.1.2')
> >
> > and I see that both packets are being spread over the queues; see the
> > testpmd dump output below:
> >
> > testpmd> port 0/queue 3: received 1 packets
> > src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
> > nb_segs=1 - RSS hash=0x43781943 - RSS queue=0x3 - hw ptype: L2_ETHER
> > L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
> > Receive queue=0x3
> > ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > port 0/queue 1: received 1 packets
> > src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
> > nb_segs=1 - RSS hash=0xb8631e05 - RSS queue=0x1 - hw ptype: L2_ETHER
> > L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
> > Receive queue=0x1
> > ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> >
> > The first packet was received on queue 3 and the second one on queue 1.
> > By the way, this is with both v19.11.0 and v19.11.6.
> >
> > > 2. Is there any way to strip off MPLS tags (between Eth and IP) in
> > > hardware, something like hw_vlan_strip?
> >
> > For this I'm not sure we have such a thing in DPDK; maybe Thomas can
> > confirm this here?
>
> Look for "POP_MPLS" in rte_flow.

Thanks for pointing to it, I've just noticed it. Unfortunately, we don't
have support for this action in the MLX5 PMD.

Kindest regards,
Raslan Darawsheh
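For PMDs that do support it, the POP_MPLS action Thomas points to is expressed through the rte_flow API. The sketch below is untested and for illustration only: the helper name and the choice of a QUEUE action are my own, and `rte_flow_validate()` with the same arguments should be used first to check whether a given PMD (not mlx5, per the above) accepts the rule.

```c
#include <rte_flow.h>
#include <rte_ether.h>
#include <rte_byteorder.h>

/* Hypothetical helper: match Ethernet/MPLS ingress traffic on port_id,
 * pop the MPLS header, and steer the resulting packet to queue_id.
 * Returns NULL (with *err filled in) if the PMD rejects the rule. */
static struct rte_flow *
install_pop_mpls_rule(uint16_t port_id, uint16_t queue_id,
                      struct rte_flow_error *err)
{
    const struct rte_flow_attr attr = { .ingress = 1 };

    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_MPLS },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* After popping the MPLS header the frame carries plain IPv4. */
    const struct rte_flow_action_of_pop_mpls pop = {
        .ethertype = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4),
    };
    const struct rte_flow_action_queue queue = { .index = queue_id };

    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_OF_POP_MPLS, .conf = &pop },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```

Calling `rte_flow_validate(port_id, &attr, pattern, actions, err)` beforehand reports, without creating anything, whether the PMD supports the combination.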
Re: [dpdk-users] DPDK: MPLS packet processing
18/01/2021 09:46, Raslan Darawsheh:
> From: raktim bhatt
> >
> > Hi All,
> >
> > I am trying to build a multi-RX-queue DPDK program, using RSS to
> > split the incoming traffic into RX queues on a single port. A
> > Mellanox ConnectX-5 and DPDK version 19.11 are used for this purpose.
> > It works fine when I use IP-over-Ethernet packets as input. However,
> > when the packet contains IP over MPLS over Ethernet, RSS does not
> > seem to work. As a result, packets belonging to different flows (with
> > different src & dst IPs and ports under the MPLS header) are all sent
> > to the same RX queue.
> >
> > My queries are:
> >
> > 1. Are there any parameters/techniques in DPDK to distribute MPLS
> > packets to multiple RX queues?
>
> I've tried it on my setup with testpmd:
>
> ./build/app/dpdk-testpmd -n 4 -w :08:00.0 -- --mbcache=512 -i \
>     --nb-cores=27 --rxq=4 --txq=4 --rss-ip
> testpmd> set verbose 1
> testpmd> start
>
> then tried to send two MPLS packets with different src IPs:
>
> packet1 = Ether()/MPLS()/IP(src='1.1.1.1')
> packet2 = Ether()/MPLS()/IP(src='1.1.1.2')
>
> and I see that both packets are being spread over the queues; see the
> testpmd dump output below:
>
> testpmd> port 0/queue 3: received 1 packets
> src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
> nb_segs=1 - RSS hash=0x43781943 - RSS queue=0x3 - hw ptype: L2_ETHER
> L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
> Receive queue=0x3
> ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> port 0/queue 1: received 1 packets
> src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
> nb_segs=1 - RSS hash=0xb8631e05 - RSS queue=0x1 - hw ptype: L2_ETHER
> L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
> Receive queue=0x1
> ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
>
> The first packet was received on queue 3 and the second one on queue 1.
> By the way, this is with both v19.11.0 and v19.11.6.
>
> > 2. Is there any way to strip off MPLS tags (between Eth and IP) in
> > hardware, something like hw_vlan_strip?
>
> For this I'm not sure we have such a thing in DPDK; maybe Thomas can
> confirm this here?

Look for "POP_MPLS" in rte_flow.
Re: [dpdk-users] DPDK: MPLS packet processing
++,
Hi Raktim,

> -----Original Message-----
> From: users On Behalf Of raktim bhatt
> Sent: Saturday, January 16, 2021 10:46 AM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK: MPLS packet processing
>
> Hi All,
>
> I am trying to build a multi-RX-queue DPDK program, using RSS to split
> the incoming traffic into RX queues on a single port. A Mellanox
> ConnectX-5 and DPDK version 19.11 are used for this purpose. It works
> fine when I use IP-over-Ethernet packets as input. However, when the
> packet contains IP over MPLS over Ethernet, RSS does not seem to work.
> As a result, packets belonging to different flows (with different
> src & dst IPs and ports under the MPLS header) are all sent to the
> same RX queue.
>
> My queries are:
>
> 1. Are there any parameters/techniques in DPDK to distribute MPLS
> packets to multiple RX queues?

I've tried it on my setup with testpmd:

./build/app/dpdk-testpmd -n 4 -w :08:00.0 -- --mbcache=512 -i \
    --nb-cores=27 --rxq=4 --txq=4 --rss-ip
testpmd> set verbose 1
testpmd> start

then tried to send two MPLS packets with different src IPs:

packet1 = Ether()/MPLS()/IP(src='1.1.1.1')
packet2 = Ether()/MPLS()/IP(src='1.1.1.2')

and I see that both packets are being spread over the queues; see the
testpmd dump output below:

testpmd> port 0/queue 3: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
nb_segs=1 - RSS hash=0x43781943 - RSS queue=0x3 - hw ptype: L2_ETHER
L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
Receive queue=0x3
ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
PKT_RX_OUTER_L4_CKSUM_UNKNOWN
port 0/queue 1: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x8847 - length=60 -
nb_segs=1 - RSS hash=0xb8631e05 - RSS queue=0x1 - hw ptype: L2_ETHER
L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER - l2_len=14 -
Receive queue=0x1
ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
PKT_RX_OUTER_L4_CKSUM_UNKNOWN

The first packet was received on queue 3 and the second one on queue 1.
By the way, this is with both v19.11.0 and v19.11.6.

> 2. Is there any way to strip off MPLS tags (between Eth and IP) in
> hardware, something like hw_vlan_strip?

For this I'm not sure we have such a thing in DPDK; maybe Thomas can
confirm this here?

> My port configuration is:
>
> const struct rte_eth_conf default_port_conf = {
>     .rxmode = {
>         .hw_vlan_strip = 0,    /* VLAN strip disabled. */
>         .header_split = 0,     /* Header split disabled. */
>         .hw_ip_checksum = 0,   /* IP checksum offload disabled. */
>         .hw_strip_crc = 0,     /* CRC stripping by hardware disabled. */
>     },
>     .rx_adv_conf = {
>         .rss_conf = {
>             .rss_key = NULL,
>             .rss_key_len = 0,
>             .rss_hf = ETH_RSS_IP,
>         },
>     },
> };
>
> Thanks and Regards,
> Raktim Bhatt
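A side note on the quoted configuration: the rxmode bitfields it sets (hw_vlan_strip, header_split, hw_ip_checksum, hw_strip_crc) belong to pre-18.08 DPDK and no longer exist in 19.11, and no .mq_mode is set, while RSS distribution generally requires ETH_MQ_RX_RSS. A 19.11-style equivalent (an untested sketch, not a verified fix for the MPLS issue) would be:

```c
#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS, /* without this, rss_conf may be ignored */
        .offloads = 0,            /* no VLAN strip / checksum offloads */
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,      /* keep the PMD's default key */
            .rss_key_len = 0,
            .rss_hf = ETH_RSS_IP,
        },
    },
};
```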
Re: [dpdk-users] Integration from Dpdk18.05 to Dpdk19.11 - rte_timer_subsystem_init(void)
On Sun, Jan 17, 2021 at 11:04 PM Li, Jiu (NSB - CN/Hangzhou) wrote:
>
> Hello! DPDK experts,
>
> On DPDK 18.05: void rte_timer_subsystem_init(void)
> On DPDK 19.11: int rte_timer_subsystem_init(void) - the implementation
> changed, and it now returns 0, -EALREADY or -ENOMEM.
>
> We still have a DPDK "process" mode deployment (instead of DPDK thread
> mode) on my side, so I have a question:
>
> Is calling rte_timer_subsystem_init() one time enough? After
> rte_timer_subsystem_init() is called and returns 0 in one process, are
> the other processes able to use the "rte timer" service without issues?

Copied timer library maintainers.

--
David Marchand
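For what it's worth, a startup sketch for the multi-process case (untested; the helper name is mine, and treating -EALREADY as success is an assumption based on 19.11 keeping the timer state in shared memory, so a second process finding it already initialized is expected):

```c
#include <stdlib.h>
#include <rte_common.h>
#include <rte_timer.h>

/* Call in every process after rte_eal_init(). In 19.11,
 * rte_timer_subsystem_init() returns 0 on first initialization,
 * -EALREADY if the shared state already exists, -ENOMEM on failure. */
static void init_timer_subsystem(void)
{
    int ret = rte_timer_subsystem_init();

    if (ret < 0 && ret != -EALREADY)
        rte_exit(EXIT_FAILURE, "rte_timer_subsystem_init: %d\n", ret);
}
```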
Re: [dpdk-users] A compilation problem on arm64
On Sun, Jan 17, 2021 at 11:04 PM LemmyHuang wrote:
>
> Dear Concerns,
>
> I have a compilation problem on arm64.
> My environment is dpdk-19.11, gcc-9.3.1 and kernel-5.10.0-0.0.0.7.aarch64.
> The errors are as follows:
>
> ...
>   CC [M]  /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/kni/kni_misc.o
>   CC [M]  /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/kni/kni_net.o
>   LD librte_common_octeontx2.so.20.0
>   INSTALL-LIB librte_common_octeontx2.so.20.0
> == Build drivers/bus
> In file included from ./include/linux/atomic.h:7,
>                  from ./include/asm-generic/bitops/atomic.h:5,
>                  from ./arch/arm64/include/asm/bitops.h:26,
>                  from ./include/linux/bitops.h:29,
>                  from ./include/linux/kernel.h:12,
>                  from ./include/linux/list.h:9,
>                  from ./include/linux/rculist.h:10,
>                  from ./include/linux/pid.h:5,
>                  from ./include/linux/sched.h:14,
>                  from ./include/linux/ratelimit.h:6,
>                  from ./include/linux/dev_printk.h:16,
>                  from ./include/linux/device.h:15,
>                  from /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/igb_uio/igb_uio.c:8:
> ./include/linux/atomic-arch-fallback.h: In function 'igbuio_pci_open':
> ./arch/arm64/include/asm/atomic.h:20:20: error: inlining failed in call to 'arch_atomic_sub.constprop': --param max-inline-insns-single limit reached [-Werror=inline]
>    20 | static inline void arch_##op(int i, atomic_t *v)  \
>       |                    ^
> ./arch/arm64/include/asm/atomic.h:30:1: note: in expansion of macro 'ATOMIC_OP'
>    30 | ATOMIC_OP(atomic_sub)
>       | ^
> In file included from ./include/linux/atomic.h:81,
>                  from ./include/asm-generic/bitops/atomic.h:5,
>                  from ./arch/arm64/include/asm/bitops.h:26,
>                  from ./include/linux/bitops.h:29,
>                  from ./include/linux/kernel.h:12,
>                  from ./include/linux/list.h:9,
>                  from ./include/linux/rculist.h:10,
>                  from ./include/linux/pid.h:5,
>                  from ./include/linux/sched.h:14,
>                  from ./include/linux/ratelimit.h:6,
>                  from ./include/linux/dev_printk.h:16,
>                  from ./include/linux/device.h:15,
>                  from /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/igb_uio/igb_uio.c:8:
> ./include/linux/atomic-arch-fallback.h:441:2: note: called from here
>   441 |         arch_atomic_sub(1, v);
>       |         ^
> cc1: all warnings being treated as errors
> make[6]: *** [scripts/Makefile.build:279: /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/igb_uio/igb_uio.o] Error 1
> make[5]: *** [Makefile:1805: /home/abuild/rpmbuild/BUILD/dpdk-19.11/arm64-armv8a-linux-gcc/build/kernel/linux/igb_uio] Error 2
> make[4]: *** [/home/abuild/rpmbuild/BUILD/dpdk-19.11/mk/rte.module.mk:51: igb_uio.ko] Error 2
> make[3]: *** [/home/abuild/rpmbuild/BUILD/dpdk-19.11/mk/rte.subdir.mk:37: igb_uio] Error 2
> make[3]: *** Waiting for unfinished jobs
> ...

This is hard to read; please paste raw output. This error does not ring a
bell; it is probably kernel/arch specific. I have copied Luca, who
maintains 19.11.

On the other hand, rather than fixing the igb_uio build, why don't you
use vfio-pci?

--
David Marchand
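As a concrete form of the vfio-pci suggestion: binding a port to vfio-pci needs no out-of-tree module, only the in-kernel vfio-pci driver and DPDK's devbind script. The PCI address below is a placeholder; substitute the one reported by `dpdk-devbind.py -s` for your NIC.

```shell
# Load the in-kernel driver (IOMMU enabled, e.g. iommu.passthrough=0 on arm64)
modprobe vfio-pci

# From the DPDK source tree: show current bindings, then rebind the port
usertools/dpdk-devbind.py -s
usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```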