+1, the aggregate RX rate seems to be around 12 KPPS, the vector rate is small. Absent I/O silliness, one core should handle this load with no problem.
D.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via Lists.Fd.Io
Sent: Wednesday, August 1, 2018 4:27 PM
To: chakravarthy.arise...@viasat.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] tx-errors on VPP controlled dpdk device

In VPP, a packet stays on the core where it was received in the majority of cases. Handing a packet over to a different core is an expensive operation, so we try to avoid it. You likely need to utilise RSS on the rx side to load your cores equally, but in this specific case VPP is not overloaded; your vector rate is ~2....

--
Damjan

On 1 Aug 2018, at 20:22, chakravarthy.arise...@viasat.com wrote:

Hi Damjan,

Thanks for your feedback. I'm running the test on AWS instances, so I only have VFs; I do not have access to the PF. I'm trying to get help from AWS to find out, and I'll post the info here once I have it.

In the meantime, I looked at the counters you suggested I focus on. It looks like packets are scheduled on only one core in the transmit direction. Is there a way to change that? I have 3 dedicated cores (1 main thread for stats/mgmt and 2 worker threads). All the tx queues are pinned to worker thread 1, so worker thread 2 is not used in the transmit path at all. Is there a way to spread the transmit queues across the threads?
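[Editor's note: rx-side queue-to-worker placement can be inspected and adjusted from the debug CLI, as sketched below. The `rx-placement` commands exist in recent VPP releases; their availability in this particular build, and the queue/worker numbers shown, are assumptions for illustration.]

```
vpp# show interface rx-placement
vpp# set interface rx-placement VirtualFunctionEthernet0/6/0 queue 1 worker 1
```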
Thanks,
Chakri

vpp# sh threads
ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State
0      vpp_main                        1733    other (0)                1      1      0
1      vpp_wk_0            workers     1745    other (0)                2      2      0
2      vpp_wk_1            workers     1746    other (0)                3      3      0
3      stats                           1747    other (0)                0      0      0

vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 5125.9, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.0000e0, out 0.0000e0, drop 0.0000e0, punt 0.0000e0
             Name                 State         Calls       Vectors     Suspends       Clocks   Vectors/Call
api-rx-from-ring                 any wait              0           0          364       1.19e4          0.00
cdp-process                      any wait              0           0          992       1.98e3          0.00
dhcp-client-process              any wait              0           0           51       3.41e3          0.00
dns-resolver-process             any wait              0           0            5       4.06e3          0.00
dpdk-process                     any wait              0           0         1709       5.13e4          0.00
fib-walk                         any wait              0           0         2563       1.37e3          0.00
ikev2-manager-process            any wait              0           0         5124       7.25e2          0.00
ip-route-resolver-process        any wait              0           0           51       2.64e3          0.00
ip4-reassembly-expire-walk       any wait              0           0          513       3.85e3          0.00
ip6-icmp-neighbor-discovery-ev   any wait              0           0         5124       6.92e2          0.00
ip6-reassembly-expire-walk       any wait              0           0          513       3.84e3          0.00
lisp-retry-service               any wait              0           0         2563       1.57e3          0.00
memif-process                    any wait              0           0         1709       2.10e3          0.00
rd-cp-process                    any wait              0           0    237212380       3.21e2          0.00
unix-cli-local:17                active                0           0          580       2.05e5          0.00
unix-epoll-input                 polling        96172305           0            0       1.19e4          0.00
vpe-oam-process                  any wait              0           0         2513       1.23e3          0.00
---------------
Thread 1 vpp_wk_0 (lcore 2)
Time 5125.9, average vectors/node 4.82, last 128 main loops 0.00 per node 0.00
  vector rates in 9.5578e3, out 8.4052e3, drop 0.0000e0, punt 0.0000e0
             Name                 State         Calls       Vectors     Suspends       Clocks   Vectors/Call
VirtualFunctionEthernet0/6/0-o   active               91          91            0       8.59e2          1.00
VirtualFunctionEthernet0/6/0-t   active               91          91            0       2.82e3          1.00
VirtualFunctionEthernet0/7/0-o   active          5334164    32661561            0       4.33e1          6.12
VirtualFunctionEthernet0/7/0-t   active          5334164    26753703            0       3.83e2          5.02
arp-input                        active              182         182            0       7.25e3          1.00
dpdk-input                       polling     16550217513    16330917            0       4.05e5          0.00
ethernet-input                   active          5334255    32661652            0       7.97e1          6.12
interface-output                 active              182         182            0       6.58e2          1.00
ip4-input                        active          4685453    16330735            0       9.48e1          3.49
ip4-load-balance                 active          5334073    32661470            0       4.85e1          6.12
ip4-local                        active          4685453    16330735            0       9.92e1          3.49
ip4-lookup                       active          4685453    16330735            0       1.05e2          3.49
ip4-rewrite                      active          5334073    32661470            0       5.57e1          6.12
ip4-udp-lookup                   active          4685453    16330735            0       8.96e1          3.49
l2-fwd                           active         10019526    48992205            0       5.56e1          4.89
l2-input                         active         10019526    48992205            0       6.03e1          4.89
l2-learn                         active         10019526    48992205            0       6.75e1          4.89
l2-output                        active         10019526    48992205            0       6.16e1          4.89
memif-input                      polling     16550217513    32661470            0       2.33e5          0.00
unix-epoll-input                 polling         1817493           0            0       1.18e4          0.00
vxlan4-encap                     active          5334073    32661470            0       1.09e2          6.12
vxlan4-input                     active          4685453    16330735            0       1.19e2          3.49
memif1/1-output                  active          4685453    16330735            0       1.34e2          3.49
memif1/1-tx                      active          4685453    16330735            0       1.53e3          3.49
---------------
Thread 2 vpp_wk_1 (lcore 3)
Time 5125.9, average vectors/node 1.67, last 128 main loops 0.00 per node 0.00
  vector rates in 3.1859e3, out 3.1859e3, drop 0.0000e0, punt 0.0000e0
             Name                 State         Calls       Vectors     Suspends       Clocks   Vectors/Call
dpdk-input                       polling     16679496489    16330735            0       4.24e5          0.00
ip4-input                        active          9785099    16330735            0       1.47e2          1.67
ip4-local                        active          9785099    16330735            0       1.33e2          1.67
ip4-lookup                       active          9785099    16330735            0       1.25e2          1.67
ip4-udp-lookup                   active          9785099    16330735            0       1.18e2          1.67
l2-fwd                           active          9785099    16330735            0       1.06e2          1.67
l2-input                         active          9785099    16330735            0       1.32e2          1.67
l2-learn                         active          9785099    16330735            0       1.31e2          1.67
l2-output                        active          9785099    16330735            0       9.05e1          1.67
memif-input                      polling     16679496489           0            0       4.38e2          0.00
unix-epoll-input                 polling         1130721           0            0       1.14e4          0.00
vxlan4-input                     active          9785099    16330735            0       1.45e2          1.67
memif1/1-output                  active          9785099    16330735            0       1.01e2          1.67
memif1/1-tx                      active          9785099    16330735            0       2.09e3          1.67

vpp# sh hardware
              Name                Idx   Link  Hardware
VirtualFunctionEthernet0/6/0       1     up   VirtualFunctionEthernet0/6/0
  Ethernet address 06:3a:20:ff:aa:d0
  AWS ENA VF
    carrier up full duplex speed 10000 mtu 9216
    rx queues 2, rx desc 1024, tx queues 3, tx desc 1024
    cpu socket 0
    tx frames ok                                           2
    tx bytes ok                                           84
    rx frames ok                                    21146632
    rx bytes ok                                  33665435044
    extended stats:
      rx good packets                             21146632
      tx good packets                                    2
      rx good bytes                            33665435044
      tx good bytes                                     84
VirtualFunctionEthernet0/7/0       2     up   VirtualFunctionEthernet0/7/0
  Ethernet address 06:90:5e:ca:8f:6c
  AWS ENA VF
    carrier up full duplex speed 10000 mtu 9216
    rx queues 2, rx desc 1024, tx queues 3, tx desc 1024
    cpu socket 0
    tx frames ok                                    17322383
    tx bytes ok                                  27577230636
    rx frames ok                                           2
    rx bytes ok                                           84
    extended stats:
      rx good packets                                    2
      tx good packets                             17322383
      rx good bytes                                     84
      tx good bytes                            27577230636
local0                             0    down  local0
  local
loop1                              3     up   loop1
  Ethernet address de:ad:00:00:00:01
loop2                              5     up   loop2
  Ethernet address de:ad:00:00:00:02
memif1/1                           7     up   memif1/1
  Ethernet address 02:fe:95:70:02:bc
  MEMIF interface
     instance 0
memif2/2                           8     up   memif2/2
  Ethernet address 02:fe:6d:04:8f:40
  MEMIF interface
     instance 1
vxlan_tunnel1                      4     up   vxlan_tunnel1
  VXLAN
vxlan_tunnel2                      6     up   vxlan_tunnel2
  VXLAN

vpp# show int
              Name               Idx    State  Counter          Count
VirtualFunctionEthernet0/6/0      1      up    rx packets            21146633
                                               rx bytes           33665435086
                                               tx packets                   3
                                               tx bytes                   126
                                               ip4                   21146630
VirtualFunctionEthernet0/7/0      2      up    rx packets                   3
                                               rx bytes                   126
                                               tx packets            21146633
                                               tx bytes           33665435086
                                               tx-error               3824249
local0                            0      up
loop1                             3      up
loop2                             5      up
memif1/1                          7      up    tx packets            21146630
                                               tx bytes           32608103460
memif2/2                          8      up    rx packets            21146630
                                               rx bytes           32608103460
vxlan_tunnel1                     4      up    rx packets            21146630
                                               rx bytes           32608103460
vxlan_tunnel2                     6      up    tx packets            21146630
                                               tx bytes           33369382140

vpp# show error
   Count                    Node                  Reason
 120878944             vxlan4-input               good packets decapsulated
 241757861             vxlan4-encap               good packets encapsulated
 362636805             l2-output                  L2 output packets
 362636805             l2-learn                   L2 learn packets
 362636805             l2-input                   L2 input packets
      2616             arp-input                  ARP replies sent
         8             arp-input                  ARP probe or announcement dropped
  18841192             VirtualFunctionEthernet0/7/0-tx Tx packet drops (dpdk tx failure)
 120878917             vxlan4-input               good packets decapsulated
 120878917             l2-output                  L2 output packets
 120878917             l2-learn                   L2 learn packets
 120878917             l2-input                   L2 input packets

Startup config snippet
======================
dev 0000:00:06.0 {
  num-rx-queues 2
  num-rx-desc 1024
  num-tx-desc 1024
}
dev 0000:00:07.0 {
  num-rx-queues 2
  num-rx-desc 1024
  num-tx-desc 1024
}

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10012): https://lists.fd.io/g/vpp-dev/message/10012
Mute This Topic: https://lists.fd.io/mt/23982730/675642
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [dmar...@me.com]
-=-=-=-=-=-=-=-=-=-=-=-
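[Editor's note: a quick sanity check of two figures from this thread, using only the counters quoted above: the aggregate RX rate is the sum of the two workers' "vector rates in" values from `sh run`, and the tx-error share on VirtualFunctionEthernet0/7/0 comes from `show int`. Treating `tx packets` and `tx-error` as disjoint counts is an assumption about how the counters are kept.]

```python
# Sanity-check two numbers quoted in the thread, from the pasted CLI output.

# 1) Aggregate RX rate: per-worker "vector rates in" from `sh run` (pkts/s).
rx_in_wk0 = 9.5578e3   # Thread 1 vpp_wk_0
rx_in_wk1 = 3.1859e3   # Thread 2 vpp_wk_1
aggregate_kpps = (rx_in_wk0 + rx_in_wk1) / 1e3
print(f"aggregate RX rate ~ {aggregate_kpps:.1f} KPPS")  # ~12.7 KPPS

# 2) tx-error share on VirtualFunctionEthernet0/7/0, from `show int`,
#    relative to total tx attempts (assumed = tx packets + tx-error).
tx_packets = 21146633
tx_errors = 3824249
drop_share = tx_errors / (tx_packets + tx_errors)
print(f"tx-error share ~ {drop_share:.1%}")  # ~15.3%
```

This confirms the earlier "~12 KPPS" estimate, and shows the tx-error counter is a substantial fraction of attempted transmits rather than noise.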