dpdk-input    polling    13176087868    3432988412    0    1.11e3    .26
The line above says RX polling is going well: it averaged only 0.26 packets
per poll (the vectors/call column), so I think this is a clue that you still
have CPU headroom left. => Dave & team can confirm ...
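As a quick sanity check on that 0.26 figure, a back-of-envelope sketch using the Calls and Vectors counters quoted above:

```shell
# vectors/call = Vectors / Calls, using the dpdk-input row from "show run".
awk 'BEGIN {
  calls   = 13176087868   # dpdk-input Calls
  vectors = 3432988412    # dpdk-input Vectors
  printf "dpdk-input vectors/call = %.2f\n", vectors / calls
}'
# prints: dpdk-input vectors/call = 0.26
```

A value this far below the ~256-packet maximum vector size means most polls return few or no packets, i.e. the core has spare cycles.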
drops 36931507
rx-miss 22942665
Regarding the errors above, I am not sure how these error strings map to the
device errors. I tried to check the driver code but could not nail it down.
Again, Dave & team can throw some light on these.
rx-miss => I think these are packets the NIC itself dropped because the RX
ring filled up before the core could drain it (VPP polls, so these are not
missed interrupts).
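To put the two counters in proportion, a back-of-envelope sketch; the values are copied from the "show hardware" / "show interfaces" output quoted below:

```shell
# Loss ratios for TenGigabitEthernet3/0/0, from the counters in this thread.
awk 'BEGIN {
  rx_ok   = 3429247504   # rx frames ok   (show hardware)
  rx_miss = 22942665     # rx missed      (show hardware)
  drops   = 36931507     # drops          (show interfaces)
  offered = rx_ok + rx_miss
  printf "rx-miss: %.2f%% of offered packets\n", 100 * rx_miss / offered
  printf "drops:   %.2f%% of offered packets\n", 100 * drops   / offered
}'
# prints:
# rx-miss: 0.66% of offered packets
# drops:   1.07% of offered packets
```

So roughly 1.7% of the offered load is being lost overall, split between NIC-level misses and drops inside the graph.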
Regards
On Mon, Aug 7, 2017 at 4:57 PM, SAKTHIVEL ANAND S <[email protected]>
wrote:
> Hi Dave, thanks for your inputs.
> I am just experimenting based on your inputs: when I push < 7 Gbps I don't
> see any packet losses on the dpdk side and the vectors/call is still ~1.1.
> As you mentioned, < 2.0 means there is a lot of room left.
> Then I increased the traffic to a bit rate of about 8 to 9 Gbps and started
> seeing dpdk packet drops ("rx missed" in show hardware as well as "drops"
> in show interfaces).
>
> Can you tell me what I have missed here, and why dpdk is dropping even
> though the system has plenty of CPU left?
> Also, please guide me to understand why vectors/call is not increasing.
>
> Here is the full output:
> *root@ubuntu:~# vppctl sh ha*
> Name Idx Link Hardware
> TenGigabitEthernet3/0/0 5 up TenGigabitEthernet3/0/0
> Ethernet address 14:02:ec:73:ea:d0
> Intel X710/XL710 Family
> carrier up full duplex speed 10000 mtu 9216
>
> tx frames ok 159
> tx bytes ok 9540
> rx frames ok 3429247504
> rx bytes ok 4501655515148
> rx missed 22942665
> rx multicast frames ok 193
> extended stats:
> rx good packets 3429247703
> tx good packets 159
> rx good bytes 4501655766820
> tx good bytes 9540
> rx unicast packets 3452190013
> rx multicast packets 193
> rx broadcast packets 3
> rx unknown protocol packets 3452190211
> tx unicast packets 159
> rx size 64 packets 161
> rx size 128 to 255 packets 193
> rx size 1024 to 1522 packets 3452189888
> tx size 64 packets 159
> TenGigabitEthernet3/0/1 6 up TenGigabitEthernet3/0/1
> Ethernet address 14:02:ec:73:ea:d8
> Intel X710/XL710 Family
> carrier up full duplex speed 10000 mtu 9216
>
> tx frames ok 3389859164
> tx bytes ok 4298341418744
> rx frames ok 194
> rx bytes ok 36344
> rx multicast frames ok 193
> extended stats:
> rx good packets 194
> tx good packets 3389859164
> rx good bytes 36344
> tx good bytes 4298341418744
> rx unicast packets 1
> rx multicast packets 193
> rx unknown protocol packets 194
> tx unicast packets 3389859163
> tx broadcast packets 1
> rx size 64 packets 1
> rx size 128 to 255 packets 193
> tx size 64 packets 1
> tx size 1024 to 1522 packets 3389859163
> local0 0 down local0
> local
> pg/stream-0 1 down pg/stream-0
> Packet generator
> pg/stream-1 2 down pg/stream-1
> Packet generator
> pg/stream-2 3 down pg/stream-2
> Packet generator
> pg/stream-3 4 down pg/stream-3
> Packet generator
> root@ubuntu:~# vppctl sh int
> Name                      Idx  State  Counter      Count
> TenGigabitEthernet3/0/0     5  up     rx packets   3430406350
>                                       rx bytes     4473249467216
>                                       tx packets   159
>                                       tx bytes     9540
>                                       drops        36931507
>                                       punts        193
>                                       ip4          3430405997
>                                       rx-miss      22942665
> TenGigabitEthernet3/0/1     6  up     rx packets   194
>                                       rx bytes     36344
>                                       tx packets   3393474491
>                                       tx bytes     4302925653362
>                                       drops        1
>                                       punts        193
>                                       tx-error     2460933
> local0                      0  down
> pg/stream-0                 1  down
> pg/stream-1                 2  down
> pg/stream-2                 3  down
> pg/stream-3                 4  down
> *root@ubuntu:~# vppctl sh run*
> Time 5785.1, average vectors/node 1.14, last 128 main loops 0.00 per node 0.00
> vector rates in 5.9342e5, out 5.8661e5, drop 6.3839e3, punt 6.6723e-2
> Name                             State            Calls      Vectors  Suspends  Clocks  Vectors/Call
> TenGigabitEthernet3/0/0-output   active             159          159         0  4.92e2          1.00
> TenGigabitEthernet3/0/0-tx       active             159          159         0  1.43e3          1.00
> TenGigabitEthernet3/0/1-output   active      2984620677   3396056360         0  1.09e2          1.14
> TenGigabitEthernet3/0/1-tx       active      2984620677   3393593938         0  1.70e2          1.14
> admin-up-down-process            event wait           0            0         1  1.94e3          0.00
> api-rx-from-ring                 active               0            0     61782  2.34e5          0.00
> arp-input                        active             160          160         0  1.53e4          1.00
> cdp-process                      any wait             0            0      2053  1.18e3          0.00
> cnat-db-scanner                  any wait             0            0         1  2.46e3          0.00
> dhcp-client-process              any wait             0            0        58  4.27e3          0.00
> dpdk-input                       polling    13176087868   3432988412         0  1.11e3           .26
> dpdk-process                     any wait             0            0      1913  5.30e7          0.00
> error-drop                       active        35376992     36931508         0  1.17e2          1.04
> error-punt                       active             386          386         0  3.86e3          1.00
> ethernet-input                   active             546          546         0  3.69e3          1.00
> flow-report-process              any wait             0            0         1  3.23e3          0.00
> gmon-process                     time wait            0            0      1157  1.77e6          0.00
> interface-output                 active             159          159         0  1.57e3          1.00
> ip4-arp                          active               1            1         0  2.83e4          1.00
> ip4-input-no-checksum            active      3019924799   3432987866         0  2.04e2          1.14
> ip4-local                        active      3019924799   3432987866         0  1.42e2          1.14
> ip4-lookup                       active      6004545476   6829044226         0  1.73e2          1.14
> ip4-rewrite-transit              active      2984620676   3396056359         0  1.19e2          1.14
> ip4-udp-lookup                   active      3019924799   3432987866         0  1.18e2          1.14
> ip6-icmp-neighbor-discovery-ev   any wait             0            0      5784  1.67e3          0.00
> mynode_lookup                    active      3019924799   3432987866         0  4.06e2          1.14
> mynode_input                     active      3019924799   3432987866         0  1.20e2          1.14
> startup-config-process           done                 1            0         1  1.24e4          0.00
> unix-epoll-input                 polling     5253384335            0         0  3.92e2          0.00
> vhost-user-process               any wait             0            0         1  3.29e4          0.00
> vpe-link-state-process           event wait           0            0         3  1.57e4          0.00
> vpe-oam-process                  any wait             0            0      2836  1.75e3          0.00
> vpe-route-resolver-process       any wait             0            0        58  5.19e3          0.00
>
> *root@ubuntu:~# vppctl sh err*
>        Count  Node                        Reason
>   -896881608  mynode_lookup               MPGU packet processed
>     36931506  mynode_lookup               MPGU session does not exists
>   -859950102  mpgu_gtpu_input             GTPU packet processed
>            1  ip4-arp                     ARP requests sent
>          387  ethernet-input              unknown ethernet type
>          159  arp-input                   ARP replies sent
>            1  arp-input                   ARP replies received
>      2464079  TenGigabitEthernet3/0/1-tx  Tx packet drops (dpdk tx failure)
> root@ubuntu:~#
> Thanks
> Sakthivel S
>
>
>
> On Tue, Jul 25, 2017 at 8:26 PM, Dave Barach (dbarach) <[email protected]>
> wrote:
>
>> A vector size less than 2 indicates a *ton* of headroom. You shouldn’t
>> drop packets - except due to I/O bottlenecks - at a vector size
>> significantly less than 150.
>>
>>
>>
>> Rough, off-the cuff guesses [for production images, on “reasonable”
>> hardware]: vector size < 2 should be good for O(100kpps). Vector size 150
>> should be good for O(5mpps).
>>
>>
>>
>> Thanks… Dave
>>
>>
>>
>> P.S. The clocks/pkt figures below look like they may be from a
>> TAG=vpp_debug image... Or not. It’s hard to tell at 20 KPPS...
>>
>>
>>
>> *From:* SAKTHIVEL ANAND S [mailto:[email protected]]
>> *Sent:* Tuesday, July 25, 2017 9:20 AM
>> *To:* Dave Barach (dbarach) <[email protected]>
>> *Cc:* chenxndsc <[email protected]>; vpp-dev <[email protected]>
>>
>> *Subject:* Re: [vpp-dev] Re: vpp cpu usage utility
>>
>>
>>
>> Thanks Dave..
>>
>> I am on vpp16.06 and after adding my nodes I get the following in “vppctl
>> show r”.
>>
>> One thing I observe (as expected) is that when I push more traffic and the
>> "Vectors/Call" figure rises, the Clocks value of the nodes goes down.
>>
>> Now the question is that neither Clocks nor Vectors/Call indicates how much
>> processing power is still left.
>>
>>
>>
>> By the way, what are the prevailing vector size and the per-node vector
>> sizes in the output below?
>>
>> What do the values in "last 128 main loops 0.00 per node 0.00" mean, and
>> why are they zero?
>>
>>
>>
>> Thanks in advance.
>>
>>
>>
>> root@ubuntu:~# vppctl show run
>> Time 1678.2, average vectors/node 1.45, last 128 main loops 0.00 per node 0.00
>> vector rates in 1.9800e4, out 1.9337e4, drop 4.6368e2, punt 6.9123e-2
>> Name                           State          Calls     Vectors  Suspends  Clocks  Vectors/Call
>> GigabitEthernet13/0/0-output   active           117         117         0  2.71e3          1.00
>> GigabitEthernet13/0/0-tx       active           117         117         0  1.94e4          1.00
>> GigabitEthernet1b/0/0-output   active      22450924    32449638         0  1.26e3          1.45
>> GigabitEthernet1b/0/0-tx       active      22450924    32449638         0  3.64e3          1.45
>> arp-input                      active         21064       22591         0  2.61e4          1.07
>> dpdk-input                     polling    388349175    33227874         0  2.54e4           .09
>> dpdk-process                   any wait           0           0       560  2.09e5          0.00
>> error-drop                     active        709544      778129         0  2.25e3          1.09
>> error-punt                     active           116         116         0  1.16e4          1.00
>> ethernet-input                 active         27238       28772         0  5.35e3          1.06
>> interface-output               active            88          88         0  4.86e3          1.00
>> ip4-arp                        active            15          16         0  3.63e4          1.07
>> ip4-classify                   active          1600        1603         0  1.04e4          1.00
>> ip4-input-no-checksum          active      22854016    33199102         0  1.79e3          1.45
>> ip4-local                      active      22841621    33173678         0  1.29e3          1.45
>> ip4-lookup                     active      45295061    65626189         0  1.51e3          1.45
>> ip4-lookup-multicast           active         10723       22451         0  1.95e3          2.09
>> ip4-miss                       active         12074       23802         0  1.37e3          1.97
>> ip4-rewrite-transit            active      22450827    32449541         0  1.47e3          1.45
>> ip4-udp-lookup                 active      22841621    33173678         0  1.29e3          1.45
>> llc-input                      active          3393        3394         0  4.01e3          1.00
>> mynode_lookup                  active      22841621    33173678         0  1.89e3          1.45
>> mynode_input                   active      22841621    33173678         0  1.49e3          1.45
>> unix-epoll-input               polling    365372525           0         0  3.36e3          0.00
>>
>> --------------------------
>>
>> -Sakthivel S OM
>>
>>
>>
>> On Mon, Jul 17, 2017 at 11:25 PM, Dave Barach (dbarach) <
>> [email protected]> wrote:
>>
>> The “show runtime” command displays the prevailing vector size, and
>> per-node vector sizes.
>>
>>
>>
>> Those stats are reasonably equivalent to a vector engine load average.
>>
>>
>>
>> Thanks… Dave
>>
>>
>>
>> *From:* [email protected] [mailto:[email protected]] *On
>> Behalf Of *SAKTHIVEL ANAND S
>> *Sent:* Monday, July 17, 2017 12:57 PM
>> *To:* chenxndsc <[email protected]>; vpp-dev <[email protected]>
>> *Subject:* Re: [vpp-dev] Re: vpp cpu usage utility
>>
>>
>>
>> Thanks chenxndsc for the response.
>>
>>
>>
>> Yes, polling will take 100% of the CPU. My question is about finding out
>> how much more load a core can take.
>>
>> For example, suppose that along with the normal dpdk polling a core also
>> processes a few sessions: how much CPU does it spend on processing those
>> sessions, apart from polling? This is needed to work out how many sessions
>> a core can handle, and hence how many cores are needed to handle the
>> desired number of sessions.
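One way to turn "show runtime" numbers into that kind of sizing estimate is to sum the Clocks column over the nodes a packet traverses and divide into the core clock. A sketch; the 2.5 GHz clock and the 1500 clocks/packet figure are assumed example values, not measurements from this thread:

```shell
# Rough per-core capacity: pps ~= core_hz / total clocks spent per packet.
awk 'BEGIN {
  core_hz        = 2.5e9   # assumed 2.5 GHz core; substitute your CPU
  clocks_per_pkt = 1500    # assumed per-packet cost summed across graph nodes
  printf "approx %.1f Mpps per core\n", core_hz / clocks_per_pkt / 1e6
}'
# prints: approx 1.7 Mpps per core
```

Dividing the desired session load's packet rate by this per-core figure then gives a rough core count.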
>>
>>
>>
>> -Sakthivel Ss
>>
>>
>>
>> On 17-Jul-2017 21:08, "chenxndsc" <[email protected]> wrote:
>>
>> Hello ANAND,
>> 100% is the real CPU utilization, because VPP uses poll mode; if you know
>> how DPDK works, you will understand this better.
>>
>> Sent from my Huawei phone
>>
>>
>>
>> -------- Original message --------
>> Subject: [vpp-dev] vpp cpu usage utility
>> From: SAKTHIVEL ANAND S
>> To: vpp-dev
>> Cc:
>>
>> Hi,
>>
>> I am working on a use case where I just want to measure the CPU usage of
>> the vpp task.
>>
>> The Linux "top" command always shows 100% CPU (even when there is no
>> active processing, since vpp runs in a closed/tight loop).
>>
>> Is there any way to measure the "real/actual" CPU usage of the VPP task?
>>
>> My setup: Ubuntu 16.04 with VPP 16.06
>>
>>
>> --
>>
>> Thanks in advance
>> Sakthivel S OM
>>
>>
>>
>>
>> --
>>
>> Thanks
>> Sakthivel S OM
>>
>
>
>
> --
> Thanks
> Sakthivel S OM
>
>
> _______________________________________________
> vpp-dev mailing list
> [email protected]
> https://lists.fd.io/mailman/listinfo/vpp-dev
>