Sorry for the typo in the previous mail. The complete output of the "show run"
command is below:
vpp# show run
Thread 0 vpp_main (lcore 0)
Time 393.8, average vectors/node 1.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.0000e0, out 1.0158e-2, drop 2.5394e-3, punt 0.0000e0
             Name                State        Calls   Vectors  Suspends    Clocks  Vectors/Call
VirtualFunctionEthernet0/a/0-o   active           4         4         0    2.69e3          1.00
VirtualFunctionEthernet0/a/0-t   active           4         4         0    4.72e3          1.00
acl-plugin-fa-cleaner-process    event wait       0         0         1    1.64e4          0.00
admin-up-down-process            event wait       0         0         1    7.39e3          0.00
api-rx-from-ring                 any wait         0         0        29    2.64e5          0.00
bfd-process                      event wait       0         0         1    6.42e3          0.00
cdp-process                      any wait         0         0        41    1.28e5          0.00
dhcp-client-process              any wait         0         0         4    5.04e3          0.00
dpdk-ipsec-process               done             1         0         0    2.89e5          0.00
dpdk-process                     any wait         0         0       132    6.89e4          0.00
error-drop                       active           1         1         0    2.35e3          1.00
fib-walk                         any wait         0         0       197    8.02e3          0.00
flow-report-process              any wait         0         0         1    7.02e2          0.00
flowprobe-timer-process          any wait         0         0         1    4.88e3          0.00
ikev2-manager-process            any wait         0         0       394    4.81e3          0.00
ioam-export-process              any wait         0         0         1    1.18e3          0.00
ip4-glean                        active           1         1         0    1.56e4          1.00
ip4-lookup                       active           4         4         0    6.53e3          1.00
ip4-rewrite                      active           3         3         0    3.02e3          1.00
ip6-icmp-neighbor-discovery-ev   any wait         0         0       394    3.78e3          0.00
l2fib-mac-age-scanner-process    event wait       0         0         1    1.22e3          0.00
lisp-retry-service               any wait         0         0       197    8.37e3          0.00
lldp-process                     event wait       0         0         1    4.15e6          0.00
memif-process                    event wait       0         0         1    1.74e4          0.00
nat64-expire-walk                done             1         0         0    1.01e4          0.00
send-garp-na-process             event wait       0         0         1    1.29e3          0.00
snat-det-expire-walk             done             1         0         0    1.05e3          0.00
startup-config-process           done             1         0         1    5.97e3          0.00
udp-ping-process                 any wait         0         0         1    1.05e4          0.00
unix-cli-sockaddr family 1       active           0         0       298    7.72e6          0.00
unix-epoll-input                 polling     655279         0         0    1.56e6          0.00
vhost-user-process               any wait         0         0         1    1.33e3          0.00
vhost-user-send-interrupt-proc   any wait         0         0         1    1.06e3          0.00
vpe-link-state-process           event wait       0         0         2    5.78e3          0.00
vpe-oam-process                  any wait         0         0       193    5.68e3          0.00
vpe-route-resolver-process       any wait         0         0         4    7.33e3          0.00
vxlan-gpe-ioam-export-process    any wait         0         0         1    2.31e3          0.00
---------------
Thread 1 vpp_wk_0 (lcore 1)
Time 393.8, average vectors/node 17.54, last 128 main loops 0.00 per node 0.00
  vector rates in 4.5190e5, out 4.5190e5, drop 3.3698e0, punt 0.0000e0
             Name                State         Calls      Vectors  Suspends    Clocks  Vectors/Call
VirtualFunctionEthernet0/a/0-o   active     10146115    177952676         0    1.70e1         17.54
VirtualFunctionEthernet0/a/0-t   active     10146115    177952676         0    7.86e1         17.54
arp-input                        active         1337         1340         0    2.58e3          1.00
dpdk-input                       polling  6678250972    177954003         0    3.26e3           .03
error-drop                       active         1324         1327         0    8.68e2          1.00
ethernet-input                   active         1337         1340         0    1.87e3          1.00
interface-output                 active           16           16         0    8.06e2          1.00
ip4-icmp-echo-reply              active            3            3         0    3.01e4          1.00
ip4-icmp-echo-request            active            2            2         0    3.80e3          1.00
ip4-icmp-error                   active     10146097    177952658         0    1.28e2         17.54
ip4-icmp-input                   active            5            5         0    2.59e3          1.00
ip4-input-no-checksum            active     10146102    177952663         0    4.05e1         17.54
ip4-load-balance                 active            2            2         0    3.67e3          1.00
ip4-local                        active     10146102    177952663         0    5.54e1         17.54
ip4-lookup                       active     20292199    355905321         0    4.37e1         17.54
ip4-rewrite                      active     10146099    177952660         0    3.23e1         17.54
ip4-udp-lookup                   active     10146097    177952658         0    3.95e1         17.54
---------------
Thread 2 vpp_wk_1 (lcore 2)
Time 393.8, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.0000e0, out 0.0000e0, drop 0.0000e0, punt 0.0000e0
             Name                State         Calls      Vectors  Suspends    Clocks  Vectors/Call
---------------
Thread 3 vpp_wk_2 (lcore 3)
Time 393.8, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.0000e0, out 0.0000e0, drop 0.0000e0, punt 0.0000e0
             Name                State         Calls      Vectors  Suspends    Clocks  Vectors/Call
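
As a rough cross-check of the worker numbers (an estimate only: it reads the
Clocks column as average cycles per vector, counts ip4-lookup twice since it
appears twice in the forwarding path, and folds dpdk-input's empty-poll
overhead into the per-packet cost):

```shell
# Cycle budget per packet on Thread 1 (vpp_wk_0), from "show run" above
awk 'BEGIN {
  c  = 3.26e3       # dpdk-input (includes empty-poll overhead)
  c += 1.70e1       # VirtualFunctionEthernet0/a/0-o
  c += 7.86e1       # VirtualFunctionEthernet0/a/0-t
  c += 4.05e1       # ip4-input-no-checksum
  c += 2 * 4.37e1   # ip4-lookup (visited twice per packet)
  c += 5.54e1       # ip4-local
  c += 3.95e1       # ip4-udp-lookup
  c += 1.28e2       # ip4-icmp-error
  c += 3.23e1       # ip4-rewrite
  rate = 4.5190e5   # "vector rates in" for thread 1
  printf "~%.0f cycles/packet\n", c
  printf "~%.2f GHz consumed by the worker\n", c * rate / 1e9
}'
```

At roughly 3.7k cycles per packet, the ~452 kpps this thread is actually
passing already consumes about 1.7 GHz of one core, which suggests the single
active worker is saturated (threads 2 and 3 above are idle) rather than the
hugepage configuration being the limit.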
Thanks,
Rahul
On Wed, Aug 16, 2017 at 12:01 PM, Rahul Negi <[email protected]> wrote:
> Hi All,
> After configuring hugepages of size 2048 KiB at the host, I am not seeing
> any improvement in performance. After pumping more than 4 Mpps I am still
> seeing the rx_no_dma_resources counter incrementing at the host
> (rx_no_dma_resources: 93356).
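
When hugepages are configured at the host but make no difference, it is worth
confirming that the QEMU process is actually backed by them (the pgrep
expression below is illustrative):

```shell
# HugePages_Free should drop when the guest starts; if it does not,
# QEMU is not mapping guest RAM from hugetlbfs.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# Inspect the running QEMU's mappings for hugetlbfs-backed regions:
#   grep -i huge /proc/$(pgrep -of qemu)/maps
```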
>
> The output of the following commands is below:
>
> vpp# show interface
>               Name               Idx    State        Counter              Count
> VirtualFunctionEthernet0/a/0      1      up          rx packets       177953727
>                                                      rx bytes       11389034442
>                                                      tx packets       177952680
>                                                      tx bytes       16371646028
>                                                      drops                 1052
>                                                      ip4              177952663
> local0                            0      down
>
>
> vpp# show run
> [same "show run" output as resent in full at the top of this message]
>
> Thanks,
> Rahul
>
> On Fri, Aug 11, 2017 at 1:45 AM, Kinsella, Ray <[email protected]>
> wrote:
>
>> Hi Rahul,
>>
>> So there are a few additional hoops you have to jump through to get good
>> performance, the biggest of which is making sure that:
>>
>> 1. The host is setup to make hugepages available.
>> 2. QEMU is pointed at the hugepages.
>> 3. The guest is setup to make hugepages available.
>>
>> The following document from DPDK covers it in some detail.
>>
>> http://dpdk.readthedocs.io/en/v16.04/sample_app_ug/vhost.html
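
For steps 1 and 2, the host-side setup can look something like the following
(page count, mount point, and QEMU flags are illustrative, not taken from this
thread):

```shell
# Step 1: reserve 2 MiB hugepages on the host (needs root):
#   sysctl vm.nr_hugepages=2048
# and verify the reservation:
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# Step 2: mount hugetlbfs and back the guest's memory with it:
#   mount -t hugetlbfs hugetlbfs /dev/hugepages
#   qemu-system-x86_64 -m 4096 -mem-path /dev/hugepages -mem-prealloc ...
```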
>>
>> Ray K
>>
>>
>> On 09/08/2017 23:33, Rahul Negi wrote:
>>
>>> Hi All,
>>> I am working on a use case where I want to measure the maximum PPS
>>> handled by VPP in an SR-IOV configuration. I have created a virtual
>>> machine on a host with the following specifications:
>>> 1. RHEL 7.3 installed
>>> 2. Intel X540 10 GbE NIC attached
>>>
>>> I have created a virtual function on one of the interfaces of the
>>> 10 GbE NIC (e.g. ens3f0) and attached it to my virtual machine. The VM
>>> runs Ubuntu 16.04, and the VPP version on the VM is 17.10.
>>>
>>> vpp# show version verbose
>>> Version: v17.10-rc0~86-g7d4a22c
>>> Compiled by: root
>>> Compile host: ubuntu
>>> Compile date: Wed Jul 26 18:56:51 EDT 2017
>>> Compile location: /root/vpp/vpp
>>> Compiler: GCC 5.4.0 20160609
>>> Current PID: 5006
>>>
>>> Currently my VM has 4 vCPUs, and the VPP CPU model is as follows:
>>> 1. One main thread
>>> 2. Three worker threads
>>>
>>> I am not able to get more than 4 Mpps with this configuration of VPP.
>>> When I pump more than 4 Mpps of traffic into the VM, I can see the
>>> rx_no_dma_resources counter (5628104) incrementing at the host, since
>>> no ethtool stats are available for the virtual functions attached to
>>> the VM in an SR-IOV configuration.
>>>
>>> Guest vCPUs are pinned to host physical CPUs.
>>>
>>> So is 4 Mpps the expected number for this configuration?
>>>
>>> Thanks,
>>> Rahul
>>>
>>>
>
_______________________________________________
vpp-dev mailing list
[email protected]
https://lists.fd.io/mailman/listinfo/vpp-dev