Q1:

I run VPP in a KVM VM; the software NIC frontend is virtio and the backend is
vhost-user.
I test VPP route performance with a Tester (VPP code version is
stable/16.09); the VM connects to the Tester through the OVS-DPDK
datapath.


vpp startup.conf is:
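(For reference, a minimal performance-oriented startup.conf for this kind of virtio/vhost-user setup might look like the sketch below; the core numbers and PCI addresses are placeholders, not my exact values:)

```
unix {
  nodaemon
}
cpu {
  # pin the main thread and dedicate worker cores (placeholder core ids)
  main-core 1
  corelist-workers 2-3
}
dpdk {
  # hugepage memory per NUMA socket, in MB
  socket-mem 1024
  # PCI addresses of the virtio devices (placeholders)
  dev 0000:00:04.0
  dev 0000:00:05.0
}
```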


vppctl config is:


set int state interface1 up
set int state interface2 up
set int ip address interface1 172.16.1.1/24
set int ip address interface2 172.16.2.1/24
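(For a routed test like this, static routes and static ARP entries toward the Tester are typically also needed; a sketch, where the destination prefix, next-hop IP, and MAC are placeholders:)

```
ip route add 10.1.0.0/16 via 172.16.2.2 interface2
set ip arp interface2 172.16.2.2 aa:bb:cc:dd:ee:01
```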

The KVM config is as described at:
https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)

The VPP route forwarding rate is 80 kpps. My host uses an E5-2630 v3 CPU.

In my mind, VPP is high performance, so I think my test config may not be
suitable for high performance.
Is my config right for high performance with VPP?
Is there some config I am not using that would make VPP faster?


Q2:

After the performance test, I executed the CLI "show error";
it shows the <Tx packet drop (dpdk tx failure)> item has counted many packets.

In my experience, if
VPP has no CPU cycles left to forward packets, it loses packets in DPDK rx, not tx.

If packets are lost in tx, I think the reason would be that the tx NIC backend
in the OVS-DPDK datapath is
not taking the packets away in time. But in my test environment, the OVS-DPDK
datapath can receive packets at 10 Mpps, while the VPP forwarding rate is only 80 kpps.
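(To pin down where the drop happens, I check counters on both sides of the vhost-user link, along these lines; the port and bridge names are placeholders for my setup:)

```
# Inside the VM: per-node error counters and NIC/ring state
vppctl show error
vppctl show hardware-interfaces

# On the host: OVS-DPDK statistics for the vhost-user port
ovs-vsctl get Interface vhost-user1 statistics
ovs-ofctl dump-ports br0
```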

Hope to get your reply :-)

Best Regards

Yu
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
