Hi all,

We have found that OVS has a serious performance issue. We launch only one VM on each of two compute nodes and run an iperf small-packet UDP pps test between these two VMs; we see about 180,000 pps (packets per second, -l 16). However:

1) If we add 100 veth ports to the br-int bridge on each compute node, the pps performance drops to about 50,000 pps (a rough sketch of the test setup follows this list).
2) If we launch one more VM on every compute node, but don't run any workload in it, the pps performance drops to about 90,000 pps (note: no extra veth ports in this test).
3) If we launch two more VMs on every compute node (three VMs per compute node in total), but don't run any workload in them, the pps performance drops to about 50,000 pps (note: no extra veth ports in this test).
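
In case it helps to reproduce, the setup is roughly along the following lines (just a sketch; the port names, addresses, and bandwidth target here are illustrative, not our exact configuration):

    # scenario 1: create a dummy veth pair and attach one end to br-int
    # (repeated 100 times with different names)
    ip link add veth0 type veth peer name veth0p
    ip link set veth0 up
    ip link set veth0p up
    ovs-vsctl add-port br-int veth0

    # small-packet UDP pps test between the two VMs
    iperf -s -u                                  # inside the server VM
    iperf -c <server VM IP> -u -l 16 -b 1000M    # inside the client VM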

Can anybody help explain why this happens? Is there any known way to optimize it? I really think OVS performance is bad (at least that is the conclusion we can draw from our test results), though I don't want to defame OVS ☺

BTW, we use the OVS kernel datapath and vhost. We can see that every port has a vhost kernel thread; it runs at 100% CPU utilization while iperf runs in the VM, but for the idle VMs the corresponding vhost threads still show about 30% CPU utilization, which I don't understand.
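
In case it's useful, the vhost threads and their CPU usage can be seen with something like the following (the PID is a placeholder):

    # list the per-port vhost kernel threads
    ps -ef | grep vhost
    # watch per-thread CPU utilization for one of them
    top -H -p <vhost thread PID>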

In addition, we find that small-packet UDP performance is also very poor on the physical NIC, but it can still reach 260,000 pps with -l 80, which is more than enough to cover the VXLAN-encapsulated size of our 16-byte payload: VXLAN header (8 bytes) + inner Ethernet header (14) + inner IP/UDP headers (28) + 16-byte payload = 66 bytes. Even if we account for the overhead the OVS bridge introduces, the pps performance between VMs should be able to reach at least 200,000 pps; the other VMs and ports should not hurt it this much, because they are idle with no workload at all.
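
Spelling out that size calculation for the 16-byte inner payload:

    VXLAN header             8 bytes
    inner Ethernet header   14 bytes
    inner IP header         20 bytes
    inner UDP header         8 bytes
    inner payload (-l 16)   16 bytes
    --------------------------------
    total                   66 bytes  (well under the 80-byte payload used in the NIC test)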
