Hi, William

I used OVS-DPDK to test it. You shouldn't add a tap interface to an OVS-DPDK
bridge as a system port; if you use a vdev to add the tap, virtio_user is
designed exactly for that, but then it won't use this receive function to
receive packets.
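For reference, a minimal sketch of attaching a kernel tap device through DPDK's virtio_user vdev (vhost-kernel backend) rather than as a system port. The bridge name, port name, and devargs values below are illustrative assumptions, not taken from this thread:

```shell
# Sketch: attach a tap device to an OVS-DPDK bridge via the DPDK
# virtio_user vdev.  br0, virtiouser0, and the queue settings are
# assumptions; adjust them for your setup.
ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 \
    type=dpdk \
    options:dpdk-devargs=virtio_user0,path=/dev/vhost-net,queues=1,queue_size=1024
```

With a setup like this, packets are received by DPDK's virtio_user PMD rather than by netdev_linux_rxq_recv_tap(), which is why the batched tap receive path in the patch would not be exercised.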

At 2019-12-17 02:55:50, "William Tu" <u9012...@gmail.com> wrote:
>On Fri, Dec 06, 2019 at 02:09:24AM -0500, yang_y...@163.com wrote:
>> From: Yi Yang <yangy...@inspur.com>
>> 
>> Currently, netdev_linux_rxq_recv_tap and netdev_linux_rxq_recv_sock
>> receive only a single packet per call, which is very inefficient.
>> In my test case, I add two tap ports or veth ports into an OVS
>> bridge (datapath_type=netdev) and use iperf3 to test performance
>> between the two ports (they are placed in different network
>> namespaces).
>> 
>> The result is as below:
>> 
>>   tap:  295 Mbits/sec
>>   veth: 207 Mbits/sec
>> 
>> After I changed netdev_linux_rxq_recv_tap and
>> netdev_linux_rxq_recv_sock to use batch processing, performance
>> is boosted by roughly 7x; here is the result:
>> 
>>   tap:  1.96 Gbits/sec
>>   veth: 1.47 Gbits/sec
>> 
>> Undoubtedly this is a huge improvement, although it still can't
>> match the OVS kernel datapath.
>> 
>> FYI, here is the result for the OVS kernel datapath:
>> 
>>   tap:  37.2 Gbits/sec
>>   veth: 36.3 Gbits/sec
>> 
>> Note: performance results depend heavily on the test machine;
>> you shouldn't expect identical numbers on yours.
>> 
>> Signed-off-by: Yi Yang <yangy...@inspur.com>
>
>Hi Yi Yang,
>
>Are you testing this using OVS-DPDK?
>If so, you should use DPDK's vdev support to open and attach the
>tap/veth device to OVS. I think you'll see much better performance.
>
>The performance issue you pointed out only happens when using the
>userspace datapath without the DPDK library, i.e. where afxdp is
>used. I'm still looking for a better solution for faster veth
>(af_packet) and tap interfaces.
>
>Thanks
>William
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
