Hi
I've been trying to measure the throughput of Open vSwitch with DPDK for
physical ports. My test setup is shown below:
+-----------------------------+
| 82599ES 10-Gigabit SFI/SFP+ |
+-----------------------------+
   | p0 |             | p1 |
   +----+             +----+
     ^                  ^
     |                  |
     v                  v
   +----+             +----+
   | p0 |             | p1 |
+-----------------------------+
|  NAPATECH Adapter - 2 port  |
+-----------------------------+
I manually created the OvS bridge, the DPDK ports, and a flow entry as follows:
===============================
$ sudo ovs-vsctl show
dfa41660-4e24-4f4e-87de-a21398f51246
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
$ sudo ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3659.413s, table=0, n_packets=5061744149, n_bytes=3111037716380, idle_age=582, in_port=1 actions=output:2
===============================
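For reference, here is a sketch of the commands that produce a configuration like the one above (assuming the userspace datapath and OVS 2.5-style naming, where an interface called dpdkN binds DPDK device N):
===============================
$ sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$ sudo ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
$ sudo ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
$ sudo ovs-ofctl add-flow br0 in_port=1,actions=output:2
===============================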
Then, I isolated cores from the Linux kernel based on the output of
dpdk/tools/cpu_layout.py:
===============================
$ ./cpu_layout.py
cores = [0, 1, 2, 8, 9, 10]
sockets = [0, 1]
         Socket 0    Socket 1
         --------    --------
Core 0   [0, 12]     [6, 18]
Core 1   [1, 13]     [7, 19]
Core 2   [2, 14]     [8, 20]
Core 8   [3, 15]     [9, 21]
Core 9   [4, 16]     [10, 22]
Core 10  [5, 17]     [11, 23]
$ sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=pt intel_iommu=on
vfio_iommu_type1.allow_unsafe_interrupts=1 default_hugepagesz=1G hugepagesz=1G
hugepages=5 isolcpus=12,13,14,15,16,17,18,19,20,21,22,23"
$ sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=FFF000
$ sudo top -p `pidof ovs-vswitchd` -H -d1
 3098 root      20   0 4250504  65688   2672 R 99,9  0,4  26:18.44 pmd42           14
 3101 root      20   0 4250504  65688   2672 R 99,9  0,4  26:18.45 pmd45           17
 3096 root      20   0 4250504  65688   2672 R 99,8  0,4  26:18.83 pmd46           12
 3097 root      20   0 4250504  65688   2672 R 99,8  0,4  26:18.77 pmd41           13
 3099 root      20   0 4250504  65688   2672 R 98,8  0,4  26:18.43 pmd43           15
 3100 root      20   0 4250504  65688   2672 R 98,8  0,4  26:18.42 pmd44           16
 2652 root      20   0 4250504  65688   2672 S  0,0  0,4   0:03.15 ovs-vswitchd     1
 2653 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.01 dpdk_watchdog3   1
 2654 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.05 vhost_thread2    1
 2655 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.73 urcu1            1
 2683 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler27        1
 2684 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler26        1
 2685 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler25        1
 2686 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler24        1
 2689 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler23        1
 2690 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler17        1
 2691 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler18        1
 2692 root      20   0 4250504  65688   2672 S  0,0  0,4   0:00.00 handler19        1
 2693 root      20   0 4250504  65688   2672 S  0,0  0,4   0:01.82 revalidator20    1
 2694 root      20   0 4250504  65688   2672 S  0,0  0,4   0:01.51 revalidator21    1
 2695 root      20   0 4250504  65688   2672 S  0,0  0,4   0:01.51 revalidator16    1
 2696 root      20   0 4250504  65688   2672 S  0,0  0,4   0:01.51 revalidator22    1
===============================
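To spell out the pmd-cpu-mask: 0xFFF000 has bits 12-23 set, which matches the
isolcpus list above. A quick one-liner to double-check which logical CPUs the
mask selects:
===============================
$ python -c "print([bit for bit in range(24) if (0xFFF000 >> bit) & 1])"
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
===============================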
When I replay a pcap file on NAPATECH port0 at 10G rate, OvS+DPDK reaches line
rate. However, when I send 64-byte UDP packets at 10G rate, the throughput is
only ~550M. In addition, OvS+DPDK uses only one core; the others stay idle even
though I assigned 12 cores.
===============================
$ sudo ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 0 core_id 13:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
        polling cycles:2608878847988 (100.00%)
        processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 14:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
pmd thread numa_id 0 core_id 16:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
main thread:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
        polling cycles:76924028 (100.00%)
        processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 12:
        emc hits:1963602000
        megaflow hits:44356
        miss:9
        lost:0
        polling cycles:1007444696020 (28.15%)
        processing cycles:2570868572784 (71.85%)
        avg cycles per packet: 1822.28 (3578313268804/1963646365)
        avg processing cycles per packet: 1309.23 (2570868572784/1963646365)
pmd thread numa_id 0 core_id 15:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
pmd thread numa_id 0 core_id 17:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
===============================
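One thing I am unsure about: if I read INSTALL.DPDK.md correctly, each DPDK
port is polled through a single rx queue by default, so all packets arriving
on dpdk0 are handled by one PMD thread (core 12 above). Would raising the
queue count help, e.g. with something like the following (OVS 2.5 syntax, if
I have it right)? I suppose RSS can only spread load across queues when the
traffic contains multiple 5-tuples, though.
===============================
$ sudo ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
===============================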
Could you please explain why OvS+DPDK did not reach line rate (10G) for
64-byte UDP traffic?
Thanks in advance.
- Volkan