[ovs-dev] [Question] How can I get traffic statistics of an OVN QoS item?

2023-08-01 Thread yang_y_yi
Hi, folks. After I added QoS, I can see it with "ovn-nbctl list qos": _uuid : 690ecdf9-c6ae-41ef-a4ae-0a7a7c7c33ce, action : {}, bandwidth : {rate=100}, direction : to-lport, external_ids : {}, match : "outport ==

Re: [ovs-dev] [PATCH v4] Enable VXLAN TSO for DPDK datapath

2021-02-04 Thread yang_y_yi
At 2021-02-05 01:38:14, "Flavio Leitner" wrote: "Hi Yi, Again, sorry for the delay in reviewing the patch. The patch is using the outer length fields from DPDK, which seems to be a problem in OVS because most of the packet transformation functions are not aware of that. Therefore, after

[ovs-dev] [PATCH v4] Enable VXLAN TSO for DPDK datapath

2020-11-25 Thread yang_y_yi
From: Yi Yang Many NICs support VXLAN TSO, which can help improve across-compute-node VM-to-VM performance when the MTU is set to 1500. This patch allows the dpdkvhostuserclient interface and veth/tap interfaces to leverage the NICs' offload capability to maximize across-compute-node TCP

Re: [ovs-dev] [PATCH V3 1/4] Enable VXLAN TSO for DPDK datapath

2020-11-01 Thread yang_y_yi
Thanks a lot, Flavio, please check the inline comments for more discussion. At 2020-10-31 01:55:57, "Flavio Leitner" wrote: "Hi Yi, Thanks for the patch and sorry for the delay in reviewing it. See my comments in line. Thanks, fbl. On Fri, Aug 07, 2020 at 06:56:45PM +0800,

[ovs-dev] [PATCH] conntrack: fix zone sync issue

2020-10-18 Thread yang_y_yi
From: Yi Yang In some use cases, zones are used to differentiate conntrack state tables, so the zone should also be synchronized if it is set. Signed-off-by: Yi Yang --- include/network.h | 1 + src/build.c | 3 +++ src/parse.c | 5 + 3 files changed, 9 insertions(+) diff

[ovs-dev] [PATCH] netdev-dpdk: fix incorrect shinfo initialization

2020-10-14 Thread yang_y_yi
From: Yi Yang shinfo is used to store the reference counter and free callback of an external buffer, but it is stored in the mbuf if the mbuf has tailroom for it. This is wrong because the mbuf (and its data) can be freed before the external buffer, for example: pkt2 = rte_pktmbuf_alloc(mp);
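
The fix direction described above, keeping the shared info out of the mbuf so it cannot disappear before the external buffer, matches the standard DPDK pattern of placing shinfo at the tail of the external buffer itself. A minimal, hypothetical sketch of that generic pattern (buffer size and rte_malloc allocation are assumptions; this is library usage, not the OVS patch itself):

    #include <rte_mbuf.h>
    #include <rte_malloc.h>

    static void ext_buf_free_cb(void *addr, void *opaque)
    {
        (void)opaque;
        rte_free(addr);                     /* release the external buffer */
    }

    static struct rte_mbuf *attach_external_buf(struct rte_mempool *mp,
                                                uint16_t buf_len)
    {
        void *buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
        if (!buf) {
            return NULL;
        }

        /* Places struct rte_mbuf_ext_shared_info at the tail of 'buf'
         * (shrinking buf_len), so the refcount and free callback live as
         * long as the buffer, not as long as any one mbuf. */
        struct rte_mbuf_ext_shared_info *shinfo =
            rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
                                               ext_buf_free_cb, NULL);
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (!shinfo || !m) {
            rte_pktmbuf_free(m);
            rte_free(buf);
            return NULL;
        }

        rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
                                  buf_len, shinfo);
        return m;
    }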

[ovs-dev] [PATCH v3] userspace: fix bad UDP performance issue of veth

2020-09-23 Thread yang_y_yi
From: Yi Yang iperf3 UDP performance in the veth-to-veth case is very bad because of heavy packet loss; the root cause is that rmem_default and wmem_default are only 212992, but the iperf3 UDP test uses an 8K UDP size, which results in many UDP fragments when the MTU is 1500; one 8K UDP send
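
Since the root cause named above is undersized default socket buffers, the general remedy is to enlarge the per-socket receive/send buffers so bursts of fragments from 8K datagrams are not dropped. A hedged, generic sketch of that idea, not the patch's actual change (the *FORCE options need CAP_NET_ADMIN to exceed rmem_max/wmem_max, and the 4 MB figure is an arbitrary illustration):

    #include <sys/socket.h>

    /* Try the privileged *FORCE options first (they may exceed
     * rmem_max/wmem_max), fall back to the plain options otherwise. */
    static int enlarge_socket_buffers(int fd, int bytes)
    {
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &bytes, sizeof bytes) < 0
            && setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0) {
            return -1;
        }
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUFFORCE, &bytes, sizeof bytes) < 0
            && setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes) < 0) {
            return -1;
        }
        return 0;
    }

    /* e.g. enlarge_socket_buffers(sock_fd, 4 * 1024 * 1024); */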

[ovs-dev] [PATCH v2 3/3] Fix tap interface status update issue in network namespace

2020-09-09 Thread yang_y_yi
From: Yi Yang Currently OVS can't get the link state, MTU, MAC, driver, etc. when a tap interface is in a network namespace; with the netns option and netns helper functions, this info can be obtained. This patch fixes all these issues and makes sure a tap interface in a network namespace can get the same info as it

[ovs-dev] [PATCH v2 0/3] userspace: enable tap interface statistics and status update support

2020-09-09 Thread yang_y_yi
From: Yi Yang The OVS userspace datapath can't support tap interface statistics and status updates, so users can't get this information with the command "ovs-vsctl list interface tap1"; the root cause of this issue is that OVS doesn't know the network namespace of the tap interface. This patch series fixes this issue and

[ovs-dev] [PATCH v2 1/3] Add netns option for tap interface in userspace datapath

2020-09-09 Thread yang_y_yi
From: Yi Yang In the userspace datapath, "ovs-vsctl list interface" can't get interface statistics and there are many WARN logs; it can work normally if OVS has the correct network namespace. This patch enables the netns option for tap interfaces; it is the prerequisite for the interface statistics and

[ovs-dev] [PATCH v2 2/3] Fix tap interface statistics issue

2020-09-09 Thread yang_y_yi
From: Yi Yang After a tap interface is moved to a network namespace, "ovs-vsctl list interface tapXXX" can't get statistics info for the tap interface; the root cause is that OVS still gets statistics info in the root namespace. With the help of the netns option, OVS can get statistics info in the tap interface's netns. This
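
The "netns option" approach referred to above boils down to reading the statistics from inside the tap device's namespace instead of the root namespace. A hypothetical, generic sketch of that technique using setns(), assuming the namespace is bind-mounted under /var/run/netns/<name> as "ip netns" does; this is not the patch's actual helper, and a real implementation would switch back to the original namespace afterwards:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static long read_rx_packets_in_netns(const char *netns, const char *ifname)
    {
        char path[256];
        long value = -1;

        snprintf(path, sizeof path, "/var/run/netns/%s", netns);
        int nsfd = open(path, O_RDONLY);
        if (nsfd < 0) {
            return -1;
        }

        /* Enter the tap device's network namespace (needs CAP_SYS_ADMIN). */
        if (setns(nsfd, CLONE_NEWNET) == 0) {
            snprintf(path, sizeof path,
                     "/sys/class/net/%s/statistics/rx_packets", ifname);
            FILE *f = fopen(path, "r");
            if (f) {
                if (fscanf(f, "%ld", &value) != 1) {
                    value = -1;
                }
                fclose(f);
            }
        }
        close(nsfd);
        return value;
    }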

[ovs-dev] [PATCH v2] userspace: fix bad UDP performance issue of veth

2020-09-09 Thread yang_y_yi
From: Yi Yang iperf3 UDP performance in the veth-to-veth case is very bad because of heavy packet loss; the root cause is that rmem_default and wmem_default are only 212992, but the iperf3 UDP test uses an 8K UDP size, which results in many UDP fragments when the MTU is 1500; one 8K UDP send

[ovs-dev] [PATCH] userspace: fix bad UDP performance issue of veth

2020-08-20 Thread yang_y_yi
From: Yi Yang iperf3 UDP performance in the veth-to-veth case is very bad because of heavy packet loss; the root cause is that rmem_default and wmem_default are only 212992, but the iperf3 UDP test uses an 8K UDP size, which results in many UDP fragments when the MTU is 1500; one 8K UDP send

[ovs-dev] [PATCH V1 4/4] Fix tap interface status update issue in network namespace

2020-08-16 Thread yang_y_yi
From: Yi Yang Currently OVS can't get the link state, MTU, MAC, driver, etc. when a tap interface is in a network namespace; with the netns option and netns helper functions, this info can be obtained. This patch fixes all these issues and makes sure a tap interface in a network namespace can get the same info as it

[ovs-dev] [PATCH V1 2/4] Add netns option for tap interface in userspace datapath

2020-08-16 Thread yang_y_yi
From: Yi Yang In the userspace datapath, "ovs-vsctl list interface" can't get interface statistics and there are many WARN logs; it can work normally if OVS has the correct network namespace. This patch enables the netns option for tap interfaces; it is the prerequisite for the interface statistics and

[ovs-dev] [PATCH V1 1/4] Use pmd thread to handle system interfaces

2020-08-16 Thread yang_y_yi
From: Yi Yang Currently all interfaces are handled by the single ovs-vswitchd thread in the userspace datapath, which is not scalable, especially in the OpenStack case, where many tap and veth interfaces are attached to the bridge to handle routing and floating IPs. But ovs-netdev can't be handled by pmd

[ovs-dev] [PATCH V1 3/4] Fix tap interface statistics issue

2020-08-16 Thread yang_y_yi
From: Yi Yang After a tap interface is moved to a network namespace, "ovs-vsctl list interface tapXXX" can't get statistics info for the tap interface; the root cause is that OVS still gets statistics info in the root namespace. With the help of the netns option, OVS can get statistics info in the tap interface's netns. This

[ovs-dev] [PATCH V1 0/4] Enable pmd to support for system interfaces

2020-08-16 Thread yang_y_yi
From: Yi Yang In the OpenStack and OVS DPDK user scenario, many tap interfaces and veth interfaces are added into the OVS bridge, but only the single ovs-vswitchd thread is handling them; this results in very bad performance for floating IP and L3 routing in DVR mode. This patch series is just to

[ovs-dev] [PATCH V3 2/4] Add GSO support for DPDK data path

2020-08-07 Thread yang_y_yi
From: Yi Yang GSO (Generic Segmentation Offload) can segment large UDP and TCP packets into small packets per the MTU of the destination; especially in the case that the physical NIC can't do hardware VXLAN TSO and VXLAN UFO offload, GSO can make sure userspace TSO still works without dropping packets. In addition, GSO
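
As a rough illustration of software GSO in a DPDK datapath, the DPDK GSO library can be asked to split one oversized (optionally VXLAN-encapsulated) TCP packet into MTU-sized segments. A hedged sketch of that generic library usage, not the OVS patch itself (mempools, the gso_size value, and the offload/flag macro names are assumptions; newer DPDK releases rename PKT_TX_*/DEV_TX_OFFLOAD_* to RTE_MBUF_F_TX_*/RTE_ETH_TX_OFFLOAD_*):

    #include <rte_gso.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static int gso_one_packet(struct rte_mbuf *pkt,
                              struct rte_mempool *direct_pool,
                              struct rte_mempool *indirect_pool,
                              struct rte_mbuf **out, uint16_t out_sz)
    {
        struct rte_gso_ctx ctx = {
            .direct_pool = direct_pool,
            .indirect_pool = indirect_pool,
            .gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO,
            .gso_size = 1398,   /* payload bytes per output segment */
            .flag = 0,
        };

        /* 'pkt' must already request TSO (e.g. PKT_TX_TCP_SEG) and carry
         * valid l2_len/l3_len/l4_len (plus outer_* lengths for VXLAN).
         * Returns the number of output segments, or a negative error. */
        return rte_gso_segment(pkt, &ctx, out, out_sz);
    }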

[ovs-dev] [PATCH V3 0/4] userspace: enable VXLAN TSO, GSO and GRO

2020-08-07 Thread yang_y_yi
From: Yi Yang Many NICs can support VXLAN TSO, which can improve VM-to-VM TCP performance, but for UDP, most NICs can't offload UFO, so GSO is very necessary for UDP when userspace TSO is enabled; GSO also can do VXLAN TSO if the NIC can't support it. GRO is necessary if TSO and UFO are enabled; it

[ovs-dev] [PATCH V3 1/4] Enable VXLAN TSO for DPDK datapath

2020-08-07 Thread yang_y_yi
From: Yi Yang Many NICs support VXLAN TSO, which can help improve across-compute-node VM-to-VM performance when the MTU is set to 1500. This patch allows the dpdkvhostuserclient interface and veth/tap interfaces to leverage the NICs' offload capability to maximize across-compute-node TCP

[ovs-dev] [PATCH V3 3/4] Add VXLAN TCP and UDP GRO support for DPDK data path

2020-08-07 Thread yang_y_yi
From: Yi Yang GRO (Generic Receive Offload) can help improve performance when TSO (TCP Segmentation Offload) or VXLAN TSO is enabled on the transmit side; it can avoid the overhead of the OVS DPDK data path and of enqueuing to vhost for the VM by merging many small packets into large packets (65535 bytes at most) once it
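
For the receive side, the DPDK GRO library offers a lightweight mode that merges a burst of small TCP/IPv4 (optionally VXLAN-encapsulated) segments back into large packets before they are handed further down the datapath. A hedged, generic sketch of that library usage, not the OVS patch (the flow-table sizes are arbitrary):

    #include <rte_gro.h>
    #include <rte_mbuf.h>

    static uint16_t gro_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_TCP_IPV4 | RTE_GRO_IPV4_VXLAN_TCP_IPV4,
            .max_flow_num = 32,
            .max_item_per_flow = 32,
        };

        /* Merges what it can in place and returns the number of packets
         * remaining in 'pkts' after reassembly. */
        return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }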

[ovs-dev] [PATCH V3 4/4] Update Documentation/topics/userspace-tso.rst

2020-08-07 Thread yang_y_yi
From: Yi Yang With GSO and GRO enabled, OVS DPDK can do GSO in software if the NIC can't support TSO or VXLAN TSO hardware offload. Signed-off-by: Yi Yang --- Documentation/topics/userspace-tso.rst | 13 ++--- 1 file changed, 6 insertions(+), 7 deletions(-) diff

[ovs-dev] [PATCH v2 5/5] Update Documentation/topics/userspace-tso.rst

2020-07-01 Thread yang_y_yi
From: Yi Yang With GSO and GRO enabled, OVS DPDK can do GSO in software if the NIC can't support TSO or VXLAN TSO hardware offload. Signed-off-by: Yi Yang --- Documentation/topics/userspace-tso.rst | 13 ++--- 1 file changed, 6 insertions(+), 7 deletions(-) diff

[ovs-dev] [PATCH v2 1/5] Fix dp_packet_set_size error for multi-seg mbuf

2020-07-01 Thread yang_y_yi
From: Yi Yang For a multi-seg mbuf, pkt_len isn't equal to data_len: data_len is the data_len of the first seg, while pkt_len is the sum of data_len of all the segs, so for such packets dp_packet_set_size shouldn't change data_len. Signed-off-by: Yi Yang --- lib/dp-packet.h | 4 +++- 1 file changed, 3
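
The distinction the commit message relies on can be seen directly on a chained mbuf: data_len describes only the first segment while pkt_len is the total, which is why a size setter must not blindly copy one into the other. A small, hypothetical DPDK sketch of that invariant (segment sizes are arbitrary; this is not the patch's code):

    #include <rte_mbuf.h>

    static struct rte_mbuf *make_two_seg_packet(struct rte_mempool *mp)
    {
        struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
        struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);
        if (!head || !tail) {
            rte_pktmbuf_free(head);
            rte_pktmbuf_free(tail);
            return NULL;
        }

        rte_pktmbuf_append(head, 1000);   /* first segment: 1000 data bytes */
        rte_pktmbuf_append(tail, 500);    /* second segment: 500 data bytes */
        rte_pktmbuf_chain(head, tail);    /* 'head' now has two segments */

        /* head->data_len == 1000 (first seg only),
         * head->pkt_len  == 1500 (sum over all segs). */
        return head;
    }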

[ovs-dev] [PATCH v2 4/5] Add VXLAN TCP and UDP GRO support for DPDK data path

2020-07-01 Thread yang_y_yi
From: Yi Yang GRO (Generic Receive Offload) can help improve performance when TSO (TCP Segmentation Offload) or VXLAN TSO is enabled on the transmit side; it can avoid the overhead of the OVS DPDK data path and of enqueuing to vhost for the VM by merging many small packets into large packets (65535 bytes at most) once it

[ovs-dev] [PATCH v2 0/5] userspace: enable VXLAN TSO, GSO and GRO

2020-07-01 Thread yang_y_yi
From: Yi Yang Many NICs can support VXLAN TSO, which can improve VM-to-VM TCP performance, but for UDP, most NICs can't offload UFO, so GSO is very necessary for UDP when userspace TSO is enabled; GSO also can do VXLAN TSO if the NIC can't support it. GRO is necessary if TSO and UFO are enabled; it

[ovs-dev] [PATCH v2 3/5] Add GSO support for DPDK data path

2020-07-01 Thread yang_y_yi
From: Yi Yang GSO (Generic Segmentation Offload) can segment large UDP and TCP packets into small packets per the MTU of the destination; especially in the case that the physical NIC can't do hardware VXLAN TSO and VXLAN UFO offload, GSO can make sure userspace TSO still works without dropping packets. In addition, GSO

[ovs-dev] [PATCH v2 2/5] Enable VXLAN TSO for DPDK datapath

2020-07-01 Thread yang_y_yi
From: Yi Yang Many NICs support VXLAN TSO, which can help improve across-compute-node VM-to-VM performance when the MTU is set to 1500. This patch allows the dpdkvhostuserclient interface and veth/tap interfaces to leverage the NICs' offload capability to maximize across-compute-node TCP

[ovs-dev] [PATCH v1] Enable VXLAN TSO for DPDK datapath

2020-06-01 Thread yang_y_yi
From: Yi Yang Many NICs support VXLAN TSO, which can help improve across-compute-node VM-to-VM performance when the MTU is set to 1500. This patch allows the dpdkvhostuserclient interface and veth/tap interfaces to leverage the NICs' offload capability to maximize across-compute-node TCP

[ovs-dev] [RFC PATCH] Enable VXLAN TSO for dpdk datapath

2020-05-25 Thread yang_y_yi
From: Yi Yang This patch just shows developers how VXLAN TSO works; it isn't ready for merge, and comments are welcome. Signed-off-by: Yi Yang --- lib/dp-packet.h | 33 +++ lib/netdev-dpdk.c | 167 +++-- lib/netdev-linux.c | 20 +++

[ovs-dev] [PATCH v8] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-04-13 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current
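
The mechanism behind this patch is the kernel's TPACKET_V3 memory-mapped ring: the AF_PACKET socket is switched to TPACKET_V3 and an RX ring is mapped into user space so blocks of packets can be walked poll-style instead of paying one system call per packet. A hypothetical setup sketch (block/frame sizes and the retire timeout are illustrative values, not the ones used by the patch):

    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    static int open_tpacket_v3_rx(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            return -1;
        }

        int ver = TPACKET_V3;
        struct tpacket_req3 req;
        memset(&req, 0, sizeof req);
        req.tp_block_size = 1 << 22;      /* 4 MB per block */
        req.tp_block_nr = 8;
        req.tp_frame_size = 2048;
        req.tp_frame_nr = req.tp_block_size / req.tp_frame_size * req.tp_block_nr;
        req.tp_retire_blk_tov = 60;       /* retire a block after 60 ms */

        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof ver) < 0
            || setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof req) < 0) {
            close(fd);
            return -1;
        }

        /* Map the ring; received blocks are then consumed in user space. */
        void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) {
            close(fd);
            return -1;
        }
        return fd;
    }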

[ovs-dev] [PATCH v7] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-03-18 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH v6] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-03-06 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH v5] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-02-24 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH v4] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-02-15 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH v3] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-02-11 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH v2] Use TPACKET_V3 to accelerate veth for userspace datapath

2020-02-07 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). From Linux kernel 3.10 on, TPACKET_V3 has been supported, so all the Linux kernels current

[ovs-dev] [PATCH] Use TPACKET_V1/V2/V3 to accelerate veth for DPDK datapath

2020-01-20 Thread yang_y_yi
From: Yi Yang We can avoid high system call overhead by using TPACKET_V1/V2/V3 and using DPDK-like polling to receive and send packets (note: send still needs to call sendto to trigger the final packet transmission). I can see about a 30% improvement compared to the last recvmmsg optimization if I use

Re: [ovs-dev] [PATCH] Use batch process recv for tap and raw socket in netdev datapath

2019-12-17 Thread yang_y_yi
Hi, William, I used OVS DPDK to test it. You shouldn't add a tap interface to the OVS DPDK bridge if you use vdev to add the tap; virtio_user is just for that, but it won't use this receive function to receive packets. At 2019-12-17 02:55:50, "William Tu" wrote: >On Fri, Dec 06, 2019 at 02:09:24AM

[ovs-dev] [PATCH v2] Use batch process recv for tap and raw socket in netdev datapath

2019-12-17 Thread yang_y_yi
From: Yi Yang The current netdev_linux_rxq_recv_tap and netdev_linux_rxq_recv_sock just receive a single packet, which is very inefficient. Per my test case, which adds two tap ports or veth ports into an OVS bridge (datapath_type=netdev) and uses iperf3 to run a performance test between the two ports (they are
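
The batching idea is the same one recvmmsg() exposes: pull many packets out of the kernel per system call instead of one. A generic, hypothetical sketch of that technique on a raw packet socket (batch and frame sizes are arbitrary; this is not the patch's actual code, and tap fds would use readv-style calls instead):

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define BATCH 32
    #define FRAME 2048

    static int recv_batch(int fd)
    {
        static char bufs[BATCH][FRAME];
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];

        memset(msgs, 0, sizeof msgs);
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len = FRAME;
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        /* One system call for up to BATCH packets; MSG_DONTWAIT returns
         * immediately when nothing is pending. msgs[i].msg_len holds the
         * length of each received packet. */
        return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
    }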

[ovs-dev] [PATCH] Use batch process recv for tap and raw socket in netdev datapath

2019-12-05 Thread yang_y_yi
From: Yi Yang The current netdev_linux_rxq_recv_tap and netdev_linux_rxq_recv_sock just receive a single packet, which is very inefficient. Per my test case, which adds two tap ports or veth ports into an OVS bridge (datapath_type=netdev) and uses iperf3 to run a performance test between the two ports (they are