Hi, folks,
After I added QoS, I can see it with "ovn-nbctl list qos":
"""
_uuid        : 690ecdf9-c6ae-41ef-a4ae-0a7a7c7c33ce
action       : {}
bandwidth    : {rate=100}
direction    : to-lport
external_ids : {}
match        : "outport ==
At 2021-02-05 01:38:14, "Flavio Leitner" wrote:
>
>
>Hi Yi,
>
>Again, sorry for the delay in reviewing the patch.
>
>The patch is using the outer length fields from DPDK,
>which seems to be a problem in OVS because most of the
>packet transformation functions are not aware of them.
>
>Therefore, after
From: Yi Yang
Many NICs support VXLAN TSO, which can help
improve cross-compute-node VM-to-VM performance
when the MTU is set to 1500.
This patch allows the dpdkvhostuserclient interface
and veth/tap interfaces to leverage the NIC's offload
capability to maximize cross-compute-node TCP
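For the mechanics, a minimal sketch, assuming the DPDK 19.x-era offload API, of how a datapath can probe a NIC for VXLAN TSO and mark an encapsulated mbuf for it; the real netdev-dpdk.c changes in the patch are more involved:

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Probe whether a port can hardware-segment VXLAN-encapsulated TCP. */
static bool
port_supports_vxlan_tso(uint16_t port_id)
{
    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) != 0) {
        return false;
    }
    return (info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) != 0;
}

/* Mark an encapsulated mbuf so the NIC segments the inner TCP payload.
 * The caller must already have filled outer_l2_len/outer_l3_len and
 * l2_len/l3_len/l4_len for the outer VXLAN and inner headers. */
static void
request_vxlan_tso(struct rte_mbuf *m, uint16_t inner_mss)
{
    m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_TCP_SEG |
                   PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM |
                   PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
    m->tso_segsz = inner_mss;
}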
Thanks a lot, Flavio; please see my inline comments for more discussion.
At 2020-10-31 01:55:57, "Flavio Leitner" wrote:
>
>Hi Yi,
>
>Thanks for the patch, and sorry for the delay in reviewing it.
>See my comments inline.
>
>Thanks,
>fbl
>
>
>On Fri, Aug 07, 2020 at 06:56:45PM +0800,
From: Yi Yang
In some use cases, the zone is used to differentiate
conntrack state tables, so the zone should also be
synchronized if it is set.
Signed-off-by: Yi Yang
---
include/network.h | 1 +
src/build.c       | 3 +++
src/parse.c       | 5 +++++
3 files changed, 9 insertions(+)
diff
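To make the build/parse symmetry concrete, a hedged sketch; sync_msg_put_u16/sync_msg_get_u16 and SYNC_ATTR_ZONE are hypothetical stand-ins for the project's own message primitives, while the nfct_* calls and ATTR_ZONE are real libnetfilter_conntrack API:

#include <stdint.h>
#include <libnetfilter_conntrack/libnetfilter_conntrack.h>

/* Hypothetical message helpers standing in for the project's own
 * build/parse primitives. */
void sync_msg_put_u16(void *msg, int attr, uint16_t value);
int  sync_msg_get_u16(const void *msg, int attr, uint16_t *value);

#define SYNC_ATTR_ZONE 42           /* hypothetical attribute id */

/* Build side: ship the zone only when the entry actually has one. */
static void
build_zone(const struct nf_conntrack *ct, void *msg)
{
    if (nfct_attr_is_set(ct, ATTR_ZONE)) {
        sync_msg_put_u16(msg, SYNC_ATTR_ZONE,
                         nfct_get_attr_u16(ct, ATTR_ZONE));
    }
}

/* Parse side: restore the zone on the reconstructed entry. */
static void
parse_zone(const void *msg, struct nf_conntrack *ct)
{
    uint16_t zone;

    if (sync_msg_get_u16(msg, SYNC_ATTR_ZONE, &zone) == 0) {
        nfct_set_attr_u16(ct, ATTR_ZONE, zone);
    }
}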
From: Yi Yang
shinfo is used to store the reference counter and free callback
of an external buffer, but it is stored in the mbuf itself if the
mbuf has tailroom for it.
This is wrong because the mbuf (and its data) can be freed
before the external buffer, for example:
pkt2 = rte_pktmbuf_alloc(mp);
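The example above is cut off, but the safe pattern it contrasts with keeps shinfo inside the external buffer itself, so it cannot outlive the data it refcounts; a sketch using the standard DPDK helper:

#include <rte_mbuf.h>

/* Attach an external buffer, letting the helper carve shinfo out of
 * the tail of 'buf' rather than out of any mbuf's tailroom. */
static struct rte_mbuf *
attach_ext(struct rte_mempool *mp, void *buf, rte_iova_t iova,
           uint16_t buf_len, rte_mbuf_extbuf_free_callback_t free_cb)
{
    struct rte_mbuf_ext_shared_info *shinfo;
    struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);

    if (!pkt) {
        return NULL;
    }
    shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
                                                free_cb, NULL);
    if (!shinfo) {
        rte_pktmbuf_free(pkt);
        return NULL;
    }
    rte_pktmbuf_attach_extbuf(pkt, buf, iova, buf_len, shinfo);
    return pkt;
}

If shinfo instead lived in an attaching mbuf's tailroom, freeing that mbuf before the last reference to the buffer would leave shinfo dangling, which is exactly the lifetime bug described above.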
From: Yi Yang
iperf3 UDP performance in the veth-to-veth case is
very bad because of heavy packet loss. The root cause
is that rmem_default and wmem_default are just 212992,
but the iperf3 UDP test uses an 8K UDP payload, which
results in many UDP fragments when the MTU is 1500;
one 8K UDP send
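For scale: one 8 KB datagram becomes six IP fragments at MTU 1500, and losing any single fragment discards the whole datagram, so 212992-byte buffers overflow quickly at iperf3 rates. The fix the patch aims at is raising net.core.rmem_default/wmem_default system-wide; the per-socket equivalent looks like this sketch:

#include <stdio.h>
#include <sys/socket.h>

/* Enlarge a socket's receive and send buffers; the kernel caps the
 * request at net.core.rmem_max / net.core.wmem_max. */
static int
enlarge_socket_buffers(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes) < 0) {
        perror("setsockopt");
        return -1;
    }
    return 0;
}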
From: Yi Yang
Currently OVS can't get the link state, MTU, MAC, driver, etc.
of a tap interface when it is in a network namespace; with the
netns option and netns helper functions, this information can
be obtained.
This patch fixes all these issues and makes sure a tap interface
in a network namespace can get the same info as it
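A minimal sketch of the helper pattern involved, assuming iproute2-style named namespaces under /var/run/netns; the patch's actual helpers may differ:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Enter a named network namespace, saving the current one so the
 * caller can switch back after its ioctl/netlink queries. */
static int
enter_netns(const char *name, int *saved_fd)
{
    char path[128];
    int fd;

    snprintf(path, sizeof path, "/var/run/netns/%s", name);
    *saved_fd = open("/proc/self/ns/net", O_RDONLY);
    if (*saved_fd < 0) {
        return -1;
    }
    fd = open(path, O_RDONLY);
    if (fd < 0 || setns(fd, CLONE_NEWNET) < 0) {
        if (fd >= 0) {
            close(fd);
        }
        close(*saved_fd);
        return -1;
    }
    close(fd);
    return 0;
}

static void
leave_netns(int saved_fd)
{
    setns(saved_fd, CLONE_NEWNET);
    close(saved_fd);
}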
From: Yi Yang
The OVS userspace datapath can't support tap interface statistics
and status updates, so users can't get this information via
"ovs-vsctl list interface tap1"; the root cause of this issue is
that OVS doesn't know the network namespace of the tap interface.
This patch series fixes this issue and
From: Yi Yang
In the userspace datapath, "ovs-vsctl list interface" can't
get interface statistics and there are many WARN logs; we can
make it work normally if it knows the correct network
namespace. This patch enables the netns option for tap
interfaces; it is the prerequisite for interface statistics and
From: Yi Yang
After a tap interface is moved to a network namespace,
"ovs-vsctl list interface tapXXX" can't get statistics
info for the tap interface; the root cause is that OVS
still reads statistics in the root namespace.
With the netns option's help, OVS can get statistics
in the tap interface's netns.
This
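Continuing the enter_netns()/leave_netns() sketch from earlier, a hedged illustration of reading one counter from inside the interface's namespace via sysfs; the real implementation more likely uses netlink or ioctl:

#include <stdio.h>

int enter_netns(const char *name, int *saved_fd);   /* earlier sketch */
void leave_netns(int saved_fd);

/* Read a single statistics counter for 'ifname' inside its netns. */
static long long
read_counter_in_netns(const char *netns, const char *ifname,
                      const char *counter)
{
    char path[256];
    long long value = -1;
    int saved_fd;
    FILE *f;

    if (enter_netns(netns, &saved_fd) < 0) {
        return -1;
    }
    snprintf(path, sizeof path,
             "/sys/class/net/%s/statistics/%s", ifname, counter);
    f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%lld", &value) != 1) {
            value = -1;
        }
        fclose(f);
    }
    leave_netns(saved_fd);
    return value;
}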
From: Yi Yang
Currently all interfaces are handled by the single
ovs-vswitchd thread in the userspace datapath. This is
unscalable, especially in the OpenStack case, where many
tap and veth interfaces are attached to the bridge to
handle routing and floating IPs.
But ovs-netdev can't be handled by a pmd
From: Yi Yang
In the OpenStack and OVS DPDK scenario, many tap
interfaces and veth interfaces are added to the OVS
bridge, but only the single ovs-vswitchd thread handles
them; this results in very bad performance for floating
IP and L3 routing in DVR mode.
This patch series is just to
From: Yi Yang
GSO (Generic Segmentation Offload) can segment large UDP
and TCP packets into small packets per the MTU of the
destination. Especially when the physical NIC can't do
hardware-offloaded VXLAN TSO and VXLAN UFO, GSO can make
sure userspace TSO still works rather than dropping packets.
In addition, GSO
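The DPDK GSO library is a plausible vehicle for this; a hedged sketch of segmenting one oversized mbuf in software, with mempool creation and header preparation omitted and the pre-21.11 flag names assumed:

#include <rte_gso.h>
#include <rte_mbuf.h>

#define GSO_MAX_SEGS 64

/* Software-segment one oversized mbuf; 'pkt' must carry the TSO
 * request flags and header lengths set by the sender. */
static int
software_segment(struct rte_mbuf *pkt, uint16_t gso_size,
                 struct rte_mempool *direct_pool,
                 struct rte_mempool *indirect_pool,
                 struct rte_mbuf **out)
{
    struct rte_gso_ctx ctx = {
        .direct_pool = direct_pool,
        .indirect_pool = indirect_pool,
        .gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO,
        .gso_size = gso_size,       /* target size of each output segment */
        .flag = 0,
    };

    return rte_gso_segment(pkt, &ctx, out, GSO_MAX_SEGS);
}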
From: Yi Yang
Many NICs can support VXLAN TSO, and this can improve
VM-to-VM TCP performance; but for UDP, most NICs
can't offload UFO, so GSO is very necessary for UDP
when userspace TSO is enabled. GSO can also do
VXLAN TSO if the NIC can't support it. GRO is necessary
if TSO and UFO are enabled; it
From: Yi Yang
GRO (Generic Receive Offload) can help improve performance
when TSO (TCP Segmentation Offload) or VXLAN TSO is enabled
on the transmit side; it avoids overhead in the OVS DPDK
datapath and in the vhost enqueue toward the VM by merging
many small packets into large packets (65535 bytes at most) once it
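Assuming the DPDK GRO library in its lightweight mode, a hedged sketch of merging a received burst before it is pushed toward vhost:

#include <rte_gro.h>
#include <rte_mbuf.h>

/* Merge a burst in place; returns the (possibly smaller) burst size.
 * Lightweight mode only merges packets within this single burst. */
static uint16_t
gro_merge_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
    struct rte_gro_param param = {
        .gro_types = RTE_GRO_TCP_IPV4 | RTE_GRO_IPV4_VXLAN_TCP_IPV4,
        .max_flow_num = 32,          /* tuning knobs, not magic values */
        .max_item_per_flow = 32,
    };

    return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
}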
From: Yi Yang
With GSO and GRO enabled, OVS DPDK can do GSO in software
if the NIC can't support TSO or VXLAN TSO hardware offload.
Signed-off-by: Yi Yang
---
Documentation/topics/userspace-tso.rst | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git
From: Yi Yang
For a multi-segment mbuf, pkt_len isn't equal to data_len:
data_len is the data_len of the first segment, while pkt_len
is the sum of the data_len of all segments, so for such
packets dp_packet_set_size shouldn't change data_len.
Signed-off-by: Yi Yang
---
lib/dp-packet.h | 4 +++-
1 file changed, 3
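A minimal sketch of the rule the patch states (not the literal diff): only a single-segment mbuf may have data_len overwritten with the total size:

/* Sketch against OVS's DPDK build, where struct dp_packet embeds an
 * rte_mbuf as 'mbuf'. */
static inline void
dp_packet_set_size(struct dp_packet *b, uint32_t v)
{
    /* For a multi-segment mbuf, data_len describes only the first
     * segment, so it must not be overwritten with the total size. */
    if (b->mbuf.nb_segs <= 1) {
        b->mbuf.data_len = (uint16_t) v;
    }
    b->mbuf.pkt_len = v;
}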
From: Yi Yang
This patch just shows developers how VXLAN TSO works;
it isn't ready for merge. Comments are welcome.
Signed-off-by: Yi Yang
---
lib/dp-packet.h    |  33 +++
lib/netdev-dpdk.c  | 167 +++--
lib/netdev-linux.c |  20 +++
From: Yi Yang
We can avoid high system call overhead by using TPACKET_V3
and DPDK-like polling to receive and send packets (note: send
still needs to call sendto to trigger the final packet transmission).
TPACKET_V3 has been supported since Linux kernel 3.10,
so all current Linux kernels
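For readers unfamiliar with the mechanism, a minimal sketch of the V3 receive-ring setup; block and frame sizes are illustrative, not the patch's values:

#include <linux/if_packet.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>

/* Set up a TPACKET_V3 RX ring on an AF_PACKET socket so packets are
 * polled from a shared memory ring instead of one recv() per packet.
 * Returns the mapped ring; the caller checks for MAP_FAILED. */
static void *
setup_rx_ring(int fd, struct tpacket_req3 *req)
{
    int ver = TPACKET_V3;

    if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof ver) < 0) {
        return NULL;
    }
    memset(req, 0, sizeof *req);
    req->tp_block_size = 1 << 20;       /* 1 MB per block */
    req->tp_block_nr = 64;
    req->tp_frame_size = 1 << 11;       /* advisory under V3 */
    req->tp_frame_nr = (req->tp_block_size / req->tp_frame_size)
                       * req->tp_block_nr;
    req->tp_retire_blk_tov = 60;        /* ms before a block is retired */
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, req, sizeof *req) < 0) {
        return NULL;
    }
    return mmap(NULL, (size_t) req->tp_block_size * req->tp_block_nr,
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}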
From: Yi Yang
We can avoid high system call overhead by using TPACKET_V1/V2/V3
and DPDK-like polling to receive and send packets (note: send
still needs to call sendto to trigger the final packet transmission).
I can see about a 30% improvement over the previous recvmmsg
optimization if I use
Hi, William
I used OVS DPDK to test it. You shouldn't add a tap interface to an OVS DPDK
bridge if you use a vdev to add the tap; virtio_user is meant exactly for that,
but then it won't use this receive function to receive packets.
At 2019-12-17 02:55:50, "William Tu" wrote:
>On Fri, Dec 06, 2019 at 02:09:24AM
From: Yi Yang
The current netdev_linux_rxq_recv_tap and netdev_linux_rxq_recv_sock
just receive a single packet, which is very inefficient. Per my test
case, which adds two tap or veth ports to an OVS bridge
(datapath_type=netdev) and uses iperf3 to run a performance test
between the two ports (they are
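The batching is very likely recvmmsg()-based (a later message in this thread refers to "the last recvmmsg optimization"); a minimal sketch of receiving a burst with one system call:

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define RX_BATCH 32

/* Receive up to RX_BATCH packets with a single system call instead
 * of one recv() per packet; buffers are provided by the caller. */
static int
batch_recv(int fd, char bufs[RX_BATCH][2048])
{
    struct mmsghdr msgs[RX_BATCH];
    struct iovec iovs[RX_BATCH];

    memset(msgs, 0, sizeof msgs);
    for (int i = 0; i < RX_BATCH; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len = sizeof bufs[i];
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    /* MSG_DONTWAIT: return immediately with whatever is queued. */
    return recvmmsg(fd, msgs, RX_BATCH, MSG_DONTWAIT, NULL);
}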