Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree

2019-11-05 Thread Tonghao Zhang
On Tue, Nov 5, 2019 at 6:14 PM shuangyang qian  wrote:
>
>
> cc to ovs-discuss
> -- Forwarded message -
> From: shuangyang qian 
> Date: Tue, Nov 5, 2019, 6:12 PM
> Subject: Re: [ovs-discuss] the network performance is not normal when using 
> openvswitch.ko built from the ovs tree
> To: Tonghao Zhang 
>
>
> Thank you for your reply. I changed my kernel to the same version as yours and 
> followed the steps you provided, and got the same result I mentioned at first. 
> The process is shown below.
> on node1:
> # ovs-vsctl show
> 4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671
>     Bridge br-int
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "gnv0"
>             Interface "gnv0"
>                 type: geneve
>                 options: {csum="true", key="100", remote_ip="10.18.124.2"}
>         Port "veth-vm1"
>             Interface "veth-vm1"
>     ovs_version: "2.12.0"
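For reference, a geneve port like "gnv0" above can be created with a command
along these lines, using the values from the output:
# ovs-vsctl add-port br-int gnv0 -- set interface gnv0 type=geneve \
    options:remote_ip=10.18.124.2 options:key=100 options:csum=true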
> # ip netns exec vm1 ip a
> 1: lo:  mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
> group default qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
Why is this netdev in your netns? Are you running OVS inside the netns? OVS
should be running on the host.
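One way to check is to compare the network namespace of ovs-vswitchd with the
host's (pid 1); if the two readlink results differ, OVS is not running in the
host netns:
# readlink /proc/$(pidof ovs-vswitchd)/ns/net
# readlink /proc/1/ns/net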

> 3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group 
> default qlen 1000
> link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff
?
> 4: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default 
> qlen 1000
> link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 
> 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> 5: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default 
> qlen 1000
?
> link/tunnel6 :: brd ::
> 19: vm1-eth0@if18:  mtu 1500 qdisc noqueue 
> state UP group default qlen 1000
> link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 192.168.100.10/24 scope global vm1-eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::304b:51ff:fee2:2bf4/64 scope link
>valid_lft forever preferred_lft forever
Please set the vm1-eth0 MTU to 1450.
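1450 leaves room for the geneve encapsulation on a 1500-byte underlay MTU:
20 (outer IPv4) + 8 (UDP) + 8 (geneve header) + 14 (inner Ethernet) = 50 bytes
of overhead. For example:
# ip netns exec vm1 ip link set dev vm1-eth0 mtu 1450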
> on node2:
> # ovs-vsctl show
> 53df6c21-c210-4c2c-a7ab-b1edb0df4a31
>     Bridge br-int
>         Port "veth-vm2"
>             Interface "veth-vm2"
>         Port "gnv0"
>             Interface "gnv0"
>                 type: geneve
>                 options: {csum="true", key="100", remote_ip="10.18.124.1"}
>         Port br-int
>             Interface br-int
>                 type: internal
>     ovs_version: "2.12.0"
> # ip netns exec vm2 ip a
> 1: lo:  mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN 
> group default qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group 
> default qlen 1000
> link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff
> 4: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default 
> qlen 1000
> link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 
> 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> 5: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default 
> qlen 1000
> link/tunnel6 :: brd ::
> 11: vm2-eth0@if10:  mtu 1500 qdisc noqueue 
> state UP group default qlen 1000
> link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 192.168.100.20/24 scope global vm2-eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::ece4:3eff:fe16:6f66/64 scope link
>valid_lft forever preferred_lft forever
>
> In network namespace vm1 on node1 I start iperf3 as the server:
> # ip netns exec vm1 iperf3 -s
>
> In network namespace vm2 on node2 I start iperf3 as the client:
> # ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
> Connecting to host 192.168.100.10, port 5201
> [  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201
> [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> [  4]   0.00-2.00   sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes
> [  4]   2.00-4.00   sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes
> [  4]   4.00-6.00   sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes
> [  4]   6.00-8.00   sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes
> [  4]   8.00-10.00  sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval
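A quick way to confirm that full-MTU packets cross the tunnel unfragmented is
a ping with the DF bit set; with a 1450-byte MTU the largest ICMP payload is
1450 - 20 (IP header) - 8 (ICMP header) = 1422 bytes:
# ip netns exec vm2 ping -M do -s 1422 -c 3 192.168.100.10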

Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree

2019-11-05 Thread Tonghao Zhang
On Mon, Nov 4, 2019 at 5:14 PM shuangyang qian  wrote:
> [quoted original message omitted; it appears in full below]

[ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree

2019-11-04 Thread shuangyang qian
Hi:
I built rpm packages for ovs and ovn following this document:
http://docs.openvswitch.org/en/latest/intro/install/fedora/ . To use the
kernel module from the ovs tree, I configured with the command: ./configure
--with-linux=/lib/modules/$(uname -r)/build .
Then I installed the rpm packages.
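For context, the full build-and-package flow from that document looks roughly
like this (the rpm-fedora targets are the ones the guide describes):
# ./boot.sh
# ./configure --with-linux=/lib/modules/$(uname -r)/build
# make rpm-fedora          # userspace RPMs
# make rpm-fedora-kmod     # kernel datapath module RPM (openvswitch-kmod)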
When it finished, I checked the openvswitch.ko:
# lsmod |  grep openvswitch
openvswitch   291276  0
tunnel6 3115  1 openvswitch
nf_defrag_ipv6 25957  2 nf_conntrack_ipv6,openvswitch
nf_nat_ipv6 6459  2 openvswitch,ip6table_nat
nf_nat_ipv4 6187  2 openvswitch,iptable_nat
nf_nat 18080  5
xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack  102766  10
ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
libcrc32c   1388  3 ip_vs,openvswitch,xfs
ipv6  400397  92
ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge
# modinfo openvswitch
filename:   /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
alias:  net-pf-16-proto-16-family-ovs_ct_limit
alias:  net-pf-16-proto-16-family-ovs_meter
alias:  net-pf-16-proto-16-family-ovs_packet
alias:  net-pf-16-proto-16-family-ovs_flow
alias:  net-pf-16-proto-16-family-ovs_vport
alias:  net-pf-16-proto-16-family-ovs_datapath
version:2.11.2
license:GPL
description:Open vSwitch switching datapath
srcversion: 9DDA327F9DD46B9813628A4
depends:    nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
vermagic:   4.9.18-19080201 SMP mod_unload modversions
parm:   udp_port:Destination UDP port (ushort)
# rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
openvswitch-kmod-2.11.2-1.el7.x86_64

Then I started to build my network topology. I have two nodes, with network
namespace vm1 on node1 and network namespace vm2 on node2. vm1's veth peer
veth-vm1 is on node1's br-int, and vm2's veth peer veth-vm2 is on node2's
br-int. In the logical layer there is one logical switch, test-subnet, with
two logical switch ports, node1 and node2, on it, like this:
# ovn-nbctl show
switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)
    port node2
        addresses: ["00:00:00:00:00:02 192.168.100.20"]
    port node1
        addresses: ["00:00:00:00:00:01 192.168.100.10"]
on node1:
# ovs-vsctl show
5180f74a-1379-49af-b265-4403bd0d82d8
    Bridge br-int
        fail_mode: secure
        Port "ovn-431b9e-0"
            Interface "ovn-431b9e-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.18.124.2"}
        Port br-int
            Interface br-int
                type: internal
        Port "veth-vm1"
            Interface "veth-vm1"
    ovs_version: "2.11.2"
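For completeness, the namespace and veth plumbing on node1 can be set up
roughly as follows (node2 is symmetric); external_ids:iface-id must match the
logical switch port name so that ovn-controller binds the port:
# ip netns add vm1
# ip link add veth-vm1 type veth peer name vm1-eth0
# ip link set vm1-eth0 netns vm1
# ip netns exec vm1 ip link set vm1-eth0 address 00:00:00:00:00:01
# ip netns exec vm1 ip addr add 192.168.100.10/24 dev vm1-eth0
# ip netns exec vm1 ip link set vm1-eth0 up
# ip link set veth-vm1 up
# ovs-vsctl add-port br-int veth-vm1 -- set interface veth-vm1 \
      external_ids:iface-id=node1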
# ip netns exec vm1 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
14: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
15: erspan0@NONE:  mtu 1450 qdisc noop state DOWN
group default qlen 1000
link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff
16: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default
qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
17: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default
qlen 1
link/tunnel6 :: brd ::
18: vm1-eth0@if17:  mtu 1400 qdisc noqueue
state UP group default qlen 1000
link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.10/24 scope global vm1-eth0
   valid_lft forever preferred_lft forever
inet6 fe80::200:ff:fe00:1/64 scope link
   valid_lft forever preferred_lft forever


on node2:
# ovs-vsctl show
011332d0-78bc-47f7-be3c-fab0beb08e28
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "ovn-c655f8-0"
            Interface "ovn-c655f8-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.18.124.1"}
        Port "veth-vm2"
            Interface "veth-vm2"
    ovs_version: "2.11.2"
# ip netns exec vm2 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
10: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: erspan0@NONE:  mtu 14