cc to ovs-discuss
---------- Forwarded message ---------
From: shuangyang qian <qsyq...@gmail.com>
Date: Tue, Nov 5, 2019, 6:12 PM
Subject: Re: [ovs-discuss] the network performance is not normal when using
openvswitch.ko built from the OVS tree
To: Tonghao Zhang <xiangxia.m....@gmail.com>


Thank you for your reply. I changed my kernel version to match yours and
repeated the steps you provided, but I get the same result that I mentioned
at first. The process is shown below.
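For completeness, the per-node wiring was set up roughly like this (a sketch
reconstructed from the ovs-vsctl and ip output below; the exact commands were
not captured). On node1:

# ovs-vsctl add-br br-int
# ovs-vsctl add-port br-int gnv0 -- set Interface gnv0 type=geneve \
    options:csum=true options:key=100 options:remote_ip=10.18.124.2
# ip netns add vm1
# ip link add vm1-eth0 type veth peer name veth-vm1
# ip link set vm1-eth0 netns vm1
# ip netns exec vm1 ip addr add 192.168.100.10/24 dev vm1-eth0
# ip netns exec vm1 ip link set vm1-eth0 up
# ip link set veth-vm1 up
# ovs-vsctl add-port br-int veth-vm1

node2 mirrors this with namespace vm2, address 192.168.100.20/24, and
remote_ip=10.18.124.1.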
on node1:
# ovs-vsctl show
4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "gnv0"
            Interface "gnv0"
                type: geneve
                options: {csum="true", key="100", remote_ip="10.18.124.2"}
        Port "veth-vm1"
            Interface "veth-vm1"
    ovs_version: "2.12.0"
# ip netns exec vm1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group
default qlen 1000
    link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default
qlen 1000
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default
qlen 1000
    link/tunnel6 :: brd ::
19: vm1-eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
    link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.100.10/24 scope global vm1-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::304b:51ff:fee2:2bf4/64 scope link
       valid_lft forever preferred_lft forever

on node2:
# ovs-vsctl show
53df6c21-c210-4c2c-a7ab-b1edb0df4a31
    Bridge br-int
        Port "veth-vm2"
            Interface "veth-vm2"
        Port "gnv0"
            Interface "gnv0"
                type: geneve
                options: {csum="true", key="100", remote_ip="10.18.124.1"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.12.0"
# ip netns exec vm2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group
default qlen 1000
    link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default
qlen 1000
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default
qlen 1000
    link/tunnel6 :: brd ::
11: vm2-eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
    link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.100.20/24 scope global vm2-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ece4:3eff:fe16:6f66/64 scope link
       valid_lft forever preferred_lft forever

In network namespace vm1 on node1 I start iperf3 as the server:
# ip netns exec vm1 iperf3 -s

In network namespace vm2 on node2 I start iperf3 as the client:
# ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
Connecting to host 192.168.100.10, port 5201
[  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-2.00   sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes

[  4]   2.00-4.00   sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes

[  4]   4.00-6.00   sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes

[  4]   6.00-8.00   sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes

[  4]   8.00-10.00  sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec  154             sender
[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec                  receiver

iperf Done.

The openvswitch.ko on both nodes is:
# modinfo openvswitch
filename:
/lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko
alias:          net-pf-16-proto-16-family-ovs_ct_limit
alias:          net-pf-16-proto-16-family-ovs_meter
alias:          net-pf-16-proto-16-family-ovs_packet
alias:          net-pf-16-proto-16-family-ovs_flow
alias:          net-pf-16-proto-16-family-ovs_vport
alias:          net-pf-16-proto-16-family-ovs_datapath
version:        2.12.0
license:        GPL
description:    Open vSwitch switching datapath
retpoline:      Y
rhelversion:    7.6
srcversion:     764C8BD051B3182DE71CF29
depends:
 nf_conntrack,tunnel6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions
parm:           udp_port:Destination UDP port (ushort)

Also, I uninstalled the openvswitch-kmod RPM package and used the
openvswitch.ko shipped with the Linux kernel.
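The switchover was roughly as follows (a sketch; the module reload steps may
differ slightly on other setups):

# systemctl stop openvswitch
# rpm -e openvswitch-kmod
# modprobe -r openvswitch
# depmod -a
# systemctl start openvswitch

After that, modinfo reports the in-kernel module: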
# modinfo openvswitch
filename:
/lib/modules/3.10.0-957.el7.x86_64/kernel/net/openvswitch/openvswitch.ko.xz
alias:          net-pf-16-proto-16-family-ovs_packet
alias:          net-pf-16-proto-16-family-ovs_flow
alias:          net-pf-16-proto-16-family-ovs_vport
alias:          net-pf-16-proto-16-family-ovs_datapath
license:        GPL
description:    Open vSwitch switching datapath
retpoline:      Y
rhelversion:    7.6
srcversion:     6FE05FC439FA9CE7E264684
depends:
 nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6
intree:         Y
vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        B7:0D:CF:0D:F2:D9:B7:F2:91:59:24:82:49:FD:6F:E8:7B:78:14:27
sig_hashalgo:   sha256

and the bandwidth I get is:
# ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
Connecting to host 192.168.100.10, port 5201
[  4] local 192.168.100.20 port 35270 connected to 192.168.100.10 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-2.00   sec  1.61 GBytes  6.92 Gbits/sec  2793    877 KBytes

[  4]   2.00-4.00   sec  1.56 GBytes  6.70 Gbits/sec  7773    907 KBytes

[  4]   4.00-6.00   sec  1.78 GBytes  7.62 Gbits/sec  4387    952 KBytes

[  4]   6.00-8.00   sec  1.66 GBytes  7.11 Gbits/sec  9365    815 KBytes

[  4]   8.00-10.00  sec  1.68 GBytes  7.20 Gbits/sec  2421    554 KBytes

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  8.28 GBytes  7.11 Gbits/sec  26739             sender
[  4]   0.00-10.00  sec  8.28 GBytes  7.11 Gbits/sec                   receiver

iperf Done.

So the performance is still abnormal when using the openvswitch.ko installed
from the openvswitch-kmod RPM package.

Could you show me your build process for the openvswitch-*.rpm packages, or
give me a link? Or describe how you install OVS?
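For reference, the build steps I followed from that document were approximately
(a sketch from memory, so options and paths may not be exact):

# ./boot.sh
# ./configure --with-linux=/lib/modules/$(uname -r)/build
# make rpm-fedora
# make rpm-fedora-kmod
# yum localinstall rpm/rpmbuild/RPMS/x86_64/openvswitch-*.rpm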

I don't know where the problem is.

Thanks.

Tonghao Zhang <xiangxia.m....@gmail.com> wrote on Tue, Nov 5, 2019 at 3:59 PM:

> On Mon, Nov 4, 2019 at 5:14 PM shuangyang qian <qsyq...@gmail.com> wrote:
> >
> > Hi:
> > I built RPM packages for OVS and OVN following this document:
> > http://docs.openvswitch.org/en/latest/intro/install/fedora/ . To use the
> > kernel module from the OVS tree, I configured with: ./configure
> > --with-linux=/lib/modules/$(uname -r)/build
> > Then I installed the RPM packages.
> > When that finished, I checked the openvswitch.ko:
> > # lsmod |  grep openvswitch
> > openvswitch           291276  0
> > tunnel6                 3115  1 openvswitch
> > nf_defrag_ipv6         25957  2 nf_conntrack_ipv6,openvswitch
> > nf_nat_ipv6             6459  2 openvswitch,ip6table_nat
> > nf_nat_ipv4             6187  2 openvswitch,iptable_nat
> > nf_nat                 18080  5
> xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
> > nf_conntrack          102766  10
> ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
> > libcrc32c               1388  3 ip_vs,openvswitch,xfs
> > ipv6                  400397  92
> ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge
> > # modinfo openvswitch
> > filename:
>  /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
> > alias:          net-pf-16-proto-16-family-ovs_ct_limit
> > alias:          net-pf-16-proto-16-family-ovs_meter
> > alias:          net-pf-16-proto-16-family-ovs_packet
> > alias:          net-pf-16-proto-16-family-ovs_flow
> > alias:          net-pf-16-proto-16-family-ovs_vport
> > alias:          net-pf-16-proto-16-family-ovs_datapath
> > version:        2.11.2
> > license:        GPL
> > description:    Open vSwitch switching datapath
> > srcversion:     9DDA327F9DD46B9813628A4
> > depends:
> nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
> > vermagic:       4.9.18-19080201 SMP mod_unload modversions
> > parm:           udp_port:Destination UDP port (ushort)
> > # rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
> > openvswitch-kmod-2.11.2-1.el7.x86_64
> >
> > Then I started building my network topology. I have two nodes, with network
> > namespace vm1 on node1 and network namespace vm2 on node2. vm1's veth peer
> > veth-vm1 is on node1's br-int, and vm2's veth peer veth-vm2 is on node2's
> > br-int. In the logical layer there is one logical switch, test-subnet, with
> > two logical switch ports, node1 and node2, on it. Like this:
> > # ovn-nbctl show
> > switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)
> >     port node2
> >         addresses: ["00:00:00:00:00:02 192.168.100.20"]
> >     port node1
> >         addresses: ["00:00:00:00:00:01 192.168.100.10"]
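> > (The logical topology above was created with commands along these lines; a
> > rough sketch, the exact invocations were not recorded:)
> > # ovn-nbctl ls-add test-subnet
> > # ovn-nbctl lsp-add test-subnet node1
> > # ovn-nbctl lsp-set-addresses node1 "00:00:00:00:00:01 192.168.100.10"
> > # ovn-nbctl lsp-add test-subnet node2
> > # ovn-nbctl lsp-set-addresses node2 "00:00:00:00:00:02 192.168.100.20"
> > and each veth was bound to its logical port, e.g. on node1:
> > # ovs-vsctl set Interface veth-vm1 external_ids:iface-id=node1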
> > on node1:
> > # ovs-vsctl show
> > 5180f74a-1379-49af-b265-4403bd0d82d8
> >     Bridge br-int
> >         fail_mode: secure
> >         Port "ovn-431b9e-0"
> >             Interface "ovn-431b9e-0"
> >                 type: geneve
> >                 options: {csum="true", key=flow, remote_ip="10.18.124.2"}
> >         Port br-int
> >             Interface br-int
> >                 type: internal
> >         Port "veth-vm1"
> >             Interface "veth-vm1"
> >     ovs_version: "2.11.2"
> > # ip netns exec vm1 ip a
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> >     inet6 ::1/128 scope host
> >        valid_lft forever preferred_lft forever
> > 14: ovs-gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state
> DOWN group default qlen 1000
> >     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > 15: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN
> group default qlen 1000
> >     link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff
> > 16: ovs-ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group
> default qlen 1
> >     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
> 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> > 17: ovs-ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group
> default qlen 1
> >     link/tunnel6 :: brd ::
> > 18: vm1-eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc
> noqueue state UP group default qlen 1000
> >     link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 192.168.100.10/24 scope global vm1-eth0
> >        valid_lft forever preferred_lft forever
> >     inet6 fe80::200:ff:fe00:1/64 scope link
> >        valid_lft forever preferred_lft forever
> >
> >
> > on node2:
> > # ovs-vsctl show
> > 011332d0-78bc-47f7-be3c-fab0beb08e28
> >     Bridge br-int
> >         fail_mode: secure
> >         Port br-int
> >             Interface br-int
> >                 type: internal
> >         Port "ovn-c655f8-0"
> >             Interface "ovn-c655f8-0"
> >                 type: geneve
> >                 options: {csum="true", key=flow, remote_ip="10.18.124.1"}
> >         Port "veth-vm2"
> >             Interface "veth-vm2"
> >     ovs_version: "2.11.2"
> > #ip netns exec vm2 ip a
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> >     inet6 ::1/128 scope host
> >        valid_lft forever preferred_lft forever
> > 10: ovs-gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state
> DOWN group default qlen 1000
> >     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > 11: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN
> group default qlen 1000
> >     link/ether 4a:1d:ca:65:e3:ca brd ff:ff:ff:ff:ff:ff
> > 12: ovs-ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group
> default qlen 1
> >     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
> 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> > 13: ovs-ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group
> default qlen 1
> >     link/tunnel6 :: brd ::
> > 17: vm2-eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc
> noqueue state UP group default qlen 1000
> >     link/ether 00:00:00:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 192.168.100.20/24 scope global vm2-eth0
> >        valid_lft forever preferred_lft forever
> >     inet6 fe80::200:ff:fe00:2/64 scope link
> >        valid_lft forever preferred_lft forever
> >
> > Then I started using iperf to check the network performance. By the way, I
> > use the geneve protocol between the two nodes; ovn-sbctl show gives:
> > # ovn-sbctl show
> > Chassis "c655f877-b7ed-4bb5-a047-23521426d541"
> >     hostname: "node1.com"
> >     Encap geneve
> >         ip: "10.18.124.1"
> >         options: {csum="true"}
> >     Port_Binding "node1"
> > Chassis "431b9efb-b464-42a1-a6dd-7fc6e0176137"
> >     hostname: "node2.com"
> >     Encap geneve
> >         ip: "10.18.124.2"
> >         options: {csum="true"}
> >     Port_Binding "node2"
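> > (The chassis encapsulation above was presumably configured on each node via
> > ovn-controller's external_ids, roughly as below; a sketch, and the southbound
> > DB address is a placeholder. On node1:)
> > # ovs-vsctl set Open_vSwitch . \
> >     external_ids:ovn-remote="tcp:<sb-db-ip>:6642" \
> >     external_ids:ovn-encap-type=geneve \
> >     external_ids:ovn-encap-ip=10.18.124.1 \
> >     external_ids:ovn-encap-csum=true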
> >
> > On node1, in network namespace vm1, I start iperf3 as the server:
> > # ip netns exec vm1 iperf3 -s
> > On node2, in network namespace vm2, I start iperf3 as the client:
> > # ip netns exec vm2 iperf3 -c 192.168.100.10
> > Connecting to host 192.168.100.10, port 5201
> > [  4] local 192.168.100.20 port 40708 connected to 192.168.100.10 port
> 5201
> > [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> > [  4]   0.00-1.00   sec   431 MBytes  3.61 Gbits/sec   34    253 KBytes
> > [  4]   1.00-2.00   sec   426 MBytes  3.58 Gbits/sec    0    253 KBytes
> > [  4]   2.00-3.00   sec   426 MBytes  3.57 Gbits/sec    0    253 KBytes
> > [  4]   3.00-4.00   sec   401 MBytes  3.37 Gbits/sec    0    255 KBytes
> > [  4]   4.00-5.00   sec   429 MBytes  3.60 Gbits/sec    0    255 KBytes
> > [  4]   5.00-6.00   sec   413 MBytes  3.46 Gbits/sec    0    253 KBytes
> > [  4]   6.00-7.00   sec   409 MBytes  3.43 Gbits/sec    0    250 KBytes
> > [  4]   7.00-8.00   sec   427 MBytes  3.58 Gbits/sec    0    253 KBytes
> > [  4]   8.00-9.00   sec   417 MBytes  3.49 Gbits/sec    0    250 KBytes
> > [  4]   9.00-10.00  sec   385 MBytes  3.23 Gbits/sec    0   5.27 KBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval           Transfer     Bandwidth       Retr
> > [  4]   0.00-10.00  sec  4.07 GBytes  3.49 Gbits/sec   34             sender
> > [  4]   0.00-10.00  sec  4.07 GBytes  3.49 Gbits/sec                  receiver
> >
> > As you can see, the bandwidth is only about 3.x Gbits/sec, but my physical
> > eth1 is a 10000 Mb/s link:
> Hi, I ran OVS on node1 and node2 using a geneve tunnel, but I didn't
> reproduce your issue.
>
> create the geneve tunnel:
> # ovs-vsctl add-br br-int
> # ovs-vsctl add-port br-int gnv0 -- set Interface gnv0 type=geneve
> options:csum=true options:key=100 options:remote_ip=1.1.1.200
>
> # ovs-vsctl show
> 9393485c-c64c-490e-884e-418ff5d90251
>     Bridge br-int
>         Port gnv0
>             Interface gnv0
>                 type: geneve
>                 options: {csum="true", key="100", remote_ip="1.1.1.200"}
>         Port __tap01
>             Interface __tap01
>         Port br-int
>             Interface br-int
>                 type: internal
>
> # ip netns exec ns100 ifconfig
> __tap00: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
>         inet 2.2.2.100  netmask 255.255.255.0  broadcast 0.0.0.0
>         inet6 fe80::254:ff:fe00:1  prefixlen 64  scopeid 0x20<link>
>         ether 00:54:00:00:00:01  txqueuelen 1000  (Ethernet)
>         RX packets 605000  bytes 39951500 (38.1 MiB)
>         RX errors 0  dropped 0  overruns 0  frame 0
>         TX packets 819864  bytes 31247862764 (29.1 GiB)
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> # ip netns exec ns100 iperf -c 2.2.2.200 -i 2 -t 10
> ------------------------------------------------------------
> Client connecting to 2.2.2.200, TCP port 5001
> TCP window size:  482 KByte (default)
> ------------------------------------------------------------
> [  3] local 2.2.2.100 port 41428 connected with 2.2.2.200 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0- 2.0 sec  1.85 GBytes  7.93 Gbits/sec
> [  3]  2.0- 4.0 sec  1.94 GBytes  8.33 Gbits/sec
>
> # modinfo openvswitch
> filename:       /lib/modules/3.10.0-957.1.3.el7.x86_64/extra/openvswitch.ko
>
> So, can you use the commands shown above to try to reproduce your
> issue? (The kernel version is different.)
> > # ethtool eth1
> > Settings for eth1:
> >         Supported ports: [ FIBRE ]
> >         Supported link modes:   10000baseT/Full
> >         Supported pause frame use: Symmetric
> >         Supports auto-negotiation: No
> >         Supported FEC modes: Not reported
> >         Advertised link modes:  10000baseT/Full
> >         Advertised pause frame use: Symmetric
> >         Advertised auto-negotiation: No
> >         Advertised FEC modes: Not reported
> >         Speed: 10000Mb/s
> >         Duplex: Full
> >         Port: Other
> >         PHYAD: 0
> >         Transceiver: external
> >         Auto-negotiation: off
> >         Supports Wake-on: d
> >         Wake-on: d
> >         Current message level: 0x00000007 (7)
> >                                drv probe link
> >         Link detected: yes
> >
> > When I uninstall the openvswitch-kmod package and use the openvswitch.ko
> > from the upstream Linux kernel, like this:
> > # lsmod | grep openvswitch
> > openvswitch            95805  0
> > nf_defrag_ipv6         25957  2 nf_conntrack_ipv6,openvswitch
> > nf_nat_ipv6             6459  2 openvswitch,ip6table_nat
> > nf_nat_ipv4             6187  2 openvswitch,iptable_nat
> > nf_nat                 18080  5
> xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
> > nf_conntrack          102766  10
> ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
> > libcrc32c               1388  3 ip_vs,openvswitch,xfs
> > # modinfo openvswitch
> > filename:
>  /lib/modules/4.9.18-19080201/kernel/net/openvswitch/openvswitch.ko
> > alias:          net-pf-16-proto-16-family-ovs_packet
> > alias:          net-pf-16-proto-16-family-ovs_flow
> > alias:          net-pf-16-proto-16-family-ovs_vport
> > alias:          net-pf-16-proto-16-family-ovs_datapath
> > license:        GPL
> > description:    Open vSwitch switching datapath
> > srcversion:     915B872C96FB1D38D107742
> > depends:
> nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6
> > intree:         Y
> > vermagic:       4.9.18-19080201 SMP mod_unload modversions
> >
> > and run the same test as above, I get the following result:
> > # ip netns exec vm2 iperf3 -c 192.168.100.10
> > Connecting to host 192.168.100.10, port 5201
> > [  4] local 192.168.100.20 port 40652 connected to 192.168.100.10 port
> 5201
> > [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> > [  4]   0.00-1.00   sec  1000 MBytes  8.39 Gbits/sec    4    290 KBytes
> > [  4]   1.00-2.00   sec   994 MBytes  8.34 Gbits/sec    0    292 KBytes
> > [  4]   2.00-3.00   sec  1002 MBytes  8.41 Gbits/sec    0    287 KBytes
> > [  4]   3.00-4.00   sec   994 MBytes  8.34 Gbits/sec    0    292 KBytes
> > [  4]   4.00-5.00   sec   992 MBytes  8.32 Gbits/sec    0    298 KBytes
> > [  4]   5.00-6.00   sec   994 MBytes  8.34 Gbits/sec    0    305 KBytes
> > [  4]   6.00-7.00   sec   989 MBytes  8.29 Gbits/sec    0    313 KBytes
> > [  4]   7.00-8.00   sec   992 MBytes  8.32 Gbits/sec    0    290 KBytes
> > [  4]   8.00-9.00   sec   996 MBytes  8.36 Gbits/sec    0    303 KBytes
> > [  4]   9.00-10.00  sec   955 MBytes  8.01 Gbits/sec    0   5.27 KBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval           Transfer     Bandwidth       Retr
> > [  4]   0.00-10.00  sec  9.67 GBytes  8.31 Gbits/sec    4             sender
> > [  4]   0.00-10.00  sec  9.67 GBytes  8.31 Gbits/sec                  receiver
> >
> > So I can't understand why the performance is so poor when I use the
> > kernel module built from the OVS tree.
> >
> > Can anyone give me some advice on what is wrong?
> >
> > Thanks!
> >
>
