[ovs-discuss] ovs with docker

2020-01-02 Thread shuangyang qian
When I use OVS with Docker, I find some unused virtual NICs in the Docker
network namespace, like below:

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group
default qlen 1000
link/ether 2e:87:57:9d:4a:d7 brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default
qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default
qlen 1
link/tunnel6 :: brd ::
29: eth0@if30:  mtu 1430 qdisc noqueue
state UP group default
link/ether 0a:00:00:32:02:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.50.2.2/24 brd 172.50.2.255 scope global eth0
   valid_lft forever preferred_lft forever

Except for lo and eth0, the ovs-gretap*, erspan*, and ovs-ip6* virtual NICs
are not ones I need. Can someone tell me how to delete them?

BTW, these virtual NICs also appear on the host network stack, so I think I
should find a way to delete them on the host as well. The unused virtual NICs
are created when I insmod the openvswitch.ko that I built from the OVS tree;
the version is 2.11.2.
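For what it's worth, a hedged diagnostic sketch (device names taken from the
output above). These appear to be fallback tunnel devices registered by the
kernel module's tunnel code, so deleting them by hand typically fails or they
come back when the module is reloaded; this only shows how to inspect them:

```shell
# List just the tunnel fallback devices in the current namespace:
ip -o link show | grep -E 'ovs-gretap0|erspan0|ovs-ip6gre0|ovs-ip6tnl0'

# Attempting to delete one usually gets refused, because the device
# is owned by the kernel module rather than created by userspace:
ip link delete erspan0 || echo "delete refused; device is module-owned"
```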

Thanks.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] high cpu usage when use ovs

2019-11-06 Thread shuangyang qian
When I use OVS for an overlay network with the Geneve protocol, the topology
is like below:
two physical nodes, node1 and node2.
On node1 I have network namespace vm1; its IP is 192.168.100.10.
On node2 I have network namespace vm2; its IP is 192.168.100.20.
In vm1 I start an iperf3 server, and in vm2 I run iperf3 as a client.
The result is:
# iperf3 -c 192.168.100.10 -t 2
Connecting to host 172.20.0.4, port 5201
[  4] local 172.20.0.5 port 35272 connected to 172.20.0.4 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   322 MBytes  2.70 Gbits/sec    0    231 KBytes
[  4]   1.00-2.00   sec   320 MBytes  2.68 Gbits/sec    2    237 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-2.00   sec   642 MBytes  2.69 Gbits/sec    2            sender
[  4]   0.00-2.00   sec   640 MBytes  2.68 Gbits/sec                 receiver

iperf Done.

The physical network is 10G bandwidth.

When I look at the CPU usage on node1, I get this result:
%Cpu18 :  0.7 us,  0.0 sy,  0.0 ni, 99.0 id,  0.3 wa,  0.0 hi,  0.0 si,
 0.0 st
%Cpu19 :  0.3 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,
 0.0 st
%Cpu20 :  0.0 us,  0.3 sy,  0.0 ni,  0.7 id,  0.0 wa,  0.0 hi, 99.0 si,
 0.0 st
%Cpu21 :  0.3 us, 22.1 sy,  0.0 ni, 52.9 id,  0.0 wa,  0.0 hi, 24.7 si,
 0.0 st
%Cpu22 :  0.3 us,  1.7 sy,  0.0 ni, 95.7 id,  0.0 wa,  0.0 hi,  2.3 si,
 0.0 st

As we can see, CPU 20 is fully loaded (99.0 si).
The top threads are:
  132 root  20   0   0  0  0 R  90.0  0.0   0:52.96
ksoftirqd/20

 9588 root  20   09756   2456   2212 S  51.8  0.0   7:59.99 iperf3

As we can see, ksoftirqd/20 uses almost all of one CPU.
I cannot understand why ksoftirqd/20 saturates the CPU.
BTW, I use the openvswitch.ko built from the OVS tree, version 2.11.2.
In the home directory of the OVS tree:
# ll datapath/linux/openvswitch.ko
-rw-r--r-- 1 root root 14056752 Nov  7 11:26 datapath/linux/openvswitch.ko
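A single saturated ksoftirqd often means all receive softirq work for the
tunnel traffic lands on one CPU (a single RX queue or a single flow hash). A
hedged diagnostic sketch; the interface name eth0 and the CPU mask are
assumptions, not taken from the posts:

```shell
# Sample twice and see which CPU column the NET_RX counters grow in:
grep -E 'CPU|NET_RX' /proc/softirqs

# Optionally spread receive processing over more CPUs with RPS;
# the mask below (CPUs 16-23) is only an example value:
echo ff0000 > /sys/class/net/eth0/queues/rx-0/rps_cpus
```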

Can anyone give me some suggestions?


[ovs-discuss] Fwd: the network performance is not normal when using openvswitch.ko built from the ovs tree

2019-11-05 Thread shuangyang qian
cc to ovs-discuss
-- Forwarded message -
From: shuangyang qian 
Date: Tue, Nov 5, 2019, 6:12 PM
Subject: Re: [ovs-discuss] the network performance is not normal when using
openvswitch.ko built from the ovs tree
To: Tonghao Zhang 


Thank you for your reply. I just changed my kernel version to the same as
yours and did the steps you provided, and got the same result that I
mentioned at first. The process is like below.
on node1:
# ovs-vsctl show
4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671
Bridge br-int
Port br-int
Interface br-int
type: internal
Port "gnv0"
Interface "gnv0"
type: geneve
options: {csum="true", key="100", remote_ip="10.18.124.2"}
Port "veth-vm1"
Interface "veth-vm1"
ovs_version: "2.12.0"
# ip netns exec vm1 ip a
1: lo:  mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group
default qlen 1000
link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default
qlen 1000
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default
qlen 1000
link/tunnel6 :: brd ::
19: vm1-eth0@if18:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.10/24 scope global vm1-eth0
   valid_lft forever preferred_lft forever
inet6 fe80::304b:51ff:fee2:2bf4/64 scope link
   valid_lft forever preferred_lft forever

on node2:
# ovs-vsctl show
53df6c21-c210-4c2c-a7ab-b1edb0df4a31
Bridge br-int
Port "veth-vm2"
Interface "veth-vm2"
Port "gnv0"
Interface "gnv0"
type: geneve
options: {csum="true", key="100", remote_ip="10.18.124.1"}
Port br-int
Interface br-int
type: internal
ovs_version: "2.12.0"
# ip netns exec vm2 ip a
1: lo:  mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group
default qlen 1000
link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default
qlen 1000
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default
qlen 1000
link/tunnel6 :: brd ::
11: vm2-eth0@if10:  mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.20/24 scope global vm2-eth0
   valid_lft forever preferred_lft forever
inet6 fe80::ece4:3eff:fe16:6f66/64 scope link
   valid_lft forever preferred_lft forever

In network namespace vm1 on node1 I start iperf3 as a server:
# ip netns exec vm1 iperf3 -s

In network namespace vm2 on node2 I start iperf3 as a client:
# ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
Connecting to host 192.168.100.10, port 5201
[  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-2.00   sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes
[  4]   2.00-4.00   sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes
[  4]   4.00-6.00   sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes
[  4]   6.00-8.00   sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes
[  4]   8.00-10.00  sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec  154            sender
[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec                 receiver

iperf Done.
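As an aside: ~2.5 Gbit/s on a 10G link with an out-of-tree datapath module is
often a sign that tunnel segmentation/checksum offload is not being used, so
Geneve encapsulation is done per-packet in software. A hedged way to check;
the underlay NIC name eth0 is an assumption:

```shell
# Look at tunnel-related offloads on the underlay NIC:
ethtool -k eth0 | grep -E 'tnl|gso|gro|tso|checksum'

# Note: csum="true" on the geneve port (as configured above) enables
# outer UDP checksums, which costs extra CPU if the NIC cannot
# offload them.
```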

The openvswitch.ko on both nodes is:
# modinfo openvswitch
filename:
/lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko
alias:  net-pf-16-proto-16-family-ovs_ct_limit
alias:  net-pf-16-proto-16-family-ovs_meter
alias:  net-pf-16-proto-16-family-ovs_packet
alias:  net-pf-16-proto-16-family-ovs_flow
alias:  net-pf-16-proto-16-family-ovs_vport
alias:  net-pf-16-proto-16-family-ovs_datapath
version:2.12.0
license:GPL
description:Open vSwitch switching datapath

[ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree

2019-11-04 Thread shuangyang qian
Hi:
I made RPM packages for OVS and OVN following this document:
http://docs.openvswitch.org/en/latest/intro/install/fedora/ . To use the
kernel module from the OVS tree, I configured with the command: ./configure
--with-linux=/lib/modules/$(uname -r)/build .
Then I installed the RPM packages.
When finished, I checked the openvswitch.ko:
# lsmod |  grep openvswitch
openvswitch   291276  0
tunnel6 3115  1 openvswitch
nf_defrag_ipv6 25957  2 nf_conntrack_ipv6,openvswitch
nf_nat_ipv6 6459  2 openvswitch,ip6table_nat
nf_nat_ipv4 6187  2 openvswitch,iptable_nat
nf_nat 18080  5
xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack  102766  10
ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
libcrc32c   1388  3 ip_vs,openvswitch,xfs
ipv6  400397  92
ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge
# modinfo openvswitch
filename:
/lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
alias:  net-pf-16-proto-16-family-ovs_ct_limit
alias:  net-pf-16-proto-16-family-ovs_meter
alias:  net-pf-16-proto-16-family-ovs_packet
alias:  net-pf-16-proto-16-family-ovs_flow
alias:  net-pf-16-proto-16-family-ovs_vport
alias:  net-pf-16-proto-16-family-ovs_datapath
version:2.11.2
license:GPL
description:Open vSwitch switching datapath
srcversion: 9DDA327F9DD46B9813628A4
depends:
 
nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
vermagic:   4.9.18-19080201 SMP mod_unload modversions
parm:   udp_port:Destination UDP port (ushort)
# rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
openvswitch-kmod-2.11.2-1.el7.x86_64

Then I start to build my network structure. I have two nodes, with network
namespace vm1 on node1 and network namespace vm2 on node2. vm1's veth pair
veth-vm1 is on node1's br-int; vm2's veth pair veth-vm2 is on node2's
br-int. In the logical layer, there is one logical switch test-subnet with
two logical switch ports, node1 and node2, on it. Like this:
# ovn-nbctl show
switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)
port node2
addresses: ["00:00:00:00:00:02 192.168.100.20"]
port node1
addresses: ["00:00:00:00:00:01 192.168.100.10"]
on node1:
# ovs-vsctl show
5180f74a-1379-49af-b265-4403bd0d82d8
Bridge br-int
fail_mode: secure
Port "ovn-431b9e-0"
Interface "ovn-431b9e-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.18.124.2"}
Port br-int
Interface br-int
type: internal
Port "veth-vm1"
Interface "veth-vm1"
ovs_version: "2.11.2"
# ip netns exec vm1 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
14: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
15: erspan0@NONE:  mtu 1450 qdisc noop state DOWN
group default qlen 1000
link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff
16: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default
qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
17: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default
qlen 1
link/tunnel6 :: brd ::
18: vm1-eth0@if17:  mtu 1400 qdisc noqueue
state UP group default qlen 1000
link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.10/24 scope global vm1-eth0
   valid_lft forever preferred_lft forever
inet6 fe80::200:ff:fe00:1/64 scope link
   valid_lft forever preferred_lft forever
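As a side note, the 1400-byte MTU on vm1-eth0 is consistent with leaving room
for Geneve encapsulation on a 1500-byte underlay. A quick sketch of the
overhead arithmetic, assuming an IPv4 outer header and a base Geneve header
with no options:

```shell
# Geneve-over-IPv4 overhead added to each inner frame:
# outer IPv4 (20) + outer UDP (8) + Geneve base header (8) + inner Ethernet (14)
outer_ip=20; udp=8; geneve=8; inner_eth=14
overhead=$((outer_ip + udp + geneve + inner_eth))
echo "$overhead"              # 50 bytes of encapsulation overhead
echo "$((1500 - overhead))"   # 1450: max inner MTU; 1400 leaves extra headroom
```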


on node2:
# ovs-vsctl show
011332d0-78bc-47f7-be3c-fab0beb08e28
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "ovn-c655f8-0"
Interface "ovn-c655f8-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.18.124.1"}
Port "veth-vm2"
Interface "veth-vm2"
ovs_version: "2.11.2"
# ip netns exec vm2 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
10: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: erspan0@NONE:  mtu 14

[ovs-discuss] The kernel module does not support meters.

2019-10-17 Thread shuangyang qian
Hi,
I installed OVS and OVN version 2.11.4, using the Geneve protocol for
overlay communication, and found some log info in ovs-vswitchd.log, like
below:

2019-10-18T02:39:39.162Z|1|vlog|INFO|opened log file
/var/log/openvswitch/ovs-vswitchd.log
2019-10-18T02:39:39.166Z|2|ovs_numa|INFO|Discovered 16 CPU cores on
NUMA node 0
2019-10-18T02:39:39.166Z|3|ovs_numa|INFO|Discovered 16 CPU cores on
NUMA node 1
2019-10-18T02:39:39.166Z|4|ovs_numa|INFO|Discovered 2 NUMA nodes and 32
CPU cores
2019-10-18T02:39:39.166Z|5|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
connecting...
2019-10-18T02:39:39.166Z|6|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
connected
2019-10-18T02:39:39.254Z|7|dpif_netlink|INFO|The kernel module does not
support meters.
2019-10-18T02:39:39.282Z|8|ofproto_dpif|INFO|system@ovs-system:
Datapath supports recirculation
2019-10-18T02:39:39.282Z|9|ofproto_dpif|INFO|system@ovs-system: VLAN
header stack length probed as 2
2019-10-18T02:39:39.282Z|00010|ofproto_dpif|INFO|system@ovs-system: MPLS
label stack length probed as 1
2019-10-18T02:39:39.282Z|00011|ofproto_dpif|INFO|system@ovs-system:
Datapath supports truncate action
2019-10-18T02:39:39.282Z|00012|ofproto_dpif|INFO|system@ovs-system:
Datapath supports unique flow ids
2019-10-18T02:39:39.282Z|00013|ofproto_dpif|INFO|system@ovs-system:
Datapath does not support clone action
2019-10-18T02:39:39.283Z|00014|ofproto_dpif|INFO|system@ovs-system: Max
sample nesting level probed as 3
2019-10-18T02:39:39.283Z|00015|ofproto_dpif|INFO|system@ovs-system:
Datapath does not support eventmask in conntrack action
2019-10-18T02:39:39.283Z|00016|ofproto_dpif|INFO|system@ovs-system:
Datapath does not support ct_clear action
2019-10-18T02:39:39.283Z|00017|ofproto_dpif|INFO|system@ovs-system: Max
dp_hash algorithm probed to be 0
2019-10-18T02:39:39.283Z|00018|ofproto_dpif|INFO|system@ovs-system:
Datapath supports ct_state
2019-10-18T02:39:39.283Z|00019|ofproto_dpif|INFO|system@ovs-system:
Datapath supports ct_zone
2019-10-18T02:39:39.283Z|00020|ofproto_dpif|INFO|system@ovs-system:
Datapath supports ct_mark
2019-10-18T02:39:39.283Z|00021|ofproto_dpif|INFO|system@ovs-system:
Datapath supports ct_label
2019-10-18T02:39:39.283Z|00022|ofproto_dpif|INFO|system@ovs-system:
Datapath supports ct_state_nat
2019-10-18T02:39:39.283Z|00023|ofproto_dpif|INFO|system@ovs-system:
Datapath does not support ct_orig_tuple
2019-10-18T02:39:39.283Z|00024|ofproto_dpif|INFO|system@ovs-system:
Datapath does not support ct_orig_tuple6
2019-10-18T02:39:39.283Z|00025|dpif_netlink|INFO|dpif_netlink_meter_transact
OVS_METER_CMD_SET failed
2019-10-18T02:39:39.283Z|00026|dpif_netlink|INFO|dpif_netlink_meter_transact
OVS_METER_CMD_SET failed
2019-10-18T02:39:39.283Z|00027|dpif_netlink|INFO|dpif_netlink_meter_transact
get failed
2019-10-18T02:39:39.283Z|00028|dpif_netlink|INFO|The kernel module has a
broken meter implementation.

The log info tells me my kernel module does not support several features.
What can I do to make the kernel support them? I also found some warning
messages like below:

2019-10-18T02:39:52.861Z|2|dpif(handler5)|WARN|Dropped 4 log messages
in last 8 seconds (most recently, 8 seconds ago) due to excessive rate
2019-10-18T02:39:52.861Z|3|dpif(handler2)|WARN|system@ovs-system:
execute
ct(commit,zone=8,label=0/0x1),set(eth(src=00:00:00:2f:11:2c,dst=0a:00:00:14:00:0a)),
set(ipv4(src=172.20.0.6,dst=172.20.0.9,ttl=63)),ct(zone=10),recirc(0x1b)
failed (Invalid argument) on packet
udp,vlan_tci=0x,dl_src=0a:00:00:14:00:07,dl_dst=00:00:00:2f:11:2c,nw_src=172.20.0.6,nw_dst=172.20.0.9,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=53929,tp_dst=53
udp_csum:ebcf
 with metadata
skb_priority(0),skb_mark(0),ct_state(0x21),ct_zone(0x8),in_port(4) mtu 0
2019-10-18T02:39:52.861Z|3|dpif(handler5)|WARN|system@ovs-system:
execute
ct(commit,zone=8,label=0/0x1),set(eth(src=00:00:00:2f:11:2c,dst=0a:00:00:14:00:0a)),
set(ipv4(src=172.20.0.6,dst=172.20.0.9,ttl=63)),ct(zone=10),recirc(0x1a)
failed (Invalid argument) on packet
udp,vlan_tci=0x,dl_src=0a:00:00:14:00:07,dl_dst=00:00:00:2f:11:2c,nw_src=172.20.0.6,nw_dst=172.20.0.9,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=57585,tp_dst=53
udp_csum:9f77
 with metadata
skb_priority(0),skb_mark(0),ct_state(0x21),ct_zone(0x8),in_port(4) mtu 0


Can anyone give me some suggestions on how to resolve these abnormal log
messages? Thank you very much.
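For diagnosing the meter messages, one hedged check is to ask the bridge what
meter features the datapath actually advertises (br-int is the bridge name
used in the posts above; the command requires OpenFlow 1.3 or later):

```shell
# Query the datapath's advertised meter capabilities via the bridge:
ovs-ofctl -O OpenFlow13 meter-features br-int
```

If this reports zero max_meter, the loaded kernel module genuinely lacks
meter support, which would match the probe messages in the log.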