Hi

  I am having some problems setting up an internal network for our oVirt VMs 
using the OVN external provider.

  The OVN provider was set up during the oVirt engine installation (version 
4.3.7.2-1.el7). I then created a new network on the OVN external provider and 
tested the connectivity using the corresponding button in the oVirt UI. Two of 
my VMs are Ubuntu 18.04 servers with IPs 10.0.0.101 and 10.0.0.102, and 
gateways 10.0.0.102 and 10.0.0.101 respectively, so I guess that under normal 
circumstances they should be able to ping each other.
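
  In case it is useful, I can also dump the logical network from the engine 
side with the OVN northbound CLI (just the commands, I have not pasted the 
output here); I would expect both VM ports to show up under the logical switch 
that the external provider created:

[root@ovirtengine ~]# ovn-nbctl show
[root@ovirtengine ~]# ovn-nbctl list Logical_Switch_Port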

 I tried disabling the firewalld service on both the host and the engine 
(ovirtengine), but nothing changed: the VMs still cannot ping each other. 
However, I noticed something odd in the firewalld status on the host. With 
firewalld enabled, I get:

[root@ovirt7 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor 
preset: enabled)
   Active: active (running) since Πεμ 2020-02-06 06:10:38 UTC; 14s ago
     Docs: man:firewalld(1)
 Main PID: 4436 (firewalld)
    Tasks: 2
   CGroup: /system.slice/firewalld.service
           └─4436 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet1 -j 
libvirt-O-vnet1' failed: Illegal target name 'libvirt-O-vnet1'.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet1' failed: Chain 
'libvirt-I-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet1' failed: Chain 
'libvirt-I-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet1' failed: Chain 
'libvirt-O-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet1' failed: Chain 
'libvirt-O-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet1 libvirt-O-vnet1' 
failed: Chain 'libvirt-P-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -F I-vnet1-mac' failed: Chain 
'I-vnet1-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -X I-vnet1-mac' failed: Chain 
'I-vnet1-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -F I-vnet1-arp-mac' failed: Chain 
'I-vnet1-arp-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: 
'/usr/sbin/ebtables --concurrent -t nat -X I-vnet1-arp-mac' failed: Chain 
'I-vnet1-arp-mac' doesn't exist.
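
To me those warnings look like firewalld trying to tear down libvirt nwfilter 
chains for vnet1 that are no longer there, rather than anything related to the 
OVN network, but I am not sure. If it is useful I can dump the remaining 
ebtables chains on the host, e.g.:

[root@ovirt7 ~]# ebtables --concurrent -t nat -L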

On the oVirt engine I have:

[root@ovirtengine ~]# ovn-sbctl show
Chassis "0e225912-6318-4bc3-9c94-8f2ab937876d"
    hostname: "ovirt3.hua.gr"
    Encap geneve
        ip: "10.100.59.53"
        options: {csum="true"}
Chassis "47ff3058-effb-43c2-b9e0-9eaf6c72b1c2"
    hostname: "ovirt5.hua.gr"
    Encap geneve
        ip: "10.100.59.51"
        options: {csum="true"}
Chassis "1fcca34f-6017-4882-8c13-4835dad03387"
    hostname: localhost
    Encap geneve
        ip: "10.100.59.55"
        options: {csum="true"}
Chassis "25d35968-6c3c-4040-ab85-d881d3d524e4"
    hostname: "ovirt4.hua.gr"
    Encap geneve
        ip: "10.100.59.52"
        options: {csum="true"}
Chassis "32697bcc-cc6a-4a59-8424-887d20df2d10"
    hostname: "ovirt7.hua.gr"
    Encap geneve
        ip: "10.100.59.49"
        options: {csum="true"}
    Port_Binding "ff31a88f-23f0-48fe-a657-3cd24d51f69e"
    Port_Binding "01da6ee3-abff-4423-954a-c4abf350e390"
    Port_Binding "caa870e1-8b6c-48bd-ac61-bfeda8befd10"
Chassis "f60100e6-0ee5-4472-8095-cc48b5160f50"
    hostname: "ovirt6.hua.gr"
    Encap geneve
        ip: "10.100.59.50"
        options: {csum="true"}
Chassis "fa7e6cbe-fa7f-46bc-9760-f581725f60a8"
    hostname: "ovirt2.hua.gr"
    Encap geneve
        ip: "10.100.59.54"
        options: {csum="true"}
Chassis "58a9412f-bfad-4c98-9882-d7d006588e0b"
    hostname: "ovirt9.hua.gr"
    Encap geneve
        ip: "10.100.59.47"
        options: {csum="true"}
Chassis "09dc1148-8ce5-425d-8f93-8dbf43fd7828"
    hostname: "ovirt8.hua.gr"
    Encap geneve
        ip: "10.100.59.48"
        options: {csum="true"}
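
The three Port_Binding entries under ovirt7.hua.gr should correspond to the 
vNICs plugged into br-int on that host (vnet0, vnet2 and vnet3 below). I 
believe their details can also be listed from the southbound DB, e.g. (output 
not pasted here):

[root@ovirtengine ~]# ovn-sbctl list Port_Binding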

So I guess the tunneling itself is set up correctly. On the host I see:

[root@ovirt7 ~]# ovs-vsctl show
886fe6e5-13ea-4889-ba35-1ac0a422ca23
    Bridge br-int
        fail_mode: secure
        Port "ovn-1fcca3-0"
            Interface "ovn-1fcca3-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.55"}
        Port "ovn-0e2259-0"
            Interface "ovn-0e2259-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.53"}
        Port "ovn-fa7e6c-0"
            Interface "ovn-fa7e6c-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.54"}
        Port "vnet3"
            Interface "vnet3"
        Port "vnet0"
            Interface "vnet0"
        Port "ovn-25d359-0"
            Interface "ovn-25d359-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.52"}
        Port br-int
            Interface br-int
                type: internal
        Port "ovn-f60100-0"
            Interface "ovn-f60100-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.50"}
        Port "ovn-47ff30-0"
            Interface "ovn-47ff30-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.51"}
        Port "ovn-58a941-0"
            Interface "ovn-58a941-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.47"}
        Port "vnet2"
            Interface "vnet2"
        Port "ovn-09dc11-0"
            Interface "ovn-09dc11-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="10.100.59.48"}
    ovs_version: "2.11.0"
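
One thing I still want to double-check is that the vnet ports on br-int carry 
the right OVN port IDs; as far as I understand these live in 
external_ids:iface-id on each interface and should match the Port_Binding 
names shown by ovn-sbctl above, e.g.:

[root@ovirt7 ~]# ovs-vsctl list Interface vnet0
[root@ovirt7 ~]# ovs-vsctl get Interface vnet0 external_ids:iface-id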

and:

[root@ovirt7 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master trunk state 
UP mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:32 brd ff:ff:ff:ff:ff:ff
19: eno1.59@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 
qdisc noqueue master ovirtmgmt state UP mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
20: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode 
DEFAULT group default qlen 1000
    link/ether 2a:92:56:8b:36:b8 brd ff:ff:ff:ff:ff:ff
21: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode 
DEFAULT group default qlen 1000
    link/ether c6:70:c8:34:86:85 brd ff:ff:ff:ff:ff:ff
23: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN mode DEFAULT 
group default qlen 1000
    link/ether da:49:e1:b0:b2:4e brd ff:ff:ff:ff:ff:ff
25: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
UP mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
26: eno1.60@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 
qdisc noqueue master vlan60 state UP mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
27: vlan60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
28: eno1.61@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 
qdisc noqueue master vlan61 state UP mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
29: vlan61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
30: trunk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000
    link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
39: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master 
ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:6f:2b:f7:00:17 brd ff:ff:ff:ff:ff:ff
40: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master 
ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:6f:2b:f7:00:18 brd ff:ff:ff:ff:ff:ff
41: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt 
state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:6f:2b:f7:00:0f brd ff:ff:ff:ff:ff:ff
42: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master 
ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:6f:2b:f7:00:16 brd ff:ff:ff:ff:ff:ff
44: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue 
master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 06:6c:84:05:bf:6b brd ff:ff:ff:ff:ff:ff

I have not found anything useful in the log files.
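
If it helps, the next things I was planning to try are capturing traffic on 
the Geneve interface while pinging between the two VMs, and dumping the 
OpenFlow rules on the integration bridge, roughly:

[root@ovirt7 ~]# tcpdump -nn -i genev_sys_6081
[root@ovirt7 ~]# ovs-ofctl -O OpenFlow13 dump-flows br-int

but I am not really sure what to look for in that output.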

Does anyone have any thoughts?

Thanks!

Thomas

