Hi Viktor,
For VXLAN I'm not sure it matters.  I was actually going to test the same thing 
out.  In the non-DPDK scenario, OVS just uses the local_ip config to determine 
which interface to send the VXLAN packet out of.  That interface doesn't need 
any connection to a port on the bridge at all; it just has to be an IP 
interface on the Linux host.  For the DPDK scenario, I'm not sure whether that 
is any different, since things are done in userspace, but the IP is set on 
br-phy and local_ip is set to that address.  It may well be the case that it 
is not needed.  I'll try to test as well and let you know what I find.
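For the kernel-datapath case, one quick way to confirm which interface will carry the encapsulated packets is to ask the host routing table directly (a sketch; the 11.0.0.x addresses are taken from the outputs further down in this thread):

```shell
# On the kernel datapath, the egress interface for VXLAN encap traffic is
# chosen by the host routing table based on local_ip, not by any bridge port.
# With local_ip=11.0.0.22 and a remote VTEP at 11.0.0.25:
ip route get 11.0.0.25 from 11.0.0.22
# The "dev" field in the output is the interface the tunnel traffic will use.
```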

Thanks,

Tim Rozet
Red Hat SDN Team

----- Original Message -----
From: "Viktor Tikkanen (Nokia - FI/Espoo)" <[email protected]>
To: "Thomas F Herbert" <[email protected]>, [email protected]
Cc: "Dan Radez" <[email protected]>, "Feng Pan" <[email protected]>, "Tapio 
Tallgren (Nokia - FI/Espoo)" <[email protected]>, "Tim Rozet" 
<[email protected]>
Sent: Wednesday, March 8, 2017 6:41:05 AM
Subject: RE: OVS DPDK problems (question from #opnfv-apex)

Hi!

What is the purpose of connecting br-phy to any other bridge (e.g. to 
br-tun)? It seems that disconnecting br-phy from br-tun doesn’t affect traffic 
between VMs in different compute nodes in any way.

For example, I have two VMs in different compute nodes and iperf shows the 
following numbers:

[root@ofp-dpdk-3 tcpperf]# iperf -c 30.0.0.59 -l 1024
------------------------------------------------------------
Client connecting to 30.0.0.59, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 30.0.0.64 port 38444 connected with 30.0.0.59 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.92 GBytes  4.23 Gbits/sec


Then I disconnect br-phy from br-tun:

[root@overcloud-novacompute-0 heat-admin]# ovs-vsctl del-port br-tun 
patch-br-tun
[root@overcloud-novacompute-0 heat-admin]# ovs-vsctl del-port br-phy 
patch-br-phy

[root@overcloud-novacompute-1 heat-admin]# ovs-vsctl del-port br-tun 
patch-br-tun
[root@overcloud-novacompute-1 heat-admin]# ovs-vsctl del-port br-phy 
patch-br-phy
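(If the patch pair needs to be restored afterwards, something like the following should recreate it; the port and peer names match the ovs-vsctl show output below:)

```shell
# Recreate the patch pair between br-tun and br-phy (sketch, using the
# port names from the existing configuration):
ovs-vsctl add-port br-tun patch-br-tun -- \
    set interface patch-br-tun type=patch options:peer=patch-br-phy
ovs-vsctl add-port br-phy patch-br-phy -- \
    set interface patch-br-phy type=patch options:peer=patch-br-tun
```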


And after that:

[root@ofp-dpdk-3 tcpperf]# iperf -c 30.0.0.59 -l 1024
------------------------------------------------------------
Client connecting to 30.0.0.59, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[  3] local 30.0.0.64 port 38440 connected with 30.0.0.59 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.96 GBytes  4.26 Gbits/sec


and counters are still growing:

[root@overcloud-novacompute-0 heat-admin]# ovs-vsctl list interface dpdk0
_uuid               : 1c331ecf-2798-46bf-bd84-3ce4048208c9
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : full
error               : []
external_ids        : {}
ifindex             : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : 10000000000
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "54:ab:3a:6f:8e:d1"
mtu                 : 1500
name                : "dpdk0"
ofport              : 1
ofport_request      : []
options             : {}
other_config        : {}
statistics          : {rx_bytes=952040349911, rx_dropped=0, rx_errors=0, 
rx_packets=865588987, tx_bytes=477282881764, tx_dropped=21407, tx_errors=0, 
tx_packets=708303046}
status              : {driver_name=rte_ixgbe_pmd, max_hash_mac_addrs="4096", 
max_mac_addrs="128", max_rx_pktlen="1518", max_rx_queues="128", 
max_tx_queues="64", max_vfs="0", max_vmdq_pools="64", min_rx_bufsize="1024", 
numa_id="0", pci-device_id="0x10fb", pci-vendor_id="0x32902", port_no="0"}
type                : dpdk
[root@overcloud-novacompute-0 heat-admin]#


Also, when using an iperf-like application built with ODP-DPDK/OFP in the VMs, 
the numbers don’t change significantly.


The original configuration used in both compute nodes is:

[root@overcloud-novacompute-0 heat-admin]# sudo 
/usr/share/dpdk/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:05:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=uio_pci_generic unused=vfio-pci

Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f0 drv=ixgbe 
unused=vfio-pci,uio_pci_generic *Active*
0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=ens255f1 drv=ixgbe 
unused=vfio-pci,uio_pci_generic *Active*
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens1f0 
drv=ixgbe unused=vfio-pci,uio_pci_generic
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens1f1 
drv=ixgbe unused=vfio-pci,uio_pci_generic
0000:05:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens4f0 
drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*

Other network devices
=====================
<none>
[root@overcloud-novacompute-0 heat-admin]# ovs-vsctl show
c16bdd05-5ae5-4421-9957-cbad7371f9f8
    Bridge br-int
        fail_mode: secure
        Port "vhu5f197e14-78"
            tag: 4095
            Interface "vhu5f197e14-78"
                type: dpdkvhostuser
        Port "vhu8ffa212c-2a"
            tag: 4095
            Interface "vhu8ffa212c-2a"
                type: dpdkvhostuser
        Port "vhua5770337-9b"
            tag: 4095
            Interface "vhua5770337-9b"
                type: dpdkvhostuser
        Port "vhu4d12e31f-8d"
            tag: 4095
            Interface "vhu4d12e31f-8d"
                type: dpdkvhostuser
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhu3dd361e9-18"
            tag: 16
            Interface "vhu3dd361e9-18"
                type: dpdkvhostuser
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "vhuf5ee62b3-29"
            tag: 4095
            Interface "vhuf5ee62b3-29"
                type: dpdkvhostuser
        Port "vhu892b022e-84"
            tag: 4095
            Interface "vhu892b022e-84"
                type: dpdkvhostuser
        Port "vhu2c68d191-ce"
            tag: 4095
            Interface "vhu2c68d191-ce"
                type: dpdkvhostuser
        Port "vhua4b2ecc1-2f"
            tag: 15
            Interface "vhua4b2ecc1-2f"
                type: dpdkvhostuser
    Bridge br-phy
        Port patch-br-phy
            Interface patch-br-phy
                type: patch
                options: {peer=patch-br-tun}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port br-phy
            Interface br-phy
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0b000019"
            Interface "vxlan-0b000019"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.22", 
out_key=flow, remote_ip="11.0.0.25"}
        Port patch-br-tun
            Interface patch-br-tun
                type: patch
                options: {peer=patch-br-phy}
        Port "vxlan-0b000018"
            Interface "vxlan-0b000018"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.22", 
out_key=flow, remote_ip="11.0.0.24"}
        Port "vxlan-0b000017"
            Interface "vxlan-0b000017"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.22", 
out_key=flow, remote_ip="11.0.0.23"}
        Port "vxlan-0b000015"
            Interface "vxlan-0b000015"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.22", 
out_key=flow, remote_ip="11.0.0.21"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.5.90"
[root@overcloud-novacompute-0 heat-admin]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens255f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 
1000
    link/ether 54:ab:3a:61:ed:08 brd ff:ff:ff:ff:ff:ff
    inet 192.168.37.12/24 brd 192.168.37.255 scope global ens255f0
       valid_lft forever preferred_lft forever
    inet6 fe80::56ab:3aff:fe61:ed08/64 scope link
       valid_lft forever preferred_lft forever
3: ens255f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 
1000
    link/ether 54:ab:3a:61:ed:09 brd ff:ff:ff:ff:ff:ff
    inet 12.0.0.22/24 brd 12.0.0.255 scope global ens255f1
       valid_lft forever preferred_lft forever
    inet6 fe80::56ab:3aff:fe61:ed09/64 scope link
       valid_lft forever preferred_lft forever
4: ens1f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN 
qlen 1000
    link/ether 90:e2:ba:b3:71:e8 brd ff:ff:ff:ff:ff:ff
5: ens1f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN 
qlen 1000
    link/ether 90:e2:ba:b3:71:e9 brd ff:ff:ff:ff:ff:ff
6: ens4f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 
1000
    link/ether 54:ab:3a:6f:8e:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.8/24 brd 192.0.2.255 scope global ens4f0
       valid_lft forever preferred_lft forever
    inet6 fe80::56ab:3aff:fe6f:8ed0/64 scope link
       valid_lft forever preferred_lft forever
13: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 12:37:94:27:cc:e1 brd ff:ff:ff:ff:ff:ff
14: br-int: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 92:22:05:51:3e:4c brd ff:ff:ff:ff:ff:ff
15: br-tun: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state 
UNKNOWN qlen 500
    link/ether 06:cd:77:cd:8a:49 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4cd:77ff:fecd:8a49/64 scope link
       valid_lft forever preferred_lft forever
16: br-ex: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 5a:a7:28:b1:18:40 brd ff:ff:ff:ff:ff:ff
17: br-phy: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state 
UNKNOWN qlen 500
    link/ether 54:ab:3a:6f:8e:d1 brd ff:ff:ff:ff:ff:ff
    inet 11.0.0.22/24 brd 11.0.0.255 scope global br-phy
       valid_lft forever preferred_lft forever
    inet6 fe80::56ab:3aff:fe6f:8ed1/64 scope link
       valid_lft forever preferred_lft forever
[root@overcloud-novacompute-0 heat-admin]#

-Viktor

From: Thomas F Herbert [mailto:[email protected]]
Sent: Friday, March 03, 2017 4:05 PM
To: Tikkanen, Viktor (Nokia - FI/Espoo) <[email protected]>; Tim Rozet 
<[email protected]>
Cc: Dan Radez <[email protected]>; Feng Pan <[email protected]>; Tallgren, Tapio 
(Nokia - FI/Espoo) <[email protected]>
Subject: Re: OVS DPDK problems (question from #opnfv-apex)




On 03/03/2017 03:59 AM, Tikkanen, Viktor (Nokia - FI/Espoo) wrote:
Hi!

I changed the (default) bridging so that br-phy is connected (with 
patch-br-phy/patch-br-int) to br-int instead of br-tun and after that instances 
were able to start successfully and communicate with each other.
Connecting a dpdk port to a non-dpdk bridge won't work very well, although I 
would be surprised if it caused ovs-vsctl to seg-fault.


[root@overcloud-novacompute-1 heat-admin]# ovs-vsctl show
884c3af8-7107-494c-8bbf-3472074dfe5b
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0b000019"
            Interface "vxlan-0b000019"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.21", 
out_key=flow, remote_ip="11.0.0.25"}
        Port "vxlan-0b000018"
            Interface "vxlan-0b000018"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.21", 
out_key=flow, remote_ip="11.0.0.24"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0b000016"
            Interface "vxlan-0b000016"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.21", 
out_key=flow, remote_ip="11.0.0.22"}
        Port "vxlan-0b000017"
            Interface "vxlan-0b000017"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="11.0.0.21", 
out_key=flow, remote_ip="11.0.0.23"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port patch-br-int
            Interface patch-br-int
                type: patch
                options: {peer=patch-br-phy}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhudee22c4c-e4"
            tag: 1
            Interface "vhudee22c4c-e4"
                type: dpdkvhostuser
    Bridge br-phy
        Port br-phy
            Interface br-phy
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port patch-br-phy
            Interface patch-br-phy
                type: patch
                options: {peer=patch-br-int}
    ovs_version: "2.5.90"

The open question is: how is the bridging supposed to be done in the Apex 
setup with DPDK (i.e., where is br-phy supposed to be connected)?

I refer also to the following comment from Peng Liu (sent 21/Sep/16 9:42 AM) in 
https://jira.opnfv.org/browse/APEX-274 :

“3. The br-phy should be connected to br-int instead of br-tun.”
The rule of thumb is to not mix kernel datapath ports with dpdk ports on the 
same bridge. That has a very bad effect on performance.
br-tun is not a DPDK bridge, so you don't want vhost-user ports or your dpdk 
external IF connected there.
You don't want to mix dpdk ports with non-dpdk ports on the same bridge.
You should have the vhost-user ports to instances on the same bridge or on 
another dpdk bridge.
If there are multiple dpdk bridges, they should be connected to each other 
with a patch port.
I think br-tun should aggregate tunnel traffic, and the tunnels should be 
routed through the patch port to a dpdk bridge.
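A minimal sketch of that layout (assuming OVS 2.5 with DPDK; the patch port names here are illustrative, not the ones Apex actually generates):

```shell
# Keep DPDK ports on netdev (userspace) bridges only, and join bridges
# with patch ports rather than mixing port types on one bridge.
ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
ovs-vsctl add-port br-phy dpdk0 -- set interface dpdk0 type=dpdk

# Patch pair between br-int and br-phy (hypothetical names):
ovs-vsctl add-port br-int patch-int-phy -- \
    set interface patch-int-phy type=patch options:peer=patch-phy-int
ovs-vsctl add-port br-phy patch-phy-int -- \
    set interface patch-phy-int type=patch options:peer=patch-int-phy
```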


-Viktor

-----Original Message-----
From: Thomas F Herbert [mailto:[email protected]]
Sent: Thursday, March 02, 2017 11:42 PM
To: Tim Rozet <[email protected]>; Tikkanen, Viktor (Nokia - FI/Espoo) 
<[email protected]>
Cc: Dan Radez <[email protected]>; Feng Pan <[email protected]>
Subject: Re: OVS DPDK problems (question from #opnfv-apex)

Tim and Victor,

Please help me understand the topology and what you are trying to do.

Compute node:

br-int <-> vhost-user <-> instances

There is already one instance running connected through vhostuser port:
vhu1b808358-2e

This is an attempt to add a 2nd instance through a 2nd vhost-user port.
and ovs-vsctl crashes attempting to add the port to br-int.


However, you mention ens4f1 below, which is a PHY. Shouldn't that be a port on 
br-ex? Is that port dpdk0?

If I am correct, the problem appears to occur when adding a 2nd vhostuser port 
on br-int and has nothing to do with the phy and br-ex?

I can log on and investigate if that helps.

--Tom


On 03/02/2017 02:32 PM, Tim Rozet wrote:
> +Feng, Tom
> Viktor, is this a virtual deployment?  I haven't seen this crash before.  Once 
> we have 4.0 support for ovs dpdk with ovs 2.6 I would recommend going with 
> that.  Tom, do you want to try to debug this?  It is Colorado release with 
> 2.5.90.
>
> Thanks,
>
> Tim Rozet
> Red Hat SDN Team
>
> ----- Original Message -----
> From: "Viktor Tikkanen (Nokia - FI/Espoo)" <[email protected]>
> To: "Dan Radez" <[email protected]>
> Cc: "Tim Rozet" <[email protected]>
> Sent: Thursday, March 2, 2017 6:55:40 AM
> Subject: OVS DPDK problems (question from #opnfv-apex)
>
> Hi!
>
> I installed opnfv-apex-3.0-20161109 with following settings:
>
> [root@jumphost ~]# cat /etc/opnfv-apex/dpdk/deploy_settings.yaml
> global_params:
>   ha_enabled: true
>
> deploy_options:
>   sdn_controller: false
>   sdn_l3: false
>   tacker: false
>   congress: false
>   sfc: false
>   vpn: false
>   dataplane: ovs_dpdk
>   performance:
>     Controller:
>       kernel:
>         hugepages: 2048
>         hugepagesz: 2M
>     Compute:
>       kernel:
>         hugepagesz: 2M
>         hugepages: 8192
>         intel_iommu: 'on'
>         iommu: pt
>
> but when trying to launch an instance (cirros), following errors happened in 
> compute node:
>
> /var/log/nova/nova-compute.log:
>
> 2017-03-02 10:23:12.755 27069 INFO nova.virt.libvirt.driver
> [req-96454361-7be8-4800-9577-717a90d923b5
> 92d43f64bc4e493b8f240752b37e5225 cb9bbcb47a6147638451c1f61757bd0a - -
> -] [instance: 1923d73a-d855-42a3-af8f-22356e1d090b] Creating image
> 2017-03-02 10:23:16.667 27069 WARNING nova.virt.osinfo
> [req-96454361-7be8-4800-9577-717a90d923b5
> 92d43f64bc4e493b8f240752b37e5225 cb9bbcb47a6147638451c1f61757bd0a - -
> -] Cannot find OS information - Reason: (No configuration information
> found for operating system Empty)
> 2017-03-02 10:24:00.382 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Auditing locally
> available compute resources for node
> overcloud-novacompute-0.opnfvapex.com
> 2017-03-02 10:24:00.744 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Total usable
> vcpus: 48, total allocated vcpus: 1
> 2017-03-02 10:24:00.745 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Final resource
> view: name=overcloud-novacompute-0.opnfvapex.com phys_ram=128730MB
> used_ram=4096MB phys_disk=4365GB used_disk=20GB total_vcpus=48
> used_vcpus=1 pci_stats=[]
> 2017-03-02 10:24:00.804 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Compute_service
> record updated for
> overcloud-novacompute-0.opnfvapex.com:overcloud-novacompute-0.opnfvape
> x.com
> 2017-03-02 10:24:53.972 27069 INFO nova.compute.manager
> [req-fccb078c-e89f-46b6-b80d-e94411cd819f
> 92d43f64bc4e493b8f240752b37e5225 cb9bbcb47a6147638451c1f61757bd0a - -
> -] [instance: 1923d73a-d855-42a3-af8f-22356e1d090b] Get console output
> 2017-03-02 10:25:00.360 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Auditing locally
> available compute resources for node
> overcloud-novacompute-0.opnfvapex.com
> 2017-03-02 10:25:00.721 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Total usable
> vcpus: 48, total allocated vcpus: 1
> 2017-03-02 10:25:00.721 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Final resource
> view: name=overcloud-novacompute-0.opnfvapex.com phys_ram=128730MB
> used_ram=4096MB phys_disk=4365GB used_disk=20GB total_vcpus=48
> used_vcpus=1 pci_stats=[]
> 2017-03-02 10:25:00.970 27069 INFO nova.compute.resource_tracker
> [req-400d9b3d-31ee-44c9-8d81-5f752bc18cab - - - - -] Compute_service
> record updated for
> overcloud-novacompute-0.opnfvapex.com:overcloud-novacompute-0.opnfvape
> x.com
> 2017-03-02 10:25:16.801 27069 ERROR nova.network.linux_net 
> [req-96454361-7be8-4800-9577-717a90d923b5 92d43f64bc4e493b8f240752b37e5225 
> cb9bbcb47a6147638451c1f61757bd0a - - -] Unable to execute ['ovs-vsctl', 
> '--timeout=120', '--', '--if-exists', 'del-port', u'vhu108e4391-3c', '--', 
> 'add-port', 'br-int', u'vhu108e4391-3c', '--', 'set', 'Interface', 
> u'vhu108e4391-3c', 
> u'external-ids:iface-id=108e4391-3ccb-4c37-a18c-94e1670f1712', 
> 'external-ids:iface-status=active', 
> u'external-ids:attached-mac=fa:16:3e:ed:69:74', 
> 'external-ids:vm-uuid=1923d73a-d855-42a3-af8f-22356e1d090b', 
> 'type=dpdkvhostuser']. Exception: Unexpected error while running command.
> Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl
> --timeout=120 -- --if-exists del-port vhu108e4391-3c -- add-port
> br-int vhu108e4391-3c -- set Interface vhu108e4391-3c
> external-ids:iface-id=108e4391-3ccb-4c37-a18c-94e1670f1712
> external-ids:iface-status=active
> external-ids:attached-mac=fa:16:3e:ed:69:74
> external-ids:vm-uuid=1923d73a-d855-42a3-af8f-22356e1d090b
> type=dpdkvhostuser Exit code: 142
> Stdout: u''
> Stderr: u'2017-03-02T10:25:16Z|00002|fatal_signal|WARN|terminating with 
> signal 14 (Alarm clock)\n'
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager
> [req-96454361-7be8-4800-9577-717a90d923b5
> 92d43f64bc4e493b8f240752b37e5225 cb9bbcb47a6147638451c1f61757bd0a - -
> -] [instance: 1923d73a-d855-42a3-af8f-22356e1d090b] Instance failed to
> spawn
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b] Traceback (most recent call last):
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in 
> _build_resources
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     yield resources
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2064, in 
> _build_and_run_instance
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     block_device_info=block_device_info)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2779, in 
> spawn
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     block_device_info=block_device_info)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4915, in 
> _create_domain_and_network
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     self.plug_vifs(instance, 
> network_info)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 879, in 
> plug_vifs
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     self.vif_driver.plug(instance, vif)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 756, in plug
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     func(instance, vif)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 696, in 
> plug_vhostuser
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     self.plug_vhostuser_ovs(instance, 
> vif)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 684, in 
> plug_vhostuser_ovs
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     
> interface_type=network_model.OVS_VHOSTUSER_INTERFACE_TYPE)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1387, in 
> create_ovs_vif_port
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     interface_type))
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]   File 
> "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1366, in 
> _ovs_vsctl
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b]     raise 
> exception.OvsConfigurationFailure(inner_exception=e)
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b] OvsConfigurationFailure: OVS 
> configuration failed with: Unexpected error while running command.
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance:
> 1923d73a-d855-42a3-af8f-22356e1d090b] Command: sudo nova-rootwrap
> /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 -- --if-exists
> del-port vhu108e4391-3c -- add-port br-int vhu108e4391-3c -- set
> Interface vhu108e4391-3c
> external-ids:iface-id=108e4391-3ccb-4c37-a18c-94e1670f1712
> external-ids:iface-status=active
> external-ids:attached-mac=fa:16:3e:ed:69:74
> external-ids:vm-uuid=1923d73a-d855-42a3-af8f-22356e1d090b
> type=dpdkvhostuser
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance:
> 1923d73a-d855-42a3-af8f-22356e1d090b] Exit code: 142
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b] Stdout: u''
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b] Stderr: 
> u'2017-03-02T10:25:16Z|00002|fatal_signal|WARN|terminating with signal 14 
> (Alarm clock)\n'.
> 2017-03-02 10:25:16.802 27069 ERROR nova.compute.manager [instance:
> 1923d73a-d855-42a3-af8f-22356e1d090b]
> 2017-03-02 10:25:16.818 27069 INFO nova.compute.manager
> [req-96454361-7be8-4800-9577-717a90d923b5
> 92d43f64bc4e493b8f240752b37e5225 cb9bbcb47a6147638451c1f61757bd0a - -
> -] [instance: 1923d73a-d855-42a3-af8f-22356e1d090b] Terminating
> instance
> 2017-03-02 10:25:16.823 27069 INFO nova.virt.libvirt.driver [-] [instance: 
> 1923d73a-d855-42a3-af8f-22356e1d090b] During wait destroy, instance 
> disappeared.
>
>
>
> /var/log/messages:
>
> Mar  2 10:23:16 localhost ovs-vsctl: ovs|00001|vsctl|INFO|Called as
> /bin/ovs-vsctl --timeout=120 -- --if-exists del-port vhu108e4391-3c --
> add-port br-int vhu108e4391-3c -- set Interface vhu108e4391-3c
> external-ids:iface-id=108e4391-3ccb-4c37-a18c-94e1670f1712
> external-ids:iface-status=active
> external-ids:attached-mac=fa:16:3e:ed:69:74
> external-ids:vm-uuid=1923d73a-d855-42a3-af8f-22356e1d090b
> type=dpdkvhostuser Mar  2 10:23:16 localhost kernel:
> ovs-vswitchd[16282]: segfault at 7fabda400030 ip 00007f15ba64c6f3 sp
> 00007fff2fe39250 error 4 in librte_eal.so.2[7f15ba635000+22000]
> Mar  2 10:23:18 localhost ovs-vswitchd[16281]:
> ovs|00003|daemon_unix(monitor)|ERR|1 crashes: pid 16282 died, killed
> (Segmentation fault), core dumped, restarting Mar  2 10:23:18 localhost 
> kernel: device ovs-netdev entered promiscuous mode ...
>
>
> Have you ever seen this kind of crashes?
>
>
> More outputs from the compute node:
>
> [root@overcloud-novacompute-0 heat-admin]# ovs-vsctl show
> c16bdd05-5ae5-4421-9957-cbad7371f9f8
>     Bridge br-int
>         fail_mode: secure
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port int-br-ex
>             Interface int-br-ex
>                 type: patch
>                 options: {peer=phy-br-ex}
>         Port "vhu1b808358-2e"
>             Interface "vhu1b808358-2e"
>                 type: dpdkvhostuser
>     Bridge br-phy
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>         Port br-phy
>             Interface br-phy
>                 type: internal
>         Port patch-br-phy
>             Interface patch-br-phy
>                 type: patch
>                 options: {peer=patch-br-tun}
>     Bridge br-ex
>         Port phy-br-ex
>             Interface phy-br-ex
>                 type: patch
>                 options: {peer=int-br-ex}
>         Port br-ex
>             Interface br-ex
>                 type: internal
>     Bridge br-tun
>         fail_mode: secure
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>         Port "vxlan-0b000019"
>             Interface "vxlan-0b000019"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, 
> local_ip="11.0.0.22", out_key=flow, remote_ip="11.0.0.25"}
>         Port "vxlan-0b000018"
>             Interface "vxlan-0b000018"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, 
> local_ip="11.0.0.22", out_key=flow, remote_ip="11.0.0.24"}
>         Port "vxlan-0b000017"
>             Interface "vxlan-0b000017"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, 
> local_ip="11.0.0.22", out_key=flow, remote_ip="11.0.0.23"}
>         Port "vxlan-0b000015"
>             Interface "vxlan-0b000015"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, 
> local_ip="11.0.0.22", out_key=flow, remote_ip="11.0.0.21"}
>         Port patch-br-tun
>             Interface patch-br-tun
>                 type: patch
>                 options: {peer=patch-br-phy}
>         Port br-tun
>             Interface br-tun
>                 type: internal
>     ovs_version: "2.5.90"
> [root@overcloud-novacompute-0 heat-admin]#
> [root@overcloud-novacompute-0 heat-admin]# rpm -qa|grep dpdk
> dpdk-examples-16.04.0-1.el7.centos.x86_64
> dpdk-16.04.0-1.el7.centos.x86_64
> dpdk-tools-16.04.0-1.el7.centos.x86_64
> dpdk-devel-16.04.0-1.el7.centos.x86_64
> [root@overcloud-novacompute-0 heat-admin]# rpm -qa|grep openv
> python-openvswitch-2.5.0-2.el7.noarch
> openstack-neutron-openvswitch-8.1.3-0.20160813125040.5e6168f.el7.cento
> s.noarch
> openvswitch-2.5.90-0.12032.gitc61e93d6.1.el7.centos.x86_64
> [root@overcloud-novacompute-0 heat-admin]# ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: ens255f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
> qlen 1000
>     link/ether 54:ab:3a:61:ed:08 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.37.12/24 brd 192.168.37.255 scope global ens255f0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::56ab:3aff:fe61:ed08/64 scope link
>        valid_lft forever preferred_lft forever
> 3: ens255f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
> qlen 1000
>     link/ether 54:ab:3a:61:ed:09 brd ff:ff:ff:ff:ff:ff
>     inet 12.0.0.22/24 brd 12.0.0.255 scope global ens255f1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::56ab:3aff:fe61:ed09/64 scope link
>        valid_lft forever preferred_lft forever
> 4: ens1f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN 
> qlen 1000
>     link/ether 90:e2:ba:b3:71:e8 brd ff:ff:ff:ff:ff:ff
> 5: ens1f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN 
> qlen 1000
>     link/ether 90:e2:ba:b3:71:e9 brd ff:ff:ff:ff:ff:ff
> 6: ens4f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 
> 1000
>     link/ether 54:ab:3a:6f:8e:d0 brd ff:ff:ff:ff:ff:ff
>     inet 192.0.2.8/24 brd 192.0.2.255 scope global ens4f0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::56ab:3aff:fe6f:8ed0/64 scope link
>        valid_lft forever preferred_lft forever
> 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>     link/ether ee:4c:1d:2a:3b:80 brd ff:ff:ff:ff:ff:ff
> 19: ovs-netdev: <BROADCAST,PROMISC> mtu 1500 qdisc noop state DOWN qlen 500
>     link/ether 9a:b4:aa:4b:e0:7b brd ff:ff:ff:ff:ff:ff
> [root@overcloud-novacompute-0 heat-admin]#
> [root@overcloud-novacompute-0 heat-admin]# ethtool -i ens4f0
> driver: ixgbe
> version: 4.0.1-k-rh7.2
> firmware-version: 0x800004e0
> bus-info: 0000:05:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: no
> [root@overcloud-novacompute-0 heat-admin]# ls 
> /sys/bus/pci/devices/0000\:05\:00.0/
> broken_parity_status  consistent_dma_mask_bits  dma_mask_bits    enable       
>   iommu_group    local_cpus  msi_irqs   power   rescan    resource0  rom      
>        subsystem         uevent
> class                 d3cold_allowed            driver           
> firmware_node  irq            modalias    net        ptp     reset     
> resource2  sriov_numvfs    subsystem_device  vendor
> config                device                    driver_override  iommu        
>   local_cpulist  msi_bus     numa_node  remove  resource  resource4  
> sriov_totalvfs  subsystem_vendor  vpd
> [root@overcloud-novacompute-0 heat-admin]# ls 
> /sys/bus/pci/devices/0000\:05\:00.1/
> broken_parity_status  consistent_dma_mask_bits  dma_mask_bits    enable       
>   iommu_group    local_cpus  numa_node  rescan    resource0  rom             
> subsystem         uevent  vpd
> class                 d3cold_allowed            driver           
> firmware_node  irq            modalias    power      reset     resource2  
> sriov_numvfs    subsystem_device  uio
> config                device                    driver_override  iommu        
>   local_cpulist  msi_bus     remove     resource  resource4  sriov_totalvfs  
> subsystem_vendor  vendor
> [root@overcloud-novacompute-0 heat-admin]#
> [root@overcloud-novacompute-0 heat-admin]# lsmod|grep 'uio\|open'
> openvswitch            84543  0
> libcrc32c              12644  1 openvswitch
> uio_pci_generic        12588  1
> uio                    19259  3 uio_pci_generic
> [root@overcloud-novacompute-0 heat-admin]#
>
> (the interface used by dpdk is ens4f1)
>
> -Viktor
>
>

--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*


_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
