Re: [Openstack] [PackStack][Neutron] error: port not present in bridge br-int

2018-11-06 Thread Budai Laszlo

Hi,

We had a similar situation when the ``host`` entry in neutron.conf was
different from the ``host`` entry in nova.conf on the compute nodes.
So if you set a ``host`` entry in one of these files, make sure the
other file contains the same ``host`` value.

See
https://docs.openstack.org/neutron/rocky/configuration/neutron.html#DEFAULT.host
and
https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.host
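
For example, a minimal sketch (the hostname below is only illustrative; the point
is simply that both files agree on the same value):

    # /etc/nova/nova.conf on the compute node
    [DEFAULT]
    host = compute01.example.com

    # /etc/neutron/neutron.conf on the same compute node
    [DEFAULT]
    host = compute01.example.com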

Kind regards,
Laszlo

On 11/6/18 6:04 PM, Akihiro Motoki wrote:

How is your [ovs] bridge_mappings setting in your configuration?
A flat network requires a corresponding bridge_mappings entry, and you also need to
create the corresponding bridge in advance.


On Tue, Nov 6, 2018 at 21:31 Soheil Pourbafrani <soheil.i...@gmail.com> wrote:

Hi, I initialized an instance using a defined flat network and I got the
error:
port not present in bridge br-int

I have a 2 node deployment (controller + network, compute).

The output of the command ovs-vsctl show is:
*On the network node*
d3a06f16-d727-4333-9de6-cf4ce3b0ce36
     Manager "ptcp:6640:127.0.0.1"
         is_connected: true
     Bridge br-ex
          Controller "tcp:127.0.0.1:6633"
             is_connected: true
         fail_mode: secure
         Port br-ex
             Interface br-ex
                 type: internal
         Port phy-br-ex
             Interface phy-br-ex
                 type: patch
                 options: {peer=int-br-ex}
         Port "ens33"
             Interface "ens33"
     Bridge br-int
          Controller "tcp:127.0.0.1:6633"
             is_connected: true
         fail_mode: secure
         Port br-int
             Interface br-int
                 type: internal
         Port patch-tun
             Interface patch-tun
                 type: patch
                 options: {peer=patch-int}
         Port int-br-ex
             Interface int-br-ex
                 type: patch
                 options: {peer=phy-br-ex}
         Port "tapefb98047-57"
             tag: 1
             Interface "tapefb98047-57"
                 type: internal
         Port "qr-d62d0c14-51"
             tag: 1
             Interface "qr-d62d0c14-51"
                 type: internal
         Port "qg-5468707b-6d"
             tag: 2
             Interface "qg-5468707b-6d"
                 type: internal
     Bridge br-tun
          Controller "tcp:127.0.0.1:6633"
             is_connected: true
         fail_mode: secure
         Port patch-int
             Interface patch-int
                 type: patch
                 options: {peer=patch-tun}
         Port br-tun
             Interface br-tun
                 type: internal
         Port "vxlan-c0a8003d"
             Interface "vxlan-c0a8003d"
                 type: vxlan
                 options: {df_default="true", in_key=flow, local_ip="192.168.0.62", 
out_key=flow, remote_ip="192.168.0.61"}
     ovs_version: "2.9.0"

*On the Compute node*

55e62867-9c88-4925-b49c-55fb74d174bd
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "enp2s0"
            Interface "enp2s0"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a8003e"
            Interface "vxlan-c0a8003e"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.61",
out_key=flow, remote_ip="192.168.0.62"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.9.0"

Re: [Openstack] [openstack client] command completion

2018-11-06 Thread Bernd Bausch
Thanks for educating me, Doug and Jeremy. Bug submitted.

Just by the way: I checked [1] and [2] to find out where bugs might be
tracked. The former doesn't mention bugs, the latter is outdated.
Finding the way through the maze of OpenStack information is not always
easy.

[1]
https://governance.openstack.org/tc/reference/projects/openstackclient.html

[2] https://wiki.openstack.org/wiki/OpenStackClient

On 11/6/2018 10:09 PM, Doug Hellmann wrote:
> Bernd Bausch  writes:
>
>> Rocky Devstack sets up bash command completion for the openstack client,
>> e.g. /openstack net[TAB]/ expands to /network/. Sadly, there is no
>> command completion when using the client interactively:
>>
>> $ openstack
>> (openstack) net[TAB][TAB][TAB][TAB][TAB] [key breaks]   # nothing happens
>>
>> But I faintly remember that it worked in earlier releases. Can this be
>> configured, and how? Is this a bug?
> It seems like one.
>
>> By the way, there used to be a /python-openstackclient/ section in
>> Launchpad. It doesn't exist anymore. Where are bugs tracked these days?
> According to [1] the bug tracker has moved to storyboard.
>
> Doug
>
> [1] 
> http://git.openstack.org/cgit/openstack/python-openstackclient/tree/README.rst#n41






Re: [Openstack] DHCP not accessible on new compute node.

2018-11-06 Thread Torin Woltjer
So I did further ping tests and explored differences between my working compute 
nodes and my non-working compute node. Firstly, it seems that the VXLAN is 
working between the nonworking compute node and controller nodes. After 
manually setting IP addresses, I can ping from an instance on the non working 
node to 172.16.1.1 (neutron gateway); when running tcpdump I can see icmp on:
-compute's bridge interface
-compute's vxlan interface
-controller's vxlan interface
-controller's bridge interface
-controller's qrouter namespace

This behavior is expected and is the same for instances on the working compute 
nodes. However if I try to ping 172.16.1.2 (neutron dhcp) from an instance on 
the nonworking compute node, pings do not flow. If I use tcpdump to listen for 
pings I cannot hear any, even listening on the compute node itself; this 
includes listening on the vxlan, bridge, and the tap device directly. Once I 
try to ping in reverse, from the dhcp netns on the controller to the instance 
on the non-working compute node, pings begin to flow. The same is true for 
pings between the instance on the nonworking compute and an instance on the 
working compute. Pings do not flow, until the working instance pings. Once 
pings are flowing between the nonworking instance and neutron DHCP; I run 
dhclient on the instance and start listening for DHCP requests with tcpdump, 
and I hear them on:
-compute's bridge interface
-compute's vxlan interface
They don't make it to the controller node.

I've re-enabled l2-population on the controllers and rebooted them just in
case, but the problem persists. A diff of /etc/ on all compute nodes shows that
all OpenStack and networking-related configuration is effectively identical.
The last difference between the non-working compute node and the working
compute nodes, as far as I can tell, is that the new node has a different
network card. The working nodes use a "Broadcom Limited NetXtreme II BCM57712 10
Gigabit Ethernet" and the non-working node uses a "NetXen Incorporated NX3031
Multifunction 1/10-Gigabit Server Adapter".

Are there any known issues with neutron and this brand of network adapter? I 
looked at the capabilities on both adapters and here are the differences:

 Feature                          Broadcom     NetXen
 tx-tcp-ecn-segmentation          on           off [fixed]
 rx-vlan-offload                  on [fixed]   off [fixed]
 receive-hashing                  on           off [fixed]
 rx-vlan-filter                   on           off [fixed]
 tx-gre-segmentation              on           off [fixed]
 tx-gre-csum-segmentation         on           off [fixed]
 tx-ipxip4-segmentation           on           off [fixed]
 tx-udp_tnl-segmentation          on           off [fixed]
 tx-udp_tnl-csum-segmentation     on           off [fixed]
 tx-gso-partial                   on           off [fixed]
 loopback                         off          off [fixed]
 rx-udp_tunnel-port-offload       on           off [fixed]
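
Since traffic from the new node only starts flowing after the remote side has
transmitted first, it may also be worth comparing the forwarding entries that
l2population should have pre-seeded on a working versus the non-working node.
A rough check (interface names below are placeholders, not actual values from
this deployment):

    # run on a working and on the non-working compute node
    bridge fdb show dev <vxlan-interface>     # l2pop should pre-seed entries for the remote VTEPs
    ip -d link show <vxlan-interface>         # confirm VNI, local IP and dstport match the other nodes
    ethtool -k <physical-nic> | grep -E 'udp_tnl|gso-partial'   # the offload differences listed above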




Re: [Openstack] [PackStack][Neutron] error: port not present in bridge br-int

2018-11-06 Thread Akihiro Motoki
How is your [ovs] bridge_mappings setting in your configuration?
A flat network requires a corresponding bridge_mappings entry, and you also
need to create the corresponding bridge in advance.
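
Something along these lines (the physnet name is only an example and must match
the flat network's provider:physical_network; the file path is the usual one on a
PackStack/ML2-OVS install):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini on the compute node
    [ovs]
    bridge_mappings = physnet1:br-ex

    # and the bridge itself must already exist, with the physical NIC attached:
    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex enp2s0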


On Tue, Nov 6, 2018 at 21:31 Soheil Pourbafrani wrote:

> Hi, I initialized an instance using a defined flat network and I got the
> error:
> port not present in bridge br-int
>
> I have a 2 node deployment (controller + network, compute).
>
> The output of the command ovs-vsctl show is
>
> *On the network node*
> d3a06f16-d727-4333-9de6-cf4ce3b0ce36
> Manager "ptcp:6640:127.0.0.1"
> is_connected: true
> Bridge br-ex
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port br-ex
> Interface br-ex
> type: internal
> Port phy-br-ex
> Interface phy-br-ex
> type: patch
> options: {peer=int-br-ex}
> Port "ens33"
> Interface "ens33"
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port br-int
> Interface br-int
> type: internal
> Port patch-tun
> Interface patch-tun
> type: patch
> options: {peer=patch-int}
> Port int-br-ex
> Interface int-br-ex
> type: patch
> options: {peer=phy-br-ex}
> Port "tapefb98047-57"
> tag: 1
> Interface "tapefb98047-57"
> type: internal
> Port "qr-d62d0c14-51"
> tag: 1
> Interface "qr-d62d0c14-51"
> type: internal
> Port "qg-5468707b-6d"
> tag: 2
> Interface "qg-5468707b-6d"
> type: internal
> Bridge br-tun
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port patch-int
> Interface patch-int
> type: patch
> options: {peer=patch-tun}
> Port br-tun
> Interface br-tun
> type: internal
> Port "vxlan-c0a8003d"
> Interface "vxlan-c0a8003d"
> type: vxlan
> options: {df_default="true", in_key=flow,
> local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"}
> ovs_version: "2.9.0"
>
> *On the Compute node*
>
> 55e62867-9c88-4925-b49c-55fb74d174bd
> Manager "ptcp:6640:127.0.0.1"
> is_connected: true
> Bridge br-ex
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port phy-br-ex
> Interface phy-br-ex
> type: patch
> options: {peer=int-br-ex}
> Port "enp2s0"
> Interface "enp2s0"
> Port br-ex
> Interface br-ex
> type: internal
> Bridge br-tun
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port br-tun
> Interface br-tun
> type: internal
> Port "vxlan-c0a8003e"
> Interface "vxlan-c0a8003e"
> type: vxlan
> options: {df_default="true", in_key=flow,
> local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}
> Port patch-int
> Interface patch-int
> type: patch
> options: {peer=patch-tun}
> Bridge br-int
> Controller "tcp:127.0.0.1:6633"
> is_connected: true
> fail_mode: secure
> Port int-br-ex
> Interface int-br-ex
> type: patch
> options: {peer=phy-br-ex}
> Port br-int
> Interface br-int
> type: internal
> Port patch-tun
> Interface patch-tun
> type: patch
> options: {peer=patch-int}
> ovs_version: "2.9.0"
>
> How can I solve the problem?
>
> Thanks


Re: [Openstack] VMs cannot fetch metadata

2018-11-06 Thread Jay Pipes

https://bugs.launchpad.net/neutron/+bug/1777640

Best,
-jay

On 11/06/2018 08:21 AM, Terry Lundin wrote:

Hi all,

I've been struggling with instances suddenly not being able to fetch
metadata from OpenStack Queens (this worked fine earlier).

Newly created VMs fail to connect to the magic IP, e.g.
http://169.254.169.254/, and won't initialize properly. Subsequently, SSH
login fails since no key is uploaded.


The symptom is failed requests in the log

*Cirros:*
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.0.18...
Lease of 10.0.0.18 obtained, lease time 86400
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.1"
cirros-ds 'net' up at 0.94
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.94. request failed
failed 2/20: up 3.01. request failed
failed 3/20: up 5.03. request failed
failed 4/20: up 7.04. request failed

*..and on Centos6:*
ci-info: | Route |   Destination   | Gateway  |     Genmask     | Interface | Flags |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
ci-info: |   0   | 169.254.169.254 | 10.0.0.1 | 255.255.255.255 |    eth0   |  UGH  |
ci-info: |   1   |     10.0.0.0    | 0.0.0.0  |  255.255.255.0  |    eth0   |   U   |
ci-info: |   2   |     0.0.0.0     | 10.0.0.1 |     0.0.0.0     |    eth0   |   UG  |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
2018-11-06 08:10:07,892 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: 
unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:08,906 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: 
unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:09,925 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: 
unexpected error ['NoneType' object has no attribute
...

Using curl manually, e.g. 'curl http://169.254.169.254/openstack/', one
gets:

curl: (52) Empty reply from server

*At the same time this error is showing up in the syslog on the controller:*

Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, 
in fire_timers

Nov  6 12:51:01 controller neutron-metadata-agent[3094]: timer()
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
59, in __call__

Nov  6 12:51:01 controller neutron-metadata-agent[3094]: cb(*args, **kw)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
219, in main
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: result = 
function(*args, **kwargs)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 793, in 
process_request
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: 
proto.__init__(conn_state, self)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: TypeError: 
__init__() takes exactly 4 arguments (3 given)


*Neither rebooting the controller, reinstalling neutron, nor restarting
the services does anything to fix this.*


Has anyone else seen this? We are using Queens with a single controller.

Kind Regards

Terje Lundin








Re: [Openstack] [openstack client] command completion

2018-11-06 Thread Jeremy Stanley
On 2018-11-06 10:58:04 +0900 (+0900), Bernd Bausch wrote:
[...]
> By the way, there used to be a /python-openstackclient/ section in
> Launchpad. It doesn't exist anymore. Where are bugs tracked these days?

At the top of https://launchpad.net/python-openstackclient it says,
"Note that all Launchpad activity (just bugs & blueprints really)
has been migrated to OpenStack's Storyboard:
https://storyboard.openstack.org/#!/project_group/80"

I suppose now that project group name URL support is in for SB, they
could update that to the more memorable
https://storyboard.openstack.org/#!/project_group/openstackclient
instead.
-- 
Jeremy Stanley




[Openstack] VMs cannot fetch metadata

2018-11-06 Thread Terry Lundin

Hi all,

I've been struggling with instances suddenly not being able to fetch
metadata from OpenStack Queens (this worked fine earlier).

Newly created VMs fail to connect to the magic IP, e.g.
http://169.254.169.254/, and won't initialize properly. Subsequently, SSH
login fails since no key is uploaded.


The symptom is failed requests in the log

*Cirros:*
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.0.18...
Lease of 10.0.0.18 obtained, lease time 86400
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.1"
cirros-ds 'net' up at 0.94
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.94. request failed
failed 2/20: up 3.01. request failed
failed 3/20: up 5.03. request failed
failed 4/20: up 7.04. request failed

*..and on Centos6:*
ci-info: | Route |   Destination   | Gateway  |     Genmask     | Interface | Flags |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
ci-info: |   0   | 169.254.169.254 | 10.0.0.1 | 255.255.255.255 |    eth0   |  UGH  |
ci-info: |   1   |     10.0.0.0    | 0.0.0.0  |  255.255.255.0  |    eth0   |   U   |
ci-info: |   2   |     0.0.0.0     | 10.0.0.1 |     0.0.0.0     |    eth0   |   UG  |
ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
2018-11-06 08:10:07,892 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: 
unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:08,906 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: 
unexpected error ['NoneType' object has no attribute 'status_code']
2018-11-06 08:10:09,925 - url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: 
unexpected error ['NoneType' object has no attribute
...

Using curl manually, e.g. 'curl http://169.254.169.254/openstack/', one
gets:

curl: (52) Empty reply from server

*At the same time this error is showing up in the syslog on the controller:*

Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, 
in fire_timers

Nov  6 12:51:01 controller neutron-metadata-agent[3094]: timer()
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
59, in __call__

Nov  6 12:51:01 controller neutron-metadata-agent[3094]: cb(*args, **kw)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
219, in main
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: result = 
function(*args, **kwargs)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 793, in 
process_request
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: 
proto.__init__(conn_state, self)
Nov  6 12:51:01 controller neutron-metadata-agent[3094]: TypeError: 
__init__() takes exactly 4 arguments (3 given)


*Neither rebooting the controller, reinstalling neutron, nor restarting
the services does anything to fix this.*
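
From the traceback, the failure is inside eventlet's wsgi loop, so one thing
that might be worth checking is which eventlet version the metadata agent is
actually loading versus what Queens expects (the version shown below is only
illustrative output, not a recommendation):

    $ pip show eventlet | grep -i version
    Version: 0.24.1    # example output; compare against the Queens upper-constraints pin for eventlet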


Has anyone else seen this? We are using Queens with a single controller.

Kind Regards

Terje Lundin






Re: [Openstack] [openstack client] command completion

2018-11-06 Thread Doug Hellmann
Bernd Bausch  writes:

> Rocky Devstack sets up bash command completion for the openstack client,
> e.g. /openstack net[TAB]/ expands to /network/. Sadly, there is no
> command completion when using the client interactively:
>
> $ openstack
> (openstack) net[TAB][TAB][TAB][TAB][TAB] [key breaks]   # nothing happens
>
> But I faintly remember that it worked in earlier releases. Can this be
> configured, and how? Is this a bug?

It seems like one.
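
For the non-interactive shell completion (the part that does work), the script
Devstack installs can be regenerated with the client itself; a rough sketch,
where the target path is an assumption and varies by distribution:

    openstack complete > osc.bash_completion
    sudo cp osc.bash_completion /etc/bash_completion.d/   # path is distro-dependent
    source /etc/bash_completion.d/osc.bash_completion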

>
> By the way, there used to be a /python-openstackclient/ section in
> Launchpad. It doesn't exist anymore. Where are bugs tracked these days?

According to [1] the bug tracker has moved to storyboard.

Doug

[1] 
http://git.openstack.org/cgit/openstack/python-openstackclient/tree/README.rst#n41



[Openstack] [PackStack][Neutron] error: port not present in bridge br-int

2018-11-06 Thread Soheil Pourbafrani
Hi, I initialized an instance using a defined flat network and I got the
error:
port not present in bridge br-int

I have a 2 node deployment (controller + network, compute).

The output of the command ovs-vsctl show is

*On the network node*
d3a06f16-d727-4333-9de6-cf4ce3b0ce36
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "ens33"
Interface "ens33"
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "tapefb98047-57"
tag: 1
Interface "tapefb98047-57"
type: internal
Port "qr-d62d0c14-51"
tag: 1
Interface "qr-d62d0c14-51"
type: internal
Port "qg-5468707b-6d"
tag: 2
Interface "qg-5468707b-6d"
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port "vxlan-c0a8003d"
Interface "vxlan-c0a8003d"
type: vxlan
options: {df_default="true", in_key=flow,
local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"}
ovs_version: "2.9.0"

*On the Compute node*

55e62867-9c88-4925-b49c-55fb74d174bd
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "enp2s0"
Interface "enp2s0"
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port "vxlan-c0a8003e"
Interface "vxlan-c0a8003e"
type: vxlan
options: {df_default="true", in_key=flow,
local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
ovs_version: "2.9.0"

How can I solve the problem?

Thanks