[Yahoo-eng-team] [Bug 1825435] Re: TestRPC unit tests intermittently fail with "'>' not supported between instances of 'NoneType' and 'datetime.datetime'" - maybe due to "Fatal Python error: Cannot recover from stack overflow."

2019-04-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655843
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1a4a521fefb71aa0817770f265063a9150653743
Submitter: Zuul
Branch: master

commit 1a4a521fefb71aa0817770f265063a9150653743
Author: Stephen Finucane 
Date:   Fri Apr 26 12:10:11 2019 +0100

test_rpc: Stop f** with global state

We're occasionally seeing stacktraces like this in our tests:

  Fatal Python error: Cannot recover from stack overflow.

  Current thread 0x7fe66549f740 (most recent call first):
    File "...nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2614 in _get
    File "...nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2183 in __getattr__
    File "...nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2614 in _get
    File "...nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2183 in __getattr__
    File "...nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2614 in _get
    ...

From a look at the oslo.config source, this seems to be occurring
because 'ConfigOpts.__cache' is somehow undefined, which results in the
'_get' method attempting to call '__getattr__' [1], which calls '_get'
[2], which calls '__getattr__' and so on.
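
For illustration, the following self-contained toy example (not the oslo.config code itself) shows how an undefined attribute combined with a '__getattr__' fallback produces exactly this kind of mutual recursion:

    class ToyConfigOpts(object):
        # Suppose the cache attribute is never set (or has been torn down
        # by a test meddling with global state), so every access to it
        # falls through to __getattr__.

        def _get(self, name):
            # Reading self._cache triggers __getattr__ because the
            # attribute does not exist on the instance...
            return self._cache[name]

        def __getattr__(self, name):
            # ...and __getattr__ delegates back to _get, so the two methods
            # keep calling each other until the interpreter gives up
            # (RecursionError, or a fatal stack overflow at the C level).
            return self._get(name)

    # ToyConfigOpts().anything  # would recurse until RecursionError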

The exact reason this is happening isn't clear, but what is clear is
that how we handle global config options in the tests that are failing
is very strange and potentially subject to race conditions. We have a
clear pattern for mocking everything and anything - the mock module -
and we should be using this here. Start doing so, reworking a lot of the
tests in the process, in order to avoid messing with oslo.config and
triggering the issue entirely.

[1] https://github.com/openstack/oslo.config/blob/6.8.1/oslo_config/cfg.py#L2614
[2] https://github.com/openstack/oslo.config/blob/6.8.1/oslo_config/cfg.py#L2183
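
As a rough sketch of the mock-based approach the commit message advocates (the module layout here is hypothetical, not the actual nova test code), patching the module-level config object keeps each change scoped to one test and is undone automatically:

    from unittest import mock

    # Hypothetical module under test: assume "rpc.py" exposes a module-level
    # CONF object and a get_transport_url() helper that reads CONF at call time.
    import rpc


    def test_get_transport_url_null():
        # Patch the object the code actually reads instead of mutating the
        # global oslo.config registry; mock restores the original when the
        # context manager exits, so no state leaks between tests.
        with mock.patch.object(rpc, 'CONF') as mock_conf:
            mock_conf.transport_url = None
            rpc.get_transport_url()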

Change-Id: I468cef94185a1b59f379ca527050450e03664c67
Signed-off-by: Stephen Finucane 
Closes-Bug: #1825435


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825435

Title:
  TestRPC unit tests intermittently fail with "'>' not supported between
  instances of 'NoneType' and 'datetime.datetime'" - maybe due to "Fatal
  Python error: Cannot recover from stack overflow."

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/45/649345/7/check/openstack-tox-py36/ba15c17
  /job-output.txt.gz#_2019-04-18_18_12_52_121182

  2019-04-18 18:12:52.121182 | ubuntu-bionic | {3} nova.tests.unit.test_rpc.TestRPC.test_get_transport_url_null [] ... inprogress
  2019-04-18 18:12:52.138024 | ubuntu-bionic | '>' not supported between instances of 'NoneType' and 'datetime.datetime'
  2019-04-18 18:12:52.353474 | ubuntu-bionic | ERROR: InvocationError for command /home/zuul/src/git.openstack.org/openstack/nova/.tox/py36/bin/stestr run (exited with code 1)

  Which seems to kill the test and stestr.

  There is also a stack overflow here:

  http://logs.openstack.org/45/649345/7/check/openstack-tox-py36/ba15c17
  /job-output.txt.gz#_2019-04-18_18_10_53_423952

  2019-04-18 18:10:53.423952 | ubuntu-bionic | Fatal Python error: Cannot recover from stack overflow.
  2019-04-18 18:10:53.423999 | ubuntu-bionic |
  2019-04-18 18:10:53.424162 | ubuntu-bionic | Current thread 0x7f34d2bcc740 (most recent call first):
  2019-04-18 18:10:53.424472 | ubuntu-bionic |   File "/home/zuul/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2614 in _get
  2019-04-18 18:10:53.424794 | ubuntu-bionic |   File "/home/zuul/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2183 in __getattr__
  ...
  2019-04-18 18:10:53.476982 | ubuntu-bionic |   File "/home/zuul/src/git.openstack.org/openstack/nova/.tox/py36/lib/python3.6/site-packages/oslo_config/cfg.py", line 2183 in __getattr__
  2019-04-18 18:10:53.477236 | ubuntu-bionic |   ...
  2019-04-18 18:10:53.477293 | ubuntu-bionic | Aborted

  The stack overflow seems to be just nova since April 15:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22%7C%20Fatal%20Python%20error%3A%20Cannot%20recover%20from%20stack%20overflow.%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  I don't see anything that looks related for nova changes around April
  15, but maybe something that was released to upper-constraints on
  April 15:

  
https://github.com/openstack/requirements/commit/a96ee0e258aafb2880149b3e25dd5959f7b37c09

  Nothing in there looks obvious though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825435/+subscriptions

[Yahoo-eng-team] [Bug 1826726] [NEW] [L3] stale device/process remain in network node after router removed

2019-04-28 Thread LIU Yulong
Public bug reported:

[L3] stale device/process remain in network node after router removed

ENV: stable/queens
Step to reproduce:
1. create HA router
2. attach router interface to subnet
3. remove router interface from subnet
4. delete router

Run these 4 steps a few times and you may see that ha-ports and
neutron-keepalived-state-change processes remain on the network node
after the router has been completely removed.

Test script:

function create_clean_net_struct()
{

  neutron net-create scale-test-net-${1}
  subnet_id=`neutron subnet-create --name scale-test-subnet-${1} scale-test-net-${1} 192.168.${1}.0/24|grep "| id"|awk '{print $4}'`
  router_id=`neutron router-create scale-test-router-${1}|grep "| id"|awk '{print $4}'`
  neutron router-interface-add $router_id subnet=$subnet_id

  neutron router-interface-delete $router_id subnet=$subnet_id
  neutron router-delete $router_id
  neutron subnet-delete scale-test-subnet-${1}
  neutron net-delete scale-test-net-${1}
}
create_clean_net_struct $1

** Affects: neutron
 Importance: Undecided
 Assignee: LIU Yulong (dragon889)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => LIU Yulong (dragon889)

** Summary changed:

- [L3] stable device remain in network node after router removed
+ [L3] stable device/process remain in network node after router removed

** Summary changed:

- [L3] stable device/process remain in network node after router removed
+ [L3] stale device/process remain in network node after router removed


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1826726

Title:
  [L3] stale device/process remain in network node after router removed

Status in neutron:
  New

Bug description:
  [L3] stale device/process remain in network node after router removed

  ENV: stable/queens
  Step to reproduce:
  1. create HA router
  2. attach router interface to subnet
  3. remove router interface from subnet
  4. delete router

  Run these 4 steps a few times and you may see that ha-ports and
  neutron-keepalived-state-change processes remain on the network node
  after the router has been completely removed.

  Test script:

  function create_clean_net_struct()
  {

    neutron net-create scale-test-net-${1}
    subnet_id=`neutron subnet-create --name scale-test-subnet-${1} scale-test-net-${1} 192.168.${1}.0/24|grep "| id"|awk '{print $4}'`
    router_id=`neutron router-create scale-test-router-${1}|grep "| id"|awk '{print $4}'`
    neutron router-interface-add $router_id subnet=$subnet_id

    neutron router-interface-delete $router_id subnet=$subnet_id
    neutron router-delete $router_id
    neutron subnet-delete scale-test-subnet-${1}
    neutron net-delete scale-test-net-${1}
  }
  create_clean_net_struct $1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1826726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822801] Re: Baremetal port's host_id get updated during instance restart

2019-04-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/649345
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=091aa3289694a27704e48931a579f50a3179b036
Submitter: Zuul
Branch: master

commit 091aa3289694a27704e48931a579f50a3179b036
Author: Hamdy Khader 
Date:   Tue Apr 2 17:26:14 2019 +0300

Do not perform port update in case of baremetal instance.

In case of a baremetal instance, the instance's port binding:host_id
gets updated during instance reboot to the nova compute host id by
the periodic task: _heal_instance_info_cache. This regression was
introduced in commit: I75fd15ac2a29e420c09499f2c41d11259ca811ae

This is an undesirable change: the ironic virt driver did the original
port binding, so nova should not update the value.
For a baremetal port, the binding:host_id represents the ironic
node_uuid. For a SmartNIC (baremetal) port [1], the binding:host_id
represents the SmartNIC hostname and MUST not change, since ironic
relies on that information, as does the Neutron agent that runs on
the SmartNIC.

A new API method, "manages_port_bindings()", was added to ComputeDriver;
it defaults to False and is overridden in IronicDriver to return True.

A call to this API method is now made in _heal_instance_info_cache() to
prevent port updates for instance ports when the underlying
ComputeDriver manages the port binding.

[1] I658754f7f8c74087b0aabfdef222a2c0b5698541

Change-Id: I47d1aba17cd2e9fff67846cc243c8fbd9ac21659
Closes-Bug: #1822801
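
For illustration, a minimal sketch of the pattern the commit describes (simplified; the heal_info_cache helper below is a stand-in for _heal_instance_info_cache(), not the actual nova code):

    class ComputeDriver(object):
        def manages_port_bindings(self):
            # Default: nova owns binding:host_id and may refresh it during
            # the periodic info-cache healing task.
            return False


    class IronicDriver(ComputeDriver):
        def manages_port_bindings(self):
            # Ironic sets binding:host_id itself (the node UUID, or the
            # SmartNIC hostname for SmartNIC ports), so nova must not touch it.
            return True


    def heal_info_cache(driver, update_port_binding):
        # Stand-in for the guard added to _heal_instance_info_cache():
        # skip the port update when the driver owns the binding.
        if driver.manages_port_bindings():
            return
        update_port_binding()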


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822801

Title:
  Baremetal port's host_id get updated during instance restart

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  In the case of a baremetal overcloud, the instance ports get updated
  during instance reboot [1] so that binding:host_id is changed to the
  nova-compute host_id.

  As a result, a baremetal port's host_id ends up indicating the nova
  host_id instead of the ironic node uuid.

  For a normal instance, or even a plain baremetal instance, this would
  not be a problem, but for a SmartNIC baremetal instance the port's
  host_id is needed to communicate with the relevant Neutron agent
  running on the SmartNIC, as the port's host_id contains the SmartNIC
  host name.

  
  Reproduce:
  - deploy baremetal overcloud
  - create baremetal instance
  - after creation completes, check port details and notice binding_host_id=overcloud-controller-0.localdomain

  
  [1] https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L7191

  
  Nova version:
  ()[root@overcloud-controller-0 /]# rpm -qa | grep nova
  puppet-nova-14.4.1-0.20190322112825.740f45a.el7.noarch
  python2-nova-19.0.0-0.20190322140639.d7c8924.el7.noarch
  python2-novajoin-1.1.2-0.20190322123935.e8b18c4.el7.noarch
  openstack-nova-compute-19.0.0-0.20190322140639.d7c8924.el7.noarch
  python2-novaclient-13.0.0-0.20190311121537.62bf880.el7.noarch
  openstack-nova-common-19.0.0-0.20190322140639.d7c8924.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825088] Re: [port forwarding] should not process port forwarding if snat node only run DHCP

2019-04-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/653423
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4082e280c8574708ec644f9eaa1cdb4a2169141c
Submitter: Zuul
Branch: master

commit 4082e280c8574708ec644f9eaa1cdb4a2169141c
Author: LIU Yulong 
Date:   Wed Apr 17 20:52:39 2019 +0800

Not process port forwarding if no snat functionality

If a dvr router is processed on a 'dvr_snat' node that has no snat
functionality, port forwarding should not be processed on this host,
since the snat namespace will never be created there. For instance, an
isolated DHCP node may only have the qrouter namespace. The l3 agent
will process this router, but should not perform any port forwarding
actions.

Change-Id: I6ecd86089643f4eb98865a8d8d0dec4359564026
Closes-Bug: #1825088
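
A minimal sketch of the guard this describes (illustrative only; the snat_namespace attribute used below is an assumption, not necessarily what the neutron patch checks):

    def maybe_process_port_forwarding(pf_extension, context, router_info):
        # Illustrative guard only; not the actual neutron code.
        # On a dvr_snat node that hosts the router solely because a DHCP
        # port lives there, the snat namespace is never created, so there
        # is nowhere to install port-forwarding rules; skip the extension.
        if getattr(router_info, 'snat_namespace', None) is None:
            return
        pf_extension.process_port_forwarding(context, router_info.router)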


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1825088

Title:
  [port forwarding] should not process port forwarding if snat node only
  run DHCP

Status in neutron:
  Fix Released

Bug description:
  [port forwarding] should not process port forwarding if snat node only
  run DHCP

  Assume you have 3 network nodes, all with agent mode `dvr_snat`. One
  `dvr_ha` router, router1, is scheduled to node1 and node2. The dhcp
  namespace (connected to router1) is scheduled to node3. The snat
  namespace will then only exist on node1 and node2. But on node3, the l3
  agent will also process this router because of the DHCP namespace, and
  so will the port_forwarding extension. Then an exception is raised.

  Log trace:
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent [req-6507e5bd-a500-47ea-9f71-e0123eb24e63 - a890d8d8264640ba9bae20d03e4071fd - - -] Failed to process compatible router: 867e1473-4495-4513-8759-dee4cb1b9cef: AttributeError: 'NoneType' object has no attribute 'ipv4'
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent Traceback (most recent call last):
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 655, in _process_router_update
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent self._process_router_if_compatible(router)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 573, in _process_router_if_compatible
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent self._process_added_router(router)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/l3_agent_with_metering.py", line 524, in wrapped
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent func(self, *args, **kwargs)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/l3_agent_with_metering.py", line 532, in _process_added_router
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent self)._process_added_router(router)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 583, in _process_added_router
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent self.l3_ext_manager.add_router(self.context, router)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/l3_agent_extensions_manager.py", line 42, in add_router
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent extension.obj.add_router(context, data)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/extensions/port_forwarding.py", line 453, in add_router
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent self.process_port_forwarding(context, data)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/extensions/port_forwarding.py", line 443, in process_port_forwarding
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent context, ri, ri.fip_managed_by_port_forwardings)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/agent/l3/extensions/port_forwarding.py", line 433, in check_local_port_forwardings
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent namespace, iptable_manager)
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "", line 2, in _process_create
  2019-04-17 12:39:53.064 3823374 ERROR neutron.agent.l3.agent   File "/usr/lib/python2.7/site-packages/neutron/coordination.py", line 76, in _synchronized
  2019-04-17 12:39:53.064 3823374 ERROR

[Yahoo-eng-team] [Bug 1826557] Re: Only TC class "htb" is supported

2019-04-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/655920
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dbe8d330727ec3ed4822124c454e00e8e3055b7d
Submitter: Zuul
Branch: master

commit dbe8d330727ec3ed4822124c454e00e8e3055b7d
Author: Rodolfo Alonso Hernandez 
Date:   Fri Apr 26 14:39:01 2019 +

Only TC class "htb" is supported

In tc_lib.add_tc_policy_class [1], only "htb" type is supported.


[1] https://opendev.org/openstack/neutron/src/branch/stable/stein/neutron/agent/linux/tc_lib.py#L379

Change-Id: I2cb809c069c0e8cdd289b9977cb335ef3e2e3931
Closes-Bug: #1826557
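
As a rough illustration of the kind of validation this change implies (hypothetical names and signature, not the actual tc_lib code):

    SUPPORTED_TC_CLASS_TYPES = ('htb',)

    def validate_tc_class_type(class_type):
        # add_tc_policy_class only knows how to build "tc class add ... htb"
        # commands, so anything else is rejected up front.
        if class_type not in SUPPORTED_TC_CLASS_TYPES:
            raise ValueError('Only TC class "htb" is supported, got: %s'
                             % class_type)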


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1826557

Title:
  Only TC class "htb" is supported

Status in neutron:
  Fix Released

Bug description:
  In tc_lib.add_tc_policy_class [1], only "htb" type is supported.

  [1] https://opendev.org/openstack/neutron/src/branch/stable/stein/neutron/agent/linux/tc_lib.py#L379

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1826557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826701] [NEW] the rbd backend root disks of the virtual machine will be cleared if launch failed when do evacuate

2019-04-28 Thread zhangyujun
Public bug reported:

I found that some instances could not be started after a failed
evacuation. Checking the nova-compute log [1] shows that the instance
root disk volume 'c23c04c9-2a8b-492e-8130-99aafa30b563_disk' cannot be
found in the ceph compute pool. Looking back at the failed evacuation
log [2], there was an error, 'libvirtError: Failed to create controller
memory for group: No space left on device', during launch.

Checking the nova code, when this error occurs the evacuate call stack
in nova-compute is:

https://github.com/openstack/nova/blob/324db786c86eeb69278736c8e9db6d22f68080e6/nova/compute/manager.py#L3044
nova.compute.manager.ComputeManager.rebuild_instance -> _do_rebuild_instance_with_claim -> _do_rebuild_instance -> _rebuild_default_impl -> driver.spawn

https://github.com/openstack/nova/blob/324db786c86eeb69278736c8e9db6d22f68080e6/nova/virt/libvirt/driver.py#L3154
nova.virt.libvirt.driver.LibvirtDriver.spawn -> _create_domain_and_network -> _cleanup_failed_start -> cleanup -> _cleanup_rbd

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/storage/rbd_utils.py#L360
nova.virt.libvirt.storage.rbd_utils.RBDDriver.cleanup_volumes -> _destroy_volume

This logic means the instance root disk with the rbd image backend is
cleaned up in ceph; the instance can never be started again, and the
data may even be lost. Is this reasonable?
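
A minimal, runnable sketch (not the actual nova code; names are simplified) of the cleanup chain described above, showing why a failed launch during evacuation ends with the shared rbd root disk being destroyed:

    class FakeRBDDriver(object):
        # Stands in for nova.virt.libvirt.storage.rbd_utils.RBDDriver.

        def cleanup_volumes(self, instance_uuid):
            # The real driver walks the ceph pool and calls _destroy_volume()
            # on every image matching "<instance_uuid>_disk*", i.e. the
            # instance root disk is removed from ceph.
            print("destroying rbd images for %s" % instance_uuid)


    def create_domain_and_network(instance_uuid):
        # Simulates the failure seen in the log below.
        raise RuntimeError("libvirtError: Failed to create controller memory "
                           "for group: No space left on device")


    def spawn(driver, instance_uuid):
        try:
            create_domain_and_network(instance_uuid)
        except Exception:
            # _cleanup_failed_start() -> cleanup() runs unconditionally and,
            # with the rbd image backend, ends in cleanup_volumes(); during
            # an evacuation that disk still holds the only copy of the data.
            driver.cleanup_volumes(instance_uuid)
            raise

    # spawn(FakeRBDDriver(), "c23c04c9-2a8b-492e-8130-99aafa30b563")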


[1] instance start failed log
2019-04-26 11:44:45.298 46085 WARNING nova.virt.osinfo [req-aad46bca-3bc7-48b8-98b8-c735f52a0a9c 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Cannot find OS information - Reason: (No configuration information found for operating system CentOS)
2019-04-26 11:44:45.403 46085 WARNING nova.virt.osinfo [req-56e8e35a-66b5-4d5f-8365-46bed361c6d8 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Cannot find OS information - Reason: (No configuration information found for operating system CentOS)
2019-04-26 11:44:45.424 46085 INFO os_vif [req-56e8e35a-66b5-4d5f-8365-46bed361c6d8 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:df:d7,bridge_name='br-int',has_traffic_filtering=True,id=89b882ea-15f0-4e2c-b3b1-a515e3a29f52,network=Network(1c212f11-51cf-4114-aebe-1fc016364426),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap89b882ea-15')
2019-04-26 11:44:45.516 46085 WARNING nova.virt.osinfo [req-ffe9f008-f920-4887-a453-424b86e9046e 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Cannot find OS information - Reason: (No configuration information found for operating system CentOS)
2019-04-26 11:44:45.536 46085 INFO os_vif [req-ffe9f008-f920-4887-a453-424b86e9046e 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:d6:59,bridge_name='br-int',has_traffic_filtering=True,id=cdd730dc-c806-424f-8d7a-96253e9a72b1,network=Network(1c212f11-51cf-4114-aebe-1fc016364426),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tapcdd730dc-c8')
2019-04-26 11:44:45.666 46085 WARNING nova.virt.osinfo [req-aad46bca-3bc7-48b8-98b8-c735f52a0a9c 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Cannot find OS information - Reason: (No configuration information found for operating system CentOS)
2019-04-26 11:44:45.684 46085 INFO os_vif [req-aad46bca-3bc7-48b8-98b8-c735f52a0a9c 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:7d:69,bridge_name='br-int',has_traffic_filtering=True,id=06510302-8d87-4d3e-90e1-2a7bcbb14f6b,network=Network(1c212f11-51cf-4114-aebe-1fc016364426),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap06510302-8d')
2019-04-26 11:44:46.000 46085 ERROR nova.virt.libvirt.guest [req-56e8e35a-66b5-4d5f-8365-46bed361c6d8 5a55fb96f12e42f1a8402faf4593eb4a 7bcb8c85147e42e99a4f4687179a2203 - - -] Error launching a defined domain with XML:
  [libvirt domain XML for instance-00bc (uuid c23c04c9-2a8b-492e-8130-99aafa30b563, 16 vCPUs, 33554432 KiB memory, emulator /usr/libexec/qemu-kvm); the XML markup was stripped in the archive and is omitted here]