[Yahoo-eng-team] [Bug 1693950] Re: test_walk_versions fails with 'Command Out of Sync' error

2018-06-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1693950

Title:
  test_walk_versions fails with 'Command Out of Sync' error

Status in neutron:
  Expired

Bug description:
  Spotted in the gate.
  Version: stable/newton

  test_walk_versions

  sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2014,
  'Command Out of Sync')

  http://logs.openstack.org/96/460696/1/check/gate-neutron-dsvm-
  functional-ubuntu-xenial/0e31fb6/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1693950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1778616] [NEW] DHCP agent blocks network-vif-plugged, causing instances to fail to build

2018-06-25 Thread LIU Yulong
Public bug reported:

ENV:
Neutron stable/queens (12.0.1)
CentOS 7 (3.10.0-514.26.2.el7.x86_64)
Ceph v10.2.9 Jewel

How to reproduce:
Concurrently create 256 VMs in a single network which has 2 dhcp agents.

Exception:
nova-compute side:
2018-06-25 17:56:09.394 43886 DEBUG nova.compute.manager 
[req-22395cd8-2461-411c-9da4-7da1be23e480 78ae27a6ab794fe39e64f57310c15b0e 
70f17debbc324d81bbf76aaa2e3c1bd0 - default default] [instance: 
c6c1d69a-5ee6-4097-8294-8adb02b49a12] Preparing to wait for external event 
network-vif-plugged-6f6962b8-b1b0-48a4-99dd-ae19ec1a0f87 
prepare_for_instance_event 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:325
2018-06-25 18:01:10.670 43886 WARNING nova.virt.libvirt.driver 
[req-679f1393-777d-4b9f-9faa-410e9d7e79b2 78ae27a6ab794fe39e64f57310c15b0e 
70f17debbc324d81bbf76aaa2e3c1bd0 - default default] [instance: 
c6c1d69a-5ee6-4097-8294-8adb02b49a12] Timeout waiting for 
[('network-vif-plugged', u'6f6962b8-b1b0-48a4-99dd-ae19ec1a0f87')] for instance 
with vm_state building and task_state spawning.: Timeout: 300 seconds


neutron server log:
For a failed nova boot, the neutron port has only one log entry:
2018-06-25 17:56:06.270 221045 DEBUG neutron.db.provisioning_blocks 
[req-4bddb839-e9d2-4188-a95a-9a67990c05c0 fc06c70220e74dfd90ebb516ff0da51d 
9ff72ac0624c48c390535d58e8f0b3a1 - default default] Transition to ACTIVE for 
port object 6f6962b8-b1b0-48a4-99dd-ae19ec1a0f87 will not be triggered until 
provisioned by entity DHCP. add_provisioning_component 
/usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73

It will never get the `Provisioning complete` notification.


Furthermore, even for a successfully booted nova instance, clearing the DHCP
provisioning block takes about 40 seconds:
2018-06-25 18:00:17.180 266883 DEBUG neutron.db.provisioning_blocks 
[req-37107d50-4777-4f56-b9a0-eaf9b69c17d9 fc06c70220e74dfd90ebb516ff0da51d 
9ff72ac0624c48c390535d58e8f0b3a1 - default default] Transition to ACTIVE for 
port object bcf1767c-1b9d-48ca-a5dc-c74587fa35e5 will not be triggered until 
provisioned by entity DHCP. add_provisioning_component 
/usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73
2018-06-25 18:00:57.165 266884 DEBUG neutron.db.provisioning_blocks 
[req-720231ac-5996-432d-b033-37c340532127 - - - - -] Provisioning for port 
bcf1767c-1b9d-48ca-a5dc-c74587fa35e5 completed by entity DHCP. 
provisioning_complete 
/usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:132
2018-06-25 18:00:57.167 266884 DEBUG neutron.db.provisioning_blocks 
[req-720231ac-5996-432d-b033-37c340532127 - - - - -] Provisioning complete for 
port bcf1767c-1b9d-48ca-a5dc-c74587fa35e5 triggered by entity DHCP. 
provisioning_complete 
/usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:138


Code:
For [1], a sync action holding the lock seems able to block all port_update
RPC handling, but in our test there was no dhcp sync during the instance boot.
For [2], this lock is the essential issue: all the port updates come from the
same network, and some of them do not get the lock for more than 300 seconds.

[1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L424
[2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L428
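The serialization behind [2] can be sketched with a toy per-network lock (names here are illustrative, not the agent's actual code):

```python
import threading
import time

# Hypothetical sketch: all port_update handlers for one network contend
# for a single lock, as with the per-network lock referenced in [2], so
# updates run one at a time and the last caller waits ~N * per-port cost.
network_lock = threading.Lock()

def handle_port_update(port_id, work_seconds=0.01):
    with network_lock:
        time.sleep(work_seconds)  # stand-in for reload/rewrite work
        return port_id

start = time.time()
results = []
threads = [threading.Thread(target=lambda i=i: results.append(handle_port_update(i)))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
# 20 ports at 10 ms each take >= 0.2 s despite running "concurrently".
# Scale the per-port cost up and 256 simultaneous boots can exceed
# nova's 300 s network-vif-plugged timeout.
print(len(results), elapsed)
```

This is only a model of the contention, but it shows why the wait time grows linearly with the number of ports behind the same lock.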


Potential solution:
Add more dhcp agents for the network. With dhcp_agents_per_network = 10 in our
test settings, the boots succeed.
However, such a config is really not suitable for a production environment.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1778616

Title:
  DHCP agent blocks network-vif-plugged, causing instances to fail to
  build

Status in neutron:
  New

Bug description:
  ENV:
  Neutron stable/queens (12.0.1)
  CentOS 7 (3.10.0-514.26.2.el7.x86_64)
  Ceph v10.2.9 Jewel

  How to reproduce:
  Concurrently create 256 VMs in a single network which has 2 dhcp agents.


[Yahoo-eng-team] [Bug 1778603] [NEW] Documentation builds failing with Sphinx 1.7.5

2018-06-25 Thread Lance Bragstad
Public bug reported:

Sphinx 1.7.5 apparently includes a new warning that is causing our
documentation builds to fail.

$ .tox/docs/bin/pip list | grep Sphinx
Sphinx   1.7.5
$ git log -n1
commit 057c59f16fc04e5d4e63f408ac5810ebe6d5ac99
Merge: f5a83da 50fd693
Author: Zuul 
Date:   Fri Jun 22 11:48:39 2018 +

Merge "Fix duplicate role names in trusts bug"

The following is the trace when doing `tox -e docs` or `tox -re docs`

reading sources... [ 99%] install/keystone-verify-ubuntu
reading sources... [ 99%] install/shared/note_configuration_vary_by_distribution
reading sources... [ 99%] user/application_credentials
reading sources... [ 99%] user/index
reading sources... [100%] user/json_home

Warning, treated as error:
/home/lbragstad/keystone/keystone/oauth1/validator.py:docstring of 
keystone.oauth1.validator.OAuthValidator.save_verifier:4:Field list ends 
without a blank line; unexpected unindent.
ERROR: InvocationError: '/home/lbragstad/keystone/.tox/docs/bin/sphinx-build -W 
-b html doc/source doc/build/html'
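The warning points at a docstring where a reStructuredText field list runs straight into unindented prose. A minimal illustration (made-up function, not the actual keystone code):

```python
# Docstring shaped like the one Sphinx 1.7.5 rejects: prose continues
# immediately after the field list with no separating blank line.
def save_verifier_broken(token, verifier):
    """Save a verifier.

    :param token: request token
    :param verifier: verification code
    The verifier is stored alongside the token for later validation.
    """

# Fixed variant: a blank line ends the field list before the prose.
def save_verifier_fixed(token, verifier):
    """Save a verifier.

    :param token: request token
    :param verifier: verification code

    The verifier is stored alongside the token for later validation.
    """

def line_after_field_list(func):
    """Return the line that follows the last :param: field."""
    lines = func.__doc__.splitlines()
    last = max(i for i, line in enumerate(lines) if ':param' in line)
    return lines[last + 1].strip()

print(repr(line_after_field_list(save_verifier_broken)))  # non-empty prose
print(repr(line_after_field_list(save_verifier_fixed)))   # ''
```

Inserting the blank line is the usual fix for "Field list ends without a blank line; unexpected unindent."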

** Affects: keystone
 Importance: High
 Status: Triaged

** Description changed:

  Sphinx 1.7.5 apparently includes a new warning that is causing our
  documentation builds to fail.
  
  $ .tox/docs/bin/pip list | grep Sphinx
- Sphinx   1.7.5
+ Sphinx   1.7.5
  $ git log -n1
  commit 057c59f16fc04e5d4e63f408ac5810ebe6d5ac99
  Merge: f5a83da 50fd693
  Author: Zuul 
  Date:   Fri Jun 22 11:48:39 2018 +
  
- Merge "Fix duplicate role names in trusts bug"
- 
+ Merge "Fix duplicate role names in trusts bug"
  
  The following is the trace when doing `tox -e docs` or `tox -re docs`
  
  reading sources... [ 99%] install/keystone-verify-ubuntu
  reading sources... [ 99%] 
install/shared/note_configuration_vary_by_distribution
  reading sources... [ 99%] user/application_credentials
  reading sources... [ 99%] user/index
  reading sources... [100%] user/json_home
  
- 
  Warning, treated as error:
  /home/lbragstad/keystone/keystone/oauth1/validator.py:docstring of 
keystone.oauth1.validator.OAuthValidator.save_verifier:4:Field list ends 
without a blank line; unexpected unindent.
  ERROR: InvocationError: '/home/lbragstad/keystone/.tox/docs/bin/sphinx-build 
-W -b html doc/source doc/build/html'
- 
- 
- Full trace: http://paste.openstack.org/show/724266/

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1778603

Title:
  Documentation builds failing with Sphinx 1.7.5

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Sphinx 1.7.5 apparently includes a new warning that is causing our
  documentation builds to fail.

  $ .tox/docs/bin/pip list | grep Sphinx
  Sphinx   1.7.5
  $ git log -n1
  commit 057c59f16fc04e5d4e63f408ac5810ebe6d5ac99
  Merge: f5a83da 50fd693
  Author: Zuul 
  Date:   Fri Jun 22 11:48:39 2018 +

  Merge "Fix duplicate role names in trusts bug"

  The following is the trace when doing `tox -e docs` or `tox -re docs`

  reading sources... [ 99%] install/keystone-verify-ubuntu
  reading sources... [ 99%] 
install/shared/note_configuration_vary_by_distribution
  reading sources... [ 99%] user/application_credentials
  reading sources... [ 99%] user/index
  reading sources... [100%] user/json_home

  Warning, treated as error:
  /home/lbragstad/keystone/keystone/oauth1/validator.py:docstring of 
keystone.oauth1.validator.OAuthValidator.save_verifier:4:Field list ends 
without a blank line; unexpected unindent.
  ERROR: InvocationError: '/home/lbragstad/keystone/.tox/docs/bin/sphinx-build 
-W -b html doc/source doc/build/html'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1778603/+subscriptions



[Yahoo-eng-team] [Bug 1750353] Re: _get_changed_synthetic_fields() does not guarantee returned fields to be updatable

2018-06-25 Thread Boden R
** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750353

Title:
  _get_changed_synthetic_fields() does not guarantee returned fields to
  be updatable

Status in neutron:
  In Progress

Bug description:
  While revising [1], I discovered an issue with
  _get_changed_synthetic_fields(): it does not guarantee that returned
  fields are updatable.

  How to reproduce:
   Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
  where 'host' and 'port_id' are not updatable.

  [1] https://review.openstack.org/#/c/544206/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696
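A hedged sketch of the kind of guarantee the bug asks for (illustrative names, not neutron's actual API): restrict the changed fields to those the object allows updating.

```python
# Hypothetical helper: drop any changed field that is not updatable
# before passing the dict to update().

def get_changed_updatable_fields(changed_fields, fields_no_update):
    """Return only the changed fields that are safe to update."""
    return {name: value for name, value in changed_fields.items()
            if name not in fields_no_update}

changed = {'host': 'c2753a12ec',
           'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf',
           'status': 'ACTIVE'}
# For DistributedPortBinding, 'host' and 'port_id' are not updatable,
# so only 'status' should survive the filter.
print(get_changed_updatable_fields(changed, {'host', 'port_id'}))
# → {'status': 'ACTIVE'}
```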

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750353/+subscriptions



[Yahoo-eng-team] [Bug 1778591] [NEW] GET /allocations/{uuid} on a consumer with no allocations provides no generation

2018-06-25 Thread Chris Dent
Public bug reported:

If we write some allocations with PUT /allocations/{uuid} at modern
microversions, a consumer record is created for {uuid} and a generation
is created for that consumer. Each subsequent attempt to PUT
/allocations/{uuid} must include a matching consumer generation.

If the allocations for a consumer are cleared (either DELETE, or PUT
/allocations/{uuid} with an empty dict of allocations) two things go
awry:

* the consumer record, with a generation, stays around
* GET /allocations/{uuid} returns the following:

   {u'allocations': {}}

That is, no generation is provided, and we have no way to figure one out
other than inspecting the details of the error response.

Some options to address this:

* Return the generation in that response
* When the allocations for a consumer go empty, remove the consumer
* Something else?
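The first option can be pictured by contrasting response bodies (made up here, though "consumer_generation" matches the key the allocations API uses at modern microversions):

```python
# What GET /allocations/{uuid} returns today for a cleared consumer,
# versus a body that also carries the consumer's generation.
current = {"allocations": {}}
proposed = {"allocations": {}, "consumer_generation": 3}

# A follow-up PUT /allocations/{uuid} must echo the consumer
# generation, which the current body makes impossible to know.
print("consumer_generation" in current)   # False
print("consumer_generation" in proposed)  # True
```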

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778591

Title:
  GET /allocations/{uuid} on a consumer with no allocations provides no
  generation

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  If we write some allocations with PUT /allocations/{uuid} at modern
  microversions, a consumer record is created for {uuid} and a
  generation is created for that consumer. Each subsequent attempt to
  PUT /allocations/{uuid} must include a matching consumer generation.

  If the allocations for a consumer are cleared (either DELETE, or PUT
  /allocations/{uuid} with an empty dict of allocations) two things go
  awry:

  * the consumer record, with a generation, stays around
  * GET /allocations/{uuid} returns the following:

 {u'allocations': {}}

  That is, no generation is provided, and we have no way to figure one
  out other than inspecting the details of the error response.

  Some options to address this:

  * Return the generation in that response
  * When the allocations for a consumer go empty, remove the consumer
  * Something else?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1778591/+subscriptions



[Yahoo-eng-team] [Bug 1778576] [NEW] making new allocations for one consumer against multiple resource providers fails with 409

2018-06-25 Thread Chris Dent
Public bug reported:

If you PUT some allocations for a new consumer (thus no generation), and
those allocations are against more than one resource provider, a 409
failure will happen with:

consumer generation conflict - expected 0 but got None

This is because in _new_allocations in handlers/allocation.py we always use
the generation provided in the incoming data when we call
util.ensure_consumer. This works for the first resource provider but
then on the second one the consumer exists, so our generation has to be
different now.

One possible fix (already in progress) is to use the generation from
new_allocations[0].consumer.generation in subsequent trips round the
loop calling _new_allocations.

I guess we must have missed some test cases. I'll make sure to add some
when working on this. I found the problem with my placecat stuff.

** Affects: nova
 Importance: High
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778576

Title:
  making new allocations for one consumer against multiple resource
  providers fails with 409

Status in OpenStack Compute (nova):
  New

Bug description:
  If you PUT some allocations for a new consumer (thus no generation),
  and those allocations are against more than one resource provider, a
  409 failure will happen with:

  consumer generation conflict - expected 0 but got None

  This is because in _new_allocations in handlers/allocation.py we always
  use the generation provided in the incoming data when we call
  util.ensure_consumer. This works for the first resource provider but
  then on the second one the consumer exists, so our generation has to
  be different now.

  One possible fix (already in progress) is to use the generation from
  new_allocations[0].consumer.generation in subsequent trips round the
  loop calling _new_allocations.

  I guess we must have missed some test cases. I'll make sure to add
  some when working on this. I found the problem with my placecat stuff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1778576/+subscriptions



[Yahoo-eng-team] [Bug 1778563] [NEW] Resize/Cold-migrate doesn't recreate vGPUs

2018-06-25 Thread Sylvain Bauza
Public bug reported:

When resizing an instance having vGPUs, the resized instance will miss
them even if the related allocations have VGPU resources.

The main problem is that we're not passing allocations down to the virt
drivers when finish_migration().

** Affects: nova
 Importance: High
 Assignee: Sylvain Bauza (sylvain-bauza)
 Status: Confirmed


** Tags: resize vgpu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778563

Title:
  Resize/Cold-migrate doesn't recreate vGPUs

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When resizing an instance having vGPUs, the resized instance will miss
  them even if the related allocations have VGPU resources.

  The main problem is that we're not passing allocations down to the
  virt drivers when finish_migration().

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1778563/+subscriptions



[Yahoo-eng-team] [Bug 1777157] Re: cold migration fails for ceph volume instances

2018-06-25 Thread Vladislav Belogrudov
Cannot reproduce it; will reopen and recollect logs with debug enabled if
it happens again.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777157

Title:
  cold migration fails for ceph volume instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  queens release,

  running instance from ceph volume, in horizon: 'migrate'

  Instance is in ERROR state. Horizon reports:

  Error: Failed to perform requested operation on instance "instance3", the
  instance has an error status: Please try again later [Error: list index out
  of range].

  yet another instance got:

  Error: Failed to perform requested operation on instance "instance4", the
  instance has an error status: Please try again later [Error: Conflict
  updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
  {'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777157/+subscriptions



[Yahoo-eng-team] [Bug 1483159] Re: Canonical naming for non-x86 architectures

2018-06-25 Thread Scott Moser
This bug is believed to be fixed in simplestreams in version 0.1.0. If
this is still a problem for you, please make a comment and set the state
back to New.

Thank you.

** Changed in: simplestreams
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483159

Title:
  Canonical naming for non-x86 architectures

Status in OpenStack Compute (nova):
  Invalid
Status in simplestreams:
  Fix Released
Status in nova package in Ubuntu:
  Won't Fix

Bug description:
  Various non-x86 architectures (POWER and ARM) don't correctly
  canonicalize into things that libvirt natively understands.

  The attached patches normalizes some alternative architecture strings
  into standardized ones for Nova/libvirt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483159/+subscriptions



[Yahoo-eng-team] [Bug 1777591] Re: ‘limit’ in allocation_candidates sometimes makes force_hosts invalid

2018-06-25 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Triaged

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777591

Title:
  ‘limit’ in allocation_candidates sometimes makes force_hosts invalid

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Triaged

Bug description:
 Now the 'limit' parameter in the allocation_candidates API uses a default 
value of 1000, which gives better performance in large-scale environments. 
However, when creating a vm/bm scheduled with force_hosts, the 'limit' 
parameter may cut some nodes out of the allocation_candidates, and the 
force_hosts filter then sometimes returns 'No hosts matched due to not 
matching...'
  Example:
 test environment with 10 compute nodes, set max_placement_results = 3
 nova boot test --image 9c09cb52-03b9-4631-898d-d443d0dbbf9e  --flavor c1 
--nic none --availability-zone nova:devstack
 No hosts matched due to not matching 'force_hosts' value of 'devstack'

 Debug:
 return provider_summaries:
 
{u'268a3d69-6cf1-418a-aaa8-f2127f4f4468':...,u'a2c3e9e7-53a6-4e15-b150-39bb4135c6a9':...u'0aa80b5e-a0fa-47a6-a4b5-51b21b721ce9':...}
 and node devstack:69d2fe55-e391-4d99-a1fe-8b0b5aad60e7 is not in 
provider_summaries.
 I think in a large-scale environment (compute nodes > 2000), even the default 
max_placement_results = 1000 can make force_hosts unavailable.
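A toy illustration of the interaction (hypothetical node names): placement returns at most `limit` candidates, and the force_hosts filter runs afterwards on that truncated list, so the forced host can be missing entirely.

```python
# 10 compute nodes, but placement only hands back 3 candidates,
# mirroring the max_placement_results = 3 test setup above.
compute_nodes = ['node-%d' % i for i in range(10)]
max_placement_results = 3

allocation_candidates = compute_nodes[:max_placement_results]

# The forced host (e.g. --availability-zone nova:devstack) is filtered
# against the truncated candidate list, not against all nodes.
force_host = 'node-7'
matched = [h for h in allocation_candidates if h == force_host]
if not matched:
    print("No hosts matched due to not matching 'force_hosts' value of %r"
          % force_host)
```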

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777591/+subscriptions



[Yahoo-eng-team] [Bug 1750121] Re: Dynamic routing: adding speaker to agent fails

2018-06-25 Thread Corey Bryant
This bug was fixed in the package neutron-dynamic-routing - 
2:11.0.0-0ubuntu2~cloud0
---

 neutron-dynamic-routing (2:11.0.0-0ubuntu2~cloud0) xenial-pike; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron-dynamic-routing (2:11.0.0-0ubuntu2) artful; urgency=medium
 .
   * d/gbp.conf: Create stable/pike branch.
   * d/p/fix-failure-when-adding-a-speaker-to-an-agent.patch: Cherry-picked
 from upstream stable/pike branch to ensure adding speaker to agent
 doesn't fail (LP: #1750121).


** Changed in: cloud-archive/pike
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750121

Title:
  Dynamic routing: adding speaker to agent fails

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing source package in Artful:
  Won't Fix
Status in neutron-dynamic-routing source package in Bionic:
  Fix Released
Status in neutron-dynamic-routing source package in Cosmic:
  Fix Released

Bug description:
  SRU details for Ubuntu
  --
  [Impact]
  See "Original description" below.

  [Test Case]
  See "Original description" below.

  [Regression Potential]
  Low. This is fixed upstream in corresponding stable branches.

  
  Original description
  
  When following 
https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html
 everything works fine because the speaker is scheduled to the agent 
automatically (in contrast to what the docs say). But if I remove the speaker 
from the agent and add it again with

  $ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
  $ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

  the following error is seen in the log:

  Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
  neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
  da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for
  BGP Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has
  failed with exception 'auth_type'.

  The same thing happens when there are multiple agents and one tries to
  add the speaker to one of the other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions



[Yahoo-eng-team] [Bug 1750121] Re: Dynamic routing: adding speaker to agent fails

2018-06-25 Thread Corey Bryant
Marking Artful as Won't Fix since it's nearly EOL.

** Changed in: neutron-dynamic-routing (Ubuntu Artful)
   Status: Fix Committed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750121

Title:
  Dynamic routing: adding speaker to agent fails

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing source package in Artful:
  Won't Fix
Status in neutron-dynamic-routing source package in Bionic:
  Fix Released
Status in neutron-dynamic-routing source package in Cosmic:
  Fix Released

Bug description:
  SRU details for Ubuntu
  --
  [Impact]
  See "Original description" below.

  [Test Case]
  See "Original description" below.

  [Regression Potential]
  Low. This is fixed upstream in corresponding stable branches.

  
  Original description
  
  When following 
https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html
 everything works fine because the speaker is scheduled to the agent 
automatically (in contrast to what the docs say). But if I remove the speaker 
from the agent and add it again with

  $ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
  $ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

  the following error is seen in the log:

  Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
  neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
  da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for
  BGP Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has
  failed with exception 'auth_type'.

  The same thing happens when there are multiple agents and one tries to
  add the speaker to one of the other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions



[Yahoo-eng-team] [Bug 1778515] [NEW] nova-manage list_cells doesn't work if some special characters are in the passwords

2018-06-25 Thread Surya Seetharaman
Public bug reported:

nova-manage cell_v2 list_cells does not work without the --verbose flag
if square brackets are used in the DB or transport URL passwords because
we use
https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit
during masking of the passwords and it is explicitly mentioned that this
gives a "ValueError: Invalid IPv6 URL", for square brackets if they are
not symmetric
(https://github.com/enthought/Python-2.7.3/blob/69fe0ffd2d85b4002cacae1f28ef2eb0f25e16ae/Lib/urlparse.py#L181).
Also, using "?" or "#" in a password prevents it from being masked. These
exceptions surely need to be at least documented somewhere in nova if
they cannot be fixed.

This is what the error looks like, which is not very helpful:
An error has occurred:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1762, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1454, in 
list_cells
mask_passwd_in_url(cell.database_connection)])
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 100, in 
mask_passwd_in_url
parsed = urlparse.urlparse(url)
  File "/usr/lib64/python2.7/urlparse.py", line 142, in urlparse
tuple = urlsplit(url, scheme, allow_fragments)
  File "/usr/lib64/python2.7/urlparse.py", line 213, in urlsplit
raise ValueError("Invalid IPv6 URL")
ValueError: Invalid IPv6 URL
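Both failure modes are reproducible with the standard library alone (the connection strings below are made up):

```python
from urllib.parse import urlsplit  # urlparse.urlsplit on Python 2

# An unmatched '[' in the password lands in the netloc, which urlsplit
# treats as a malformed IPv6 host reference.
try:
    urlsplit('mysql+pymysql://nova:secret[pw@controller/nova_api')
except ValueError as exc:
    print(exc)  # Invalid IPv6 URL

# '?' is parsed as the start of the query string, so everything after
# it escapes the netloc and therefore escapes password masking.
parts = urlsplit('mysql+pymysql://nova:secret?pw@controller/nova_api')
print(parts.netloc)  # 'nova:secret'
print(parts.query)   # 'pw@controller/nova_api'
```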

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: cells nova-manage

** Description changed:

  nova-manage cell_v2 list_cells does not work without the --verbose flag
  if square brackets are used in the DB or transport URL passwords because
  we use
  https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit
  during masking of the passwords and it is explicitly mentioned that this
  gives a "ValueError: Invalid IPv6 URL", for square brackets if they are
  not symmetric
  
(https://github.com/enthought/Python-2.7.3/blob/69fe0ffd2d85b4002cacae1f28ef2eb0f25e16ae/Lib/urlparse.py#L181).
- Also using "?" or "#" in passwords did not mask the password. These
+ Also using "?" or "#" in passwords do not mask the password. These
  surely need to be at least documented somewhere in nova if they cannot
  be fixed.

** Description changed:

  nova-manage cell_v2 list_cells does not work without the --verbose flag
  if square brackets are used in the DB or transport URL passwords because
  we use
  https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit
  during masking of the passwords and it is explicitly mentioned that this
  gives a "ValueError: Invalid IPv6 URL", for square brackets if they are
  not symmetric
  
(https://github.com/enthought/Python-2.7.3/blob/69fe0ffd2d85b4002cacae1f28ef2eb0f25e16ae/Lib/urlparse.py#L181).
  Also using "?" or "#" in passwords do not mask the password. These
  surely need to be at least documented somewhere in nova if they cannot
  be fixed.
+ 
+ 
+ This is what the Error looks like, which is not very helpful:
+ An error has occurred:
+ Traceback (most recent call last):
+   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1762, in 
main
+ ret = fn(*fn_args, **fn_kwargs)
+   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1454, in 
list_cells
+ mask_passwd_in_url(cell.database_connection)])
+   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 100, in 
mask_passwd_in_url
+ parsed = urlparse.urlparse(url)
+   File "/usr/lib64/python2.7/urlparse.py", line 142, in urlparse
+ tuple = urlsplit(url, scheme, allow_fragments)
+   File "/usr/lib64/python2.7/urlparse.py", line 213, in urlsplit
+ raise ValueError("Invalid IPv6 URL")
+ ValueError: Invalid IPv6 URL
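For reference, the failure is reproducible with the standard library alone; a minimal sketch, using a made-up connection URL with an unbalanced square bracket in the password:

```python
from urllib.parse import urlsplit  # urlparse.urlsplit on Python 2

# Hypothetical connection URL: the '[' in the password is mistaken for
# the start of an IPv6 host literal, and with no matching ']' urlsplit
# rejects the whole URL.
url = "mysql+pymysql://nova:pa[ssword@dbhost/nova_api"

try:
    urlsplit(url)
except ValueError as exc:
    print(exc)  # Invalid IPv6 URL
```

This is why the error surfaces only when the password contains asymmetric brackets: a balanced `[...]` pair parses (wrongly) as an IPv6 host instead.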

** Description changed:

  nova-manage cell_v2 list_cells does not work without the --verbose flag
  if square brackets are used in the DB or transport URL passwords because
  we use
  https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlsplit
  during masking of the passwords and it is explicitly mentioned that this
  gives a "ValueError: Invalid IPv6 URL", for square brackets if they are
  not symmetric
  
(https://github.com/enthought/Python-2.7.3/blob/69fe0ffd2d85b4002cacae1f28ef2eb0f25e16ae/Lib/urlparse.py#L181).
  Also using "?" or "#" in passwords do not mask the password. These
- surely need to be at least documented somewhere in nova if they cannot
- be fixed.
- 
+ exceptions surely need to be at least documented somewhere in nova if
+ they cannot be fixed.
  
  This is what the Error looks like, which is not very helpful:
  An error has occurred:
  Traceback (most recent call last):
-   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1762, in 
main
- ret = fn(*fn_args, **fn_kwargs)
-   File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1454, in 
list_cells
- 

[Yahoo-eng-team] [Bug 1771325] Re: placement trait and inventory handler use nonstandard HTTP error message details

2018-06-25 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
 Assignee: (unassigned) => Elod Illes (elod-illes)

** Changed in: nova/queens
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

** Changed in: nova/queens
   Status: New => Fix Committed

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771325

Title:
  placement trait and inventory handler use nonstandard HTTP error
  message details

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  Fix Committed

Bug description:
  DELETE /traits/

  Actual
  --

  {"errors": [{"status": 400, "request_id": "req-b30e30ba-9fce-403f-
  9f24-b6e32cd0b8c9", "detail": "Cannot delete standard trait
  HW_GPU_API_DXVA.\n\n   ", "title": "Bad Request"}]}

  Expected
  

  {"errors": [{"status": 400, "request_id": "req-3caa15be-a726-41f2
  -a7cb-f4afb3c97a44", "detail": "The server could not comply with the
  request since it is either malformed or otherwise incorrect.\n\n
  Cannot delete standard trait HW_GPU_API_DXVA.  ", "title": "Bad
  Request"}]}

  Most of the placement wsgi code passes one positional argument to the
  constructor of the  webob.exc.HTTPXXX exception classes but the trait
  [1] and inventory handlers uses the 'explanation' kwargs. As the above
  example shows this leads to different behavior. This inconsistency
  leads to incorrect behavior in osc placement client [2].

  [1] 
https://github.com/openstack/nova/blob/ae131868f71700d69053b65a0a37f9c2d65c3770/nova/api/openstack/placement/handlers/trait.py#L133
  [2] 
https://github.com/openstack/osc-placement/blob/2357807c95d74afc836852e1c54f0631c6fd2d60/osc_placement/http.py#L35
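The practical impact on clients can be sketched with the two bodies quoted above. The parsing helper below is a hypothetical stand-in for what a client like osc-placement does, assuming it takes the text that follows the generic explanation:

```python
import json

# The two response bodies quoted above (whitespace as reported).
actual = json.dumps({"errors": [{
    "status": 400,
    "detail": "Cannot delete standard trait HW_GPU_API_DXVA.\n\n   ",
    "title": "Bad Request"}]})
expected = json.dumps({"errors": [{
    "status": 400,
    "detail": "The server could not comply with the request since it is "
              "either malformed or otherwise incorrect.\n\n   "
              "Cannot delete standard trait HW_GPU_API_DXVA.  ",
    "title": "Bad Request"}]})

def specific_detail(body):
    # Hypothetical client logic: assume the specific message follows the
    # generic explanation after a blank line, as most placement handlers
    # produce when the message is passed positionally.
    detail = json.loads(body)["errors"][0]["detail"]
    return detail.split("\n\n", 1)[-1].strip()

print(repr(specific_detail(expected)))  # 'Cannot delete standard trait HW_GPU_API_DXVA.'
print(repr(specific_detail(actual)))    # '' -- the specific message is lost
```

With the `explanation=` kwarg form, the specific message lands where the client expects the generic boilerplate, so this style of parsing returns an empty string.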

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775308] Re: Listing placement usages (total or per resource provider) in a new process can result in a 500

2018-06-25 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Fix Committed

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/queens
 Assignee: (unassigned) => Chris Dent (cdent)

** Changed in: nova/pike
   Status: New => In Progress

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/pike
 Assignee: (unassigned) => Chris Dent (cdent)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775308

Title:
  Listing placement usages (total or per resource provider) in a new
  process can result in a 500

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  Fix Committed

Bug description:
  When requesting /usages or /resource_providers/{uuid}/usages it is
  possible to cause a 500 error if placement is running in a multi-
  process scenario and the usages query is the first request a process
  has received. This is because the methods which provide UsageLists do
  not _ensure_rc_cache, resulting in:

File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/placement/objects/resource_provider.py",
 line 2374, in _from_db_object
 rc_str = _RC_CACHE.string_from_id(source['resource_class_id'])
 AttributeError: 'NoneType' object has no attribute 'string_from_id'

  We presumably don't see this in our usual testing because any process
  has already had other requests happen, setting the cache.

  For now, the fix is to add the _ensure_rc_cache call in the right
  places, but long term if/when we switch to the os-resource-class model
  we can do the caching or syncing a bit differently (see
  https://review.openstack.org/#/c/553857/ for an example).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1778498] [NEW] cannot launch instance

2018-06-25 Thread Husni Alhamdani
Public bug reported:

I have a problem launching an instance; I'm using Ubuntu 18.04 LTS and
OpenStack Queens.

And when I try to launch an instance I get this error: "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-7302d9ed-0485-4714-9e57-c15d21a66dbd"

and this is the compute service (nova-api) log :

"2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi return 
self.session.request(url, method, **kwargs)
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 698, in 
request
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi resp = 
send(**kwargs)
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 776, in 
_send_request
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi raise 
exceptions.UnknownConnectionError(msg, e)
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi 
UnknownConnectionError: Unexpected exception for 
htpp://controller:9696/v2.0/security-groups?fields=id=adf0b87b-28d0-4295-a6dd-222ed065ffc2:
 No connection adapters were found for 
'htpp://controller:9696/v2.0/security-groups?fields=id=adf0b87b-28d0-4295-a6dd-222ed065ffc2'
2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi 
2018-06-25 10:53:01.729 2634 INFO nova.api.openstack.wsgi 
[req-5bfdf750-70c1-470b-b849-895a60c4deb0 - - - - -] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

2018-06-25 10:53:01.737 2634 INFO nova.osapi_compute.wsgi.server 
[req-5bfdf750-70c1-470b-b849-895a60c4deb0 - - - - -] 20.20.20.10 "POST 
/v2.1/servers HTTP/1.1" status: 500 len: 665 time: 0.8935342"
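Note the scheme in the failing URL: it is "htpp", not "http". requests reports "No connection adapters were found" for schemes it does not recognize, so the neutron endpoint URL in the nova configuration is almost certainly misspelled. A quick stdlib check of what the failing URL parses to:

```python
from urllib.parse import urlsplit

# URL copied from the log above; only the scheme matters here.
url = "htpp://controller:9696/v2.0/security-groups"
print(urlsplit(url).scheme)  # htpp -- not a scheme requests can handle
```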

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1778498

Title:
  cannot launch instance

Status in OpenStack Compute (nova):
  New

Bug description:
  I have a problem launching an instance; I'm using Ubuntu 18.04 LTS and
  OpenStack Queens.

  And when I try to launch an instance I get this error: "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-7302d9ed-0485-4714-9e57-c15d21a66dbd"
  and this is the compute service (nova-api) log :

  "2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi return 
self.session.request(url, method, **kwargs)
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 698, in 
request
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi resp = 
send(**kwargs)
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 776, in 
_send_request
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi raise 
exceptions.UnknownConnectionError(msg, e)
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi 
UnknownConnectionError: Unexpected exception for 
htpp://controller:9696/v2.0/security-groups?fields=id=adf0b87b-28d0-4295-a6dd-222ed065ffc2:
 No connection adapters were found for 
'htpp://controller:9696/v2.0/security-groups?fields=id=adf0b87b-28d0-4295-a6dd-222ed065ffc2'
  2018-06-25 10:53:01.724 2634 ERROR nova.api.openstack.wsgi 
  2018-06-25 10:53:01.729 2634 INFO nova.api.openstack.wsgi 
[req-5bfdf750-70c1-470b-b849-895a60c4deb0 - - - - -] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  
  2018-06-25 10:53:01.737 2634 INFO nova.osapi_compute.wsgi.server 
[req-5bfdf750-70c1-470b-b849-895a60c4deb0 - - - - -] 20.20.20.10 "POST 
/v2.1/servers HTTP/1.1" status: 500 len: 665 time: 0.8935342"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1778498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776841] Re: functional-py35 test timed_out

2018-06-25 Thread Brian Rosmaita
Job has intermittent timeouts:
http://zuul.openstack.org/api/builds?job_name=openstack-tox-functional-py35=openstack/glance

Does not appear traceable to a particular patch.


** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1776841

Title:
  functional-py35 test timed_out

Status in Glance:
  Invalid

Bug description:
  See this review page: https://review.openstack.org/#/c/575323/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1776841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777475] Re: Undercloud vm in state error after update of the undercloud.

2018-06-25 Thread Bogdan Dobrelya
We need a similar fix for t-h-t/puppet in order to fix this for the
containerized undercloud, which is going to be the default installation
method in Rocky. The instack-only fix is not complete; reopening.

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777475

Title:
  Undercloud vm in state error after update of the undercloud.

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Hi,

  After an update of the undercloud, the undercloud VMs are in an error state:

  [stack@undercloud-0 ~]$ openstack server list
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  | ID                                   | Name         | Status | Networks               | Image          | Flavor     |
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  | 9f80c38a-9f33-4a18-88e0-b89776e62150 | compute-0    | ERROR  | ctlplane=192.168.24.18 | overcloud-full | compute    |
  | e87efe17-b939-4df2-af0c-8e2effd58c95 | controller-1 | ERROR  | ctlplane=192.168.24.9  | overcloud-full | controller |
  | 5a3ea20c-75e8-49fe-90b6-edad01fc0a48 | controller-2 | ERROR  | ctlplane=192.168.24.13 | overcloud-full | controller |
  | ba0f26e7-ec2c-4e61-be8e-05edf00ce78a | controller-0 | ERROR  | ctlplane=192.168.24.8  | overcloud-full | controller |
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+

  
  Originally found starting from
  https://bugzilla.redhat.com/show_bug.cgi?id=1590297#c14

  It boils down to a ordering issue between openstack-ironic-conductor
  and openstack-nova-compute, a simple reproducer is:

  sudo systemctl stop openstack-ironic-conductor
  sudo systemctl restart openstack-nova-compute

  on the undercloud.
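A fix for this kind of ordering issue typically adds a systemd ordering and dependency constraint; a hypothetical sketch of a drop-in (not the actual tripleo/instack change):

```ini
# /etc/systemd/system/openstack-nova-compute.service.d/ironic.conf
# Hypothetical drop-in: order nova-compute after ironic-conductor and
# pull ironic-conductor in whenever nova-compute is started.
[Unit]
After=openstack-ironic-conductor.service
Wants=openstack-ironic-conductor.service
```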

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp