[Yahoo-eng-team] [Bug 1815859] [NEW] Volume Snapshot table 'Project' column has wrong information

2019-02-13 Thread Vishal Manchanda
Public bug reported:

In the volume snapshot table, the 'Project' column returns
'os-vol-tenant-attr:tenant_id' (taken from the volume response) [1], but it
should return 'os-extended-snapshot-attributes:project_id' (taken from the
snapshot response).

[1].https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/snapshots/views.py#L68
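
A minimal sketch of the suggested direction, assuming snapshot and volume
objects as returned by the Cinder API (the helper below is illustrative, not
Horizon's actual code):

    # Illustrative only: read the project id from the snapshot's own
    # extended attribute instead of the parent volume's tenant attribute.
    def snapshot_project_id(snapshot, volume):
        return getattr(
            snapshot, 'os-extended-snapshot-attributes:project_id',
            getattr(volume, 'os-vol-tenant-attr:tenant_id', None))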

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1815859

Title:
  Volume Snapshot table 'Project' column has wrong information

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the volume snapshot table, the 'Project' column returns
  'os-vol-tenant-attr:tenant_id' (taken from the volume response) [1], but it
  should return 'os-extended-snapshot-attributes:project_id' (taken from the
  snapshot response).

  
[1].https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/snapshots/views.py#L68

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1815859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815849] [NEW] nova-compute driver type has been changed from 'ironic' to 'libvirt', but the nova-compute service can still be found by 'ironic' hypervisor_type

2019-02-13 Thread lynn
Public bug reported:

Description
===
The cloud has two nova-compute services and both of them use compute_driver =
ironic. One of the nova-compute services was then changed to the libvirt
driver.

When more baremetal nodes were added to the cloud, some of them did not show
up in the hypervisor list.

** Affects: nova
 Importance: Undecided
 Assignee: lynn (lynn901)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lynn (lynn901)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815849

Title:
  nova-compute driver type has been changed from 'ironic' to 'libvirt',
  but the nova-compute service can still be found by 'ironic'
  hypervisor_type

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The cloud has two nova-compute services and both of them use compute_driver
  = ironic. One of the nova-compute services was then changed to the libvirt
  driver.

  When more baremetal nodes were added to the cloud, some of them did not
  show up in the hypervisor list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754716] Re: Disconnect volume on live migration source fails if initialize_connection doesn't return identical output

2019-02-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/551302
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b626c0dc7b113365002e743e6de2aeb40121fc81
Submitter: Zuul
Branch:master

commit b626c0dc7b113365002e743e6de2aeb40121fc81
Author: Matthew Booth 
Date:   Fri Mar 9 14:41:49 2018 +

Avoid redundant initialize_connection on source post live migration

During live migration we update bdm.connection_info for attached volumes
in pre_live_migration to reflect the new connection on the destination
node. This means that after migration completes the BDM no longer has a
reference to the original connection_info to do the detach on the source
host. To address this, change I3dfb75eb added a second call to
initialize_connection on the source host to re-fetch the source host
connection_info before calling disconnect.

Unfortunately the cinder driver interface does not strictly require that
multiple calls to initialize_connection will return consistent results.
Although they normally do in practice, there is at least one cinder
driver (delliscsi) which doesn't. This results in a failure to
disconnect on the source host post migration.

This change avoids the issue entirely by fetching the BDMs prior to
modification on the destination node. As well as working round this
specific issue, it also avoids a redundant cinder call in all cases.

Note that this massively simplifies post_live_migration in the libvirt
driver. The complexity removed was concerned with reconstructing the
original connection_info. This required considering the cinder v2 and v3
use cases, and reconstructing the multipath_id which was written to
connection_info by the libvirt fibrechannel volume connector on
connection. These things are not necessary when we just use the original
data unmodified.

Other drivers affected are Xenapi and HyperV. Xenapi doesn't touch
volumes in post_live_migration, so is unaffected. HyperV did not
previously account for differences in connection_info between source and
destination, so was likely previously broken. This change should fix it.

Closes-Bug: #1754716
Closes-Bug: #1814245
Change-Id: I0390c9ff51f49b063f736ca6ef868a4fa782ede5
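
As a rough sketch of the approach described above (illustrative only, not the
actual patch, though the BDM lookup shown is the standard Nova object call):

    from nova import objects

    # Capture the source-host BDMs, including their connection_info,
    # before pre_live_migration rewrites them for the destination host;
    # post_live_migration can then detach using this saved copy instead
    # of calling initialize_connection again on the source.
    source_bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
        context, instance.uuid)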


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1754716

Title:
  Disconnect volume on live migration source fails if
  initialize_connection doesn't return identical output

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  During live migration we update bdm.connection_info for attached
  volumes in pre_live_migration to reflect the new connection on the
  destination node. This means that after migration completes we no
  longer have a reference to the original connection_info to do the
  detach on the source host, so we have to re-fetch it with a second
  call to initialize_connection before calling disconnect.

  Unfortunately the cinder driver interface does not strictly require
  that multiple calls to initialize_connection will return consistent
  results. Although they normally do in practice, there is at least one
  cinder driver (delliscsi) which doesn't. This results in a failure to
  disconnect on the source host post migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1754716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1814245] Re: _disconnect_volume incorrectly called for multiattach volumes during post_live_migration

2019-02-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/551302
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b626c0dc7b113365002e743e6de2aeb40121fc81
Submitter: Zuul
Branch:master

commit b626c0dc7b113365002e743e6de2aeb40121fc81
Author: Matthew Booth 
Date:   Fri Mar 9 14:41:49 2018 +

Avoid redundant initialize_connection on source post live migration

During live migration we update bdm.connection_info for attached volumes
in pre_live_migration to reflect the new connection on the destination
node. This means that after migration completes the BDM no longer has a
reference to the original connection_info to do the detach on the source
host. To address this, change I3dfb75eb added a second call to
initialize_connection on the source host to re-fetch the source host
connection_info before calling disconnect.

Unfortunately the cinder driver interface does not strictly require that
multiple calls to initialize_connection will return consistent results.
Although they normally do in practice, there is at least one cinder
driver (delliscsi) which doesn't. This results in a failure to
disconnect on the source host post migration.

This change avoids the issue entirely by fetching the BDMs prior to
modification on the destination node. As well as working round this
specific issue, it also avoids a redundant cinder call in all cases.

Note that this massively simplifies post_live_migration in the libvirt
driver. The complexity removed was concerned with reconstructing the
original connection_info. This required considering the cinder v2 and v3
use cases, and reconstructing the multipath_id which was written to
connection_info by the libvirt fibrechannel volume connector on
connection. These things are not necessary when we just use the original
data unmodified.

Other drivers affected are Xenapi and HyperV. Xenapi doesn't touch
volumes in post_live_migration, so is unaffected. HyperV did not
previously account for differences in connection_info between source and
destination, so was likely previously broken. This change should fix it.

Closes-Bug: #1754716
Closes-Bug: #1814245
Change-Id: I0390c9ff51f49b063f736ca6ef868a4fa782ede5


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1814245

Title:
  _disconnect_volume incorrectly called for multiattach volumes  during
  post_live_migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  Description
  ===

  Idc5cecffa9129d600c36e332c97f01f1e5ff1f9f introduced a simple check to
  ensure disconnect_volume is only called when detaching a multi-attach
  volume from the final instance using it on a given host.

  That change however doesn't take LM into account and more specifically
  the call to _disconect_volume during post_live_migration at the end of
  the migration from the source. At this point the original instance has
  already moved so the call to objects.InstanceList.get_uuids_by_host
  will only return one local instance that is using the volume instead
  of two, allowing disconnect_volume to be called.
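
  A simplified sketch of the host-level check being described (illustrative
  only; the InstanceList call is the one named above, the rest is an
  approximation):

      from nova import objects

      # Only disconnect when no other instance on this host still uses the
      # volume. During post_live_migration the migrated instance has already
      # left this host, so the check passes and disconnect is attempted even
      # though another instance still uses the connection.
      host_instance_uuids = objects.InstanceList.get_uuids_by_host(
          context, instance.host)
      attachments_on_host = [
          a for a in volume['attachments']
          if a['server_id'] in host_instance_uuids]
      if len(attachments_on_host) < 2:
          driver._disconnect_volume(context, connection_info, instance)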

  Depending on the backend being used this call can succeed removing the
  connection to the volume for the remaining instance or os-brick can
  fail in situations where it needs to flush I/O etc from the in-use
  connection.

  
  Steps to reproduce
  ==

  * Launch two instances attached to the same multiattach volume on the same 
host.
  * LM one of these instances to another host.

  Expected result
  ===

  No calls to disconnect_volume are made and the remaining instance on
  the host is still able to access the multi-attach volume.

  Actual result
  =

  A call to disconnect_volume is made and the remaining instance is
  unable to access the volume *or* the LM fails due to os-brick failures
  to disconnect the in-use volume on the host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)

 Libvirt + KVM

  
  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 LVM/iSCSI with multipath enabled reproduces the os-brick failure.

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

  # nova show testvm2
  [..]
  | fault

[Yahoo-eng-team] [Bug 1815827] [NEW] [RFE] neutron-lib: rehome neutron.object.base along with rbac db/objects

2019-02-13 Thread Boden R
Public bug reported:

This isn't a request for a new feature per se, but rather a placeholder
for the neutron drivers team to take a look at [1].

Specifically I'm hoping for drivers team agreement that the 
modules/functionality being rehomed in [1] makes sense; no actual (deep) code 
review of [1] is necessary at this point.
 
Assuming we can agree that the logic in [1] makes sense to rehome, I can 
proceed by chunking it up into smaller patches that will make the 
rehome/consume process easier.

This work is part of [2] that's described in [3][4]. However as
commented in [1], it's also necessary to rehome the rbac db/objects
modules and their dependencies that weren't discussed previously.


[1] https://review.openstack.org/#/c/621000
[2] https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-db-apiutils.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-models.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815827

Title:
  [RFE] neutron-lib: rehome neutron.object.base along with rbac
  db/objects

Status in neutron:
  New

Bug description:
  This isn't a request for a new feature per se, but rather a
  placeholder for the neutron drivers team to take a look at [1].

  Specifically I'm hoping for drivers team agreement that the 
modules/functionality being rehomed in [1] makes sense; no actual (deep) code 
review of [1] is necessary at this point.
   
  Assuming we can agree that the logic in [1] makes sense to rehome, I can 
proceed by chunking it up into smaller patches that will make the 
rehome/consume process easier.

  This work is part of [2] that's described in [3][4]. However as
  commented in [1], it's also necessary to rehome the rbac db/objects
  modules and their dependencies that weren't discussed previously.

  
  [1] https://review.openstack.org/#/c/621000
  [2] https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db
  [3] 
https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-db-apiutils.html
  [4] 
https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-models.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815810] [NEW] [RFE] Allow keystone to query sub-group membership for group role-assignment

2019-02-13 Thread Drew Freiberger
Public bug reported:

A common request we see from corporate environments when providing
Active Directory/LDAP integration into keystone is the ability for role
assignments to apply for users who are members of a sub-group of the
role-assigned group.

For instance, if you have the following groups
cn=Project1_Users,dc=com
  member: user1
  member: user2
  memberGroup: cn=Project1_Admins,dc=com
cn=Project1_Admins,dc=com
  member: adminuser

And you defined Project1 in openstack and then defined group
Project1_Users to be assigned "Member" role in Project1, only user1 and
user2 would be granted that role, even though adminuser is technically a
member of that group from an Active Directory perspective. You would
have to also assign Project1_Admins to the "Member" role for Project1
for adminuser to be granted Member rights.

The keystone code does not handle subgroup membership for group-role-
assignment, so any subgroups of a group that is granted a role, will
also have to be individually granted the role under the same project(s).

In Active Directory, there is a memberOf OID subquery
(memberOf:1.2.840.113556.1.4.1941:=) that allows the directory's ability to
chase sub-group referrals to be harnessed and returns a list of user records.
However, this is not how keystone ingests group members: keystone instead
queries a group for its $group_member_attribute value(s).
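
A minimal sketch of that transitive-membership query using python-ldap
(illustrative only: the server URL, bind credentials and search base are
placeholders, the group DN is the example group above, and this is not how
keystone currently resolves group members):

    import ldap

    # Illustrative only: ask AD to chase nested group membership via the
    # LDAP_MATCHING_RULE_IN_CHAIN OID and return the user entries directly.
    conn = ldap.initialize('ldap://ad.example.com')        # placeholder
    conn.simple_bind_s('cn=binduser,dc=com', 'secret')     # placeholder
    filter_str = ('(memberOf:1.2.840.113556.1.4.1941:='
                  'cn=Project1_Users,dc=com)')
    users = conn.search_s('dc=com', ldap.SCOPE_SUBTREE, filter_str, ['cn'])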

It would be very useful to either be able to identify a way to query
group membership based on this OID query or to define a
"group_subgroup_attribute" that keystone would perform cursive lookups
through for further members of the role-assigned group.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: canonical-bootstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1815810

Title:
  [RFE] Allow keystone to query sub-group membership for group role-
  assignment

Status in OpenStack Identity (keystone):
  New

Bug description:
  A common request we see from corporate environments when providing
  Active Directory/LDAP integration into keystone is the ability for
  role assignments to apply for users who are members of a sub-group of
  the role-assigned group.

  For instance, if you have the following groups
  cn=Project1_Users,dc=com
member: user1
member: user2
memberGroup: cn=Project1_Admins,dc=com
  cn=Project1_Admins,dc=com
member: adminuser

  And you defined Project1 in openstack and then defined group
  Project1_Users to be assigned "Member" role in Project1, only user1
  and user2 would be granted that role, even though adminuser is
  technically a member of that group from an Active Directory
  perspective. You would have to also assign Project1_Admins to the
  "Member" role for Project1 for adminuser to be granted Member rights.

  The keystone code does not handle subgroup membership for group-role-
  assignment, so any subgroups of a group that is granted a role, will
  also have to be individually granted the role under the same
  project(s).

  In Active Directory, there is a memberOf OID subquery
  (memberOf:1.2.840.113556.1.4.1941:=) that allows the directory's ability to
  chase sub-group referrals to be harnessed and returns a list of user
  records. However, this is not how keystone ingests group members: keystone
  instead queries a group for its $group_member_attribute value(s).

  It would be very useful to either be able to identify a way to query
  group membership based on this OID query or to define a
  "group_subgroup_attribute" that keystone would perform cursive lookups
  through for further members of the role-assigned group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1815810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815799] [NEW] Attempting to delete a baremetal server places server into ERROR state

2019-02-13 Thread Lars Kellogg-Stedman
Public bug reported:

When deleting a baremetal server, we see:

  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] Traceback (most recent call last):
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2698, in 
do_terminate_instance
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] self._delete_instance(context, 
instance, bdms)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] rv = f(*args, **kwargs)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2635, in 
_delete_instance
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] self._shutdown_instance(context, 
instance, bdms)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2527, in 
_shutdown_instance
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] requested_networks)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] self.force_reraise()
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] six.reraise(self.type_, self.value, 
self.tb)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2514, in 
_shutdown_instance
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] block_device_info)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 1245, in 
destroy
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] self._cleanup_deploy(node, instance, 
network_info)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 461, in 
_cleanup_deploy
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] 
self._remove_instance_info_from_node(node, instance)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 396, in 
_remove_instance_info_from_node
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] self.ironicclient.call('node.update', 
node.uuid, patch)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/client_wrapper.py", line 
170, in call
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] return self._multi_getattr(client, 
method)(*args, **kwargs)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/ironicclient/v1/node.py", line 360, in update
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] params=params)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/ironicclient/common/base.py", line 232, in 
_update
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a] resp, body = 
self.api.json_request(method, url, **kwargs)
  2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance: 
dcb4f055-cda4-4d61-ba8f-976645c4e92a]   File 
"/usr/lib/python2.7/site-packages/ironicclient/common/http.py", line 678, in 
json_request
  2019-02-13 

[Yahoo-eng-team] [Bug 1815591] Re: Out-of-date configuration options and no cross-referencing in scheduler filter guide

2019-02-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/636635
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=194c8c4a5fee14799b816e726316409055706cb8
Submitter: Zuul
Branch:master

commit 194c8c4a5fee14799b816e726316409055706cb8
Author: Alexandra Settle 
Date:   Wed Feb 13 14:11:26 2019 +

Adding cross refs for config options in scheduler filter guide

Change-Id: I98b5f0d9e18197382bc3a74f8f57d3ba1b11d899
Closes-bug: #1815591


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815591

Title:
  Out-of-date configuration options and no cross-referencing in
  scheduler filter guide

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We document all the filter schedulers in [1]. Most of these take some
  kind of configuration options and we document this. However, there is
  no cross-referencing between these options and our overall
  configuration guide. This is a primarily a usability issue, but when
  options aren't cross-referenced, it also tends to lead to outdated
  docs as options get moved around. I suspect there are at least a few
  typos in here. We should address this by using the
  ':oslo.config:option:' and ':oslo.config:group:' roles throughout this
  guide.

  [1] https://docs.openstack.org/nova/rocky/user/filter-scheduler.html
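
  As an illustration of the fix (the option name here is only an example), a
  reference written as :oslo.config:option:`filter_scheduler.enabled_filters`
  is resolved by Sphinx against the generated configuration reference, so a
  renamed or relocated option can surface as a build warning instead of
  silently stale prose.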

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815797] [NEW] "rpc_response_max_timeout" configuration variable not present in fullstack tests

2019-02-13 Thread Rodolfo Alonso
Public bug reported:

The configuration variable "rpc_response_max_timeout" is not defined in
fullstack tests.

Error log: http://logs.openstack.org/52/636652/1/check/neutron-
fullstack/91b459a/logs/dsvm-fullstack-
logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_OVS,VLANs
,openflow-cli_/neutron-openvswitch-agent--2019-02-13--
15-54-07-617752.txt.gz

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815797

Title:
  "rpc_response_max_timeout" configuration variable not present in
  fullstack tests

Status in neutron:
  New

Bug description:
  The configuration variable "rpc_response_max_timeout" is not defined
  in fullstack tests.

  Error log: http://logs.openstack.org/52/636652/1/check/neutron-
  fullstack/91b459a/logs/dsvm-fullstack-
  
logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_OVS,VLANs
  ,openflow-cli_/neutron-openvswitch-agent--2019-02-13--
  15-54-07-617752.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662483] Re: detach_volume races with delete

2019-02-13 Thread Matt Riedemann
** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Gary Kotton (garyk)

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662483

Title:
  detach_volume races with delete

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  If a client does:

  nova volume-detach foo vol
  nova delete foo

  Assuming the volume-detach takes a moment, which it normally does, the
delete will race with it and also attempt to detach the same volume.
  It's possible there are no side effects from this other than untidy
  log messages, but this is difficult to prove.

  I found this looking through CI logs.

  Note that volume-detach can also race with other instance operations,
  including itself. I'm almost certain that if you poke hard enough
  you'll find some combination that breaks things badly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662483] Re: detach_volume races with delete

2019-02-13 Thread Matt Riedemann
** This bug is no longer a duplicate of bug 1683972
   Overlapping iSCSI volume detach/attach can leave behind broken SCSI devices 
and multipath maps.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662483

Title:
  detach_volume races with delete

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  If a client does:

  nova volume-detach foo vol
  nova delete foo

  Assuming the volume-detach takes a moment, which it normally does, the
  delete will race with it and also attempt to detach the same volume.
  It's possible there are no side effects from this other than untidy
  log messages, but this is difficult to prove.

  I found this looking through CI logs.

  Note that volume-detach can also race with other instance operations,
  including itself. I'm almost certain that if you poke hard enough
  you'll find some combination that breaks things badly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815792] [NEW] Unable to delete instance when volume detached at the same time

2019-02-13 Thread Gary Kotton
Public bug reported:

When using K8s on top of OpenStack, there is a race condition when a
persistent volume is deleted at the same time as an instance using that
volume. This results in the instance going into an error state.

201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
Traceback (most recent call last):
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2478, in 
do_terminate_instance
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self._delete_instance(context, instance, bdms, quotas)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
rv = f(*args, **kwargs)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2441, in 
_delete_instance
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
quotas.rollback()
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self.force_reraise()
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
six.reraise(self.type_, self.value, self.tb)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2405, in 
_delete_instance
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self._shutdown_instance(context, instance, bdms)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self._detach_volume_vmdk(connection_info, instance)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/volumeops.py", line 
609, in _detach_volume_vmdk
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self.detach_disk_from_vm(vm_ref, instance, device)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/volumeops.py", line 
157, in detach_disk_from_vm
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
vm_util.reconfigure_vm(self._session, vm_ref, vmdk_detach_config_spec)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vm_util.py", line 
2681, in reconfigure_vm
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
session._wait_for_task(reconfig_task)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File 

[Yahoo-eng-team] [Bug 1815793] [NEW] 'nova hypervisor-servers' returns unexpected results when working with Ironic

2019-02-13 Thread Tzu-Mainn Chen
Public bug reported:

Description
===
I have a baremetal environment using Nova to schedule on top of four Ironic 
baremetal nodes. I successfully deployed an instance to one of those nodes. 
Then I ran 'nova hypervisor-servers' against each of the nodes, and the 
instance showed up against every node.

Steps to reproduce
==
After registering the nodes in Ironic and creating the server, I can see that 
the server is deployed on a single Ironic node:

[stack@localhost compute(keystone_admin)]$ openstack hypervisor list
+----+--------------------------------------+-----------------+-------------+-------+
| ID | Hypervisor Hostname                  | Hypervisor Type | Host IP     | State |
+----+--------------------------------------+-----------------+-------------+-------+
|  1 | a00696d5-32ba-475e-9528-59bf11cffea6 | ironic          | 192.168.1.2 | up    |
|  2 | 534653c9-890d-4b25-9d6d-f4f260945384 | ironic          | 192.168.1.2 | up    |
|  5 | dba7d2e5-0013-48c7-8dc3-2ccc949f4715 | ironic          | 192.168.1.2 | up    |
|  7 | 015954fa-c900-4798-8c04-808a1504fe35 | ironic          | 192.168.1.2 | up    |
+----+--------------------------------------+-----------------+-------------+-------+
[stack@localhost compute(keystone_admin)]$ openstack server list
+--------------------------------------+------------+--------+----------------------------+---------+------------+
| ID                                   | Name       | Status | Networks                   | Image   | Flavor     |
+--------------------------------------+------------+--------+----------------------------+---------+------------+
| 8b09bb55-c288-4d95-8caf-60550a661350 | instance-0 | ACTIVE | provisioning=192.168.1.221 | centos7 | trait-test |
+--------------------------------------+------------+--------+----------------------------+---------+------------+
[stack@localhost compute(keystone_admin)]$ openstack baremetal node list
+--------------------------------------+---------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+--------------------------------------+-------------+--------------------+-------------+
| a00696d5-32ba-475e-9528-59bf11cffea6 | dell-14 | None                                 | power off   | available          | False       |
| 534653c9-890d-4b25-9d6d-f4f260945384 | dell-15 | None                                 | power off   | available          | False       |
| dba7d2e5-0013-48c7-8dc3-2ccc949f4715 | dell-12 | None                                 | power off   | available          | False       |
| 015954fa-c900-4798-8c04-808a1504fe35 | dell-13 | 8b09bb55-c288-4d95-8caf-60550a661350 | power on    | active             | False       |
+--------------------------------------+---------+--------------------------------------+-------------+--------------------+-------------+

Then, if I run 'nova hypervisor-servers', the one instance shows up for
multiple nodes:

[stack@localhost compute(keystone_admin)]$ nova hypervisor-servers 015954fa-c900-4798-8c04-808a1504fe35
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
| ID                                   | Name          | Hypervisor ID                        | Hypervisor Hostname                  |
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
| 8b09bb55-c288-4d95-8caf-60550a661350 | instance-0012 | 015954fa-c900-4798-8c04-808a1504fe35 | 015954fa-c900-4798-8c04-808a1504fe35 |
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
[stack@localhost compute(keystone_admin)]$ nova hypervisor-servers a00696d5-32ba-475e-9528-59bf11cffea6
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
| ID                                   | Name          | Hypervisor ID                        | Hypervisor Hostname                  |
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
| 8b09bb55-c288-4d95-8caf-60550a661350 | instance-0012 | a00696d5-32ba-475e-9528-59bf11cffea6 | a00696d5-32ba-475e-9528-59bf11cffea6 |
+--------------------------------------+---------------+--------------------------------------+--------------------------------------+
[stack@localhost compute(keystone_admin)]$ nova hypervisor-servers 534653c9-890d-4b25-9d6d-f4f260945384

[Yahoo-eng-team] [Bug 1815791] [NEW] Race condition causes Nova to shut off a successfully deployed baremetal server

2019-02-13 Thread Lars Kellogg-Stedman
Public bug reported:

When booting a baremetal server with Nova, we see Ironic report a
successful power on:

  ironic-conductor.log:2019-02-13 10:52:15.901 7 INFO
ironic.conductor.utils [req-774350ce-9392-4096-b66c-20ad3d794e4e
7a9b1ac45e084e7cbeb9cb740ffe8d08 41ea8af8d00e46438c7be3b182bbb53f -
default default] Successfully set node a00696d5-32ba-
475e-9528-59bf11cffea6 power state to power on by power on.

But seconds later, Nova (a) triggers a power state sync and then (b)
decides the node is in state "power off" and shuts it down:

nova-compute.log:2019-02-13 10:52:17.289 7 DEBUG nova.compute.manager 
[req-9bcae7d4-4201-40ea-a66c-c5954117f0e4 - - - - -] Triggering sync for uuid 
dcb4f055-cda4-4d61-ba8f-976645c4e92a _sync_power_states 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7516
nova-compute.log:2019-02-13 10:52:17.295 7 DEBUG 
oslo_concurrency.lockutils [-] Lock "dcb4f055-cda4-4d61-ba8f-976645c4e92a" 
acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:327
nova-compute.log:2019-02-13 10:52:17.344 7 WARNING nova.compute.manager 
[-] [instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Instance shutdown by 
itself. Calling the stop API. Current vm_state: active, current task_state: 
None, original DB power_state: 4, current VM power_state: 4
nova-compute.log:2019-02-13 10:52:17.345 7 DEBUG nova.compute.api [-] 
[instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Going to try to stop instance 
force_stop /usr/lib/python2.7/site-packages/nova/compute/api.py:2291

It looks like Nova is using stale cache data to make this decision.

jroll on irc suggests a solution may look like
https://review.openstack.org/#/c/636699/ (bypass cache data to verify
power state before shutting down the server).

This is with nova @ ad842aa and ironic @ 4404292.
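
A rough sketch of that idea (illustrative only; the use_cache keyword mirrors
the approach in the linked review and is an assumption here, not a confirmed
API):

    from nova.compute import power_state

    # Illustrative only: re-read the power state from the driver, bypassing
    # the cached value, before deciding to call the stop API.
    current = self.driver.get_info(instance, use_cache=False).state
    if current != power_state.SHUTDOWN:
        # The node is actually running; leave it alone.
        return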

** Affects: nova
 Importance: Undecided
 Assignee: Jim Rollenhagen (jim-rollenhagen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815791

Title:
  Race condition causes Nova to shut off a successfully deployed
  baremetal server

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When booting a baremetal server with Nova, we see Ironic report a
  successful power on:

ironic-conductor.log:2019-02-13 10:52:15.901 7 INFO
  ironic.conductor.utils [req-774350ce-9392-4096-b66c-20ad3d794e4e
  7a9b1ac45e084e7cbeb9cb740ffe8d08 41ea8af8d00e46438c7be3b182bbb53f -
  default default] Successfully set node a00696d5-32ba-
  475e-9528-59bf11cffea6 power state to power on by power on.

  But seconds later, Nova (a) triggers a power state sync and then (b)
  decides the node is in state "power off" and shuts it down:

nova-compute.log:2019-02-13 10:52:17.289 7 DEBUG nova.compute.manager 
[req-9bcae7d4-4201-40ea-a66c-c5954117f0e4 - - - - -] Triggering sync for uuid 
dcb4f055-cda4-4d61-ba8f-976645c4e92a _sync_power_states 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:7516
nova-compute.log:2019-02-13 10:52:17.295 7 DEBUG 
oslo_concurrency.lockutils [-] Lock "dcb4f055-cda4-4d61-ba8f-976645c4e92a" 
acquired by "nova.compute.manager.query_driver_power_state_and_sync" :: waited 
0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:327
nova-compute.log:2019-02-13 10:52:17.344 7 WARNING nova.compute.manager 
[-] [instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Instance shutdown by 
itself. Calling the stop API. Current vm_state: active, current task_state: 
None, original DB power_state: 4, current VM power_state: 4
nova-compute.log:2019-02-13 10:52:17.345 7 DEBUG nova.compute.api [-] 
[instance: dcb4f055-cda4-4d61-ba8f-976645c4e92a] Going to try to stop instance 
force_stop /usr/lib/python2.7/site-packages/nova/compute/api.py:2291

  It looks like Nova is using stale cache data to make this decision.

  jroll on irc suggests a solution may look like
  https://review.openstack.org/#/c/636699/ (bypass cache data to verify
  power state before shutting down the server).

  This is with nova @ ad842aa and ironic @ 4404292.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643991] Re: 504 Gateway Timeout when creating a port

2019-02-13 Thread James Denton
** Changed in: openstack-ansible
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643991

Title:
  504 Gateway Timeout when creating a port

Status in neutron:
  Invalid
Status in openstack-ansible:
  Invalid

Bug description:
  We are using OpenStack installed in containers and trying to create ports
  or networks using the Neutron CLI, but we receive this error message from
  the CLI: "504 Gateway Time-out The server didn't respond in time." and
  there is no error in the Neutron logs. Sometimes networks or ports are
  created and sometimes not.
  In another test, we try to perform a deploy; Neutron creates and then
  deletes the network or port, and the same error message appears in the
  Ironic log.

  The message error is like this:
  https://bugs.launchpad.net/fuel/+bug/1540346

  
  Error log from CLI:

  DEBUG: neutronclient.v2_0.client Error message:
  504 Gateway Time-out The server didn't respond in time.

  DEBUG: neutronclient.v2_0.client POST call to neutron for 
http://XXX.XX.XXX.XXX:9696/v2.0/ports.json used request id None
  ERROR: neutronclient.shell 504 Gateway Time-out
  The server didn't respond in time.
  
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
877, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
114, in run_command
  return cmd.run(known_args)
File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 324, in run
  return super(NeutronCommand, self).run(parsed_args)
File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 100, 
in run
  column_names, data = self.take_action(parsed_args)
File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
 line 407, in take_action
  data = obj_creator(body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 750, in create_port
  return self.post(self.ports_path, body=body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 365, in post
  headers=headers, params=params)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 300, in do_request
  self._handle_fault_response(status_code, replybody, resp)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 275, in _handle_fault_response
  exception_handler_v20(status_code, error_body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 91, in exception_handler_v20
  request_ids=request_ids)
  NeutronClientException: 504 Gateway Time-out
  The server didn't respond in time.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1643991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815771] [NEW] Credentials are not cached

2019-02-13 Thread Jose Castro Leon
Public bug reported:

While trying to improve the performance of EC2 credential validation, we
realized that the credentials are always fetched from the underlying
database.

If there is a flood of credential validations, this translates into an
increase of load on the database server that could impact the service.
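
A minimal sketch of the usual keystone caching pattern that could be applied
here (assuming keystone's common cache helpers; the lookup function and its
helper are illustrative, not the actual credential manager code):

    from keystone.common import cache

    # Illustrative only: memoize credential lookups so repeated validations
    # are served from the cache region instead of hitting the database.
    MEMOIZE = cache.get_memoization_decorator(group='credential')

    @MEMOIZE
    def _get_credential(credential_id):
        return _load_credential_from_db(credential_id)  # hypothetical helper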

** Affects: keystone
 Importance: Undecided
 Assignee: Jose Castro Leon (jose-castro-leon)
 Status: In Progress

** Description changed:

  While trying to improve performance on validation of EC2 credentials, we
  have just realized than the credentials are always fetched from the
  underlying database.
  
  If there is a flood on credential validation, this will transform in a
- load on the database server.
+ increase of load on the database server that could impact the service.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1815771

Title:
  Credentials are not cached

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  While trying to improve the performance of EC2 credential validation, we
  realized that the credentials are always fetched from the underlying
  database.

  If there is a flood of credential validations, this translates into an
  increase of load on the database server that could impact the service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1815771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749574] Re: [tracking] removal and migration of pycrypto

2019-02-13 Thread Mohammed Naser
** Changed in: openstack-ansible
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749574

Title:
  [tracking] removal and migration of pycrypto

Status in Barbican:
  In Progress
Status in Compass:
  New
Status in daisycloud:
  New
Status in OpenStack Backup/Restore and DR (Freezer):
  New
Status in Fuel for OpenStack:
  New
Status in OpenStack Compute (nova):
  Triaged
Status in openstack-ansible:
  Fix Released
Status in OpenStack Global Requirements:
  Fix Released
Status in pyghmi:
  Fix Committed
Status in Solum:
  Fix Released
Status in Tatu:
  New
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  trove
  tatu
  barbican
  compass
  daisycloud
  freezer
  fuel
  nova
  openstack-ansible - https://review.openstack.org/544516
  pyghmi - https://review.openstack.org/569073
  solum

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1749574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1807400] Re: networksegments table in neutron can not be cleared automatically

2019-02-13 Thread James Denton
Marking invalid for OSA. If this is still an issue, please submit for
Neutron project.

** Changed in: openstack-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1807400

Title:
  networksegments table in neutron can not be cleared automatically

Status in neutron:
  Invalid
Status in openstack-ansible:
  Invalid

Bug description:
  The _process_port_binding function in neutron/plugins/ml2/plugin.py uses
  clear_binding_levels to clear the ml2_port_binding_levels table, but it
  will not do anything to networksegments under a hierarchical port
  binding configuration:

  @db_api.context_manager.writer
  def clear_binding_levels(context, port_id, host):
      if host:
          for l in (context.session.query(models.PortBindingLevel).
                    filter_by(port_id=port_id, host=host)):
              context.session.delete(l)
          LOG.debug("For port %(port_id)s, host %(host)s, "
                    "cleared binding levels",
                    {'port_id': port_id,
                     'host': host})

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1807400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815763] [NEW] Unbound regex in config options

2019-02-13 Thread Jim Rollenhagen
Public bug reported:

Oslo.config uses re.search() to check config values against the allowed
regex. This checks if the regex matches anywhere in the string, rather
than checking if the entire string matches the regex.

Nova has three config options that appear as if the entire string should match 
the given regex:
* DEFAULT.instance_usage_audit_period
* cinder.catalog_info
* serial_console.port_range

However, these are not bounded with ^ and $ to ensure the entire string
matches.
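
A small Python illustration of the difference (the pattern below is a
simplified stand-in, not the exact regex of any of these options):

    import re

    pattern = r'hour|month|day|year'   # unanchored, simplified

    # re.search() accepts any value that merely contains a match somewhere.
    print(bool(re.search(pattern, 'bogus-month-bogus')))              # True

    # Anchoring with ^ and $ forces the entire value to match.
    print(bool(re.search(r'^(%s)$' % pattern, 'bogus-month-bogus')))  # False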

** Affects: nova
 Importance: Low
 Assignee: Jim Rollenhagen (jim-rollenhagen)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815763

Title:
  Unbound regex in config options

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Oslo.config uses re.search() to check config values against the
  allowed regex. This checks if the regex matches anywhere in the
  string, rather than checking if the entire string matches the regex.

  Nova has three config options that appear as if the entire string should 
match the given regex:
  * DEFAULT.instance_usage_audit_period
  * cinder.catalog_info
  * serial_console.port_range

  However, these are not bounded with ^ and $ to ensure the entire
  string matches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815762] [NEW] you can end up in a state where qvo* interfaces aren't owned by ovs which results in a dangling connection

2019-02-13 Thread Ian Kumlien
Public bug reported:

While upgrading to rocky, we ended up with a broken openvswitch
infrastructure and moved back to the old openvswitch.

We ended up with new machines working while old machines didn't, and it took
a while to realize that we had qvo* interfaces that were neither plugged nor
owned by ovs-system - basically the virtual equivalent of forgetting to plug
in the cable ;)

This was quickly addressed by running this bash-ism on all nodes:
for x in `ip a |grep qvo |grep @qvb |grep -v ovs-system | awk '{ print $2 '}` ; 
do y=${x%%"@"*} && ip link delete $y ; done ; docker restart nova_compute

However, nova could pretty easily sanity check this =)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815762

Title:
  you can end up in a state where qvo* interfaces aren't owned by ovs
  which results in a dangling connection

Status in OpenStack Compute (nova):
  New

Bug description:
  While upgrading to rocky, we ended up with a broken openvswitch
  infrastructure and moved back to the old openvswitch.

  We ended up with new machines working while old machines didn't, and it
  took a while to realize that we had qvo* interfaces that were neither
  plugged nor owned by ovs-system - basically the virtual equivalent of
  forgetting to plug in the cable ;)

  This was quickly addressed by running this bash-ism on all nodes:
  for x in `ip a |grep qvo |grep @qvb |grep -v ovs-system | awk '{ print $2 '}` 
; do y=${x%%"@"*} && ip link delete $y ; done ; docker restart nova_compute

  However, nova could pretty easily sanity check this =)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815758] [NEW] Error in ip_lib.get_devices_info() retrieving veth interface info

2019-02-13 Thread Rodolfo Alonso
Public bug reported:

In ip_lib.get_devices_info(), if the device retrieved is one of the
interfaces of a veth pair and the other one was created in another
namespace, the information about the second interface won't be available
in the list of interfaces of the first interface's namespace. Because of
this, it is not possible to assign the "parent_name" information in the
returned dict.

By default, if the interface is part of a veth pair, this key shouldn't be
populated.

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815758

Title:
  Error in ip_lib.get_devices_info() retrieving veth interface info

Status in neutron:
  New

Bug description:
  In ip_lib.get_devices_info(), if the device retrieved is one of the
  interfaces of a veth pair and its peer was created in another namespace,
  the information about the peer interface won't be available in the list
  of interfaces of the first interface's namespace. Because of this, it is
  not possible to assign the "parent_name" information in the returned
  dict.

  By default, if the interface is a veth pair, this key shouldn't be
  populated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815728] [NEW] Can delete port attached to VM without Nova noticing

2019-02-13 Thread David Rabel
Public bug reported:

In the Rocky release it is still possible to delete a port that is attached
to a VM as its primary network interface. Nova doesn't even seem to notice
when this happens. Shouldn't there be some kind of precaution from
Neutron's side?

Reproduce:

$ openstack port create ...
$ openstack server create --port ...
$ openstack port delete ...
$ openstack server show ...
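
To illustrate the kind of precaution being asked about, a minimal sketch
(not actual Neutron code) that refuses to delete a port still bound to an
instance; 'port' is assumed to be a dict with 'device_owner' and
'device_id' keys:

class PortInUse(Exception):
    pass

def check_port_deletable(port, force=False):
    # Refuse deletion while the port is still bound to a compute instance,
    # unless the caller explicitly forces it.
    if force:
        return
    if port.get('device_owner', '').startswith('compute:') and port.get('device_id'):
        raise PortInUse('Port is attached to instance %s; detach it first '
                        'or delete with force=True.' % port['device_id'])

# Usage sketch:
# check_port_deletable({'device_owner': 'compute:nova', 'device_id': 'abc'})
# -> raises PortInUse

Whether such a check belongs in Neutron or should be enforced through a
callback to Nova is exactly the question this report raises.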

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815728

Title:
  Can delete port attached to VM without Nova noticing

Status in neutron:
  New

Bug description:
  In the Rocky release it is still possible to delete a port that is
  attached to a VM as its primary network interface. Nova doesn't even seem
  to notice when this happens. Shouldn't there be some kind of precaution
  from Neutron's side?

  Reproduce:

  $ openstack port create ...
  $ openstack server create --port ...
  $ openstack port delete ...
  $ openstack server show ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815726] [NEW] nova-manage db sync --version command failed with stack trace.

2019-02-13 Thread Sagar Waghmare
Public bug reported:

nova-manage db sync --version <version> is failing with a stack
trace as below:

[stack@hostname ~]$ nova-manage db sync --version 392
.
.
.
ERROR: Could not access cell0.
Has the nova_api database been created?
Has the nova_cell0 database been created?
Has "nova-manage api_db sync" been run?
Has "nova-manage cell_v2 map_cell0" been run?
Is [api_database]/connection set in nova.conf?
Is the cell0 database connection URL correct?
Error: "Database schema file with version 392 doesn't exist."
An error has occurred:
Traceback (most recent call last):
  File "/opt/stack/nova/nova/cmd/manage.py", line 2357, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/opt/stack/nova/nova/cmd/manage.py", line 490, in sync
return migration.db_sync(version)
  File "/opt/stack/nova/nova/db/migration.py", line 26, in db_sync
return IMPL.db_sync(version=version, database=database, context=context)
  File "/opt/stack/nova/nova/db/sqlalchemy/migration.py", line 61, in db_sync
repository, version)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 186, 
in upgrade
return _migrate(url, repository, version, upgrade=True, err=err, **opts)
  File "", line 2, in _migrate
  File "/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", 
line 167, in with_engine
return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 345, 
in _migrate
changeset = schema.changeset(version)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 
82, in changeset
changeset = self.repository.changeset(database, start_ver, version)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/repository.py", 
line 225, in changeset
changes = [self.version(v).script(database, op) for v in versions]
  File "/usr/lib/python2.7/site-packages/migrate/versioning/repository.py", 
line 189, in version
return self.versions.version(*p, **k)
  File "/usr/lib/python2.7/site-packages/migrate/versioning/version.py", line 
174, in version
"exist.") % {'args': VerNum(vernum)})
VersionNotFoundError: "Database schema file with version 392 doesn't exist."


As the '--version' parameter was deprecated in the Pike cycle and will not be
supported in future versions of nova, the expected error message is as below:

ERROR: Could not access cell0.
Has the nova_api database been created?
Has the nova_cell0 database been created?
Has "nova-manage api_db sync" been run?
Has "nova-manage cell_v2 map_cell0" been run?
Is [api_database]/connection set in nova.conf?
Is the cell0 database connection URL correct?
Error: "Database schema file with version 392 doesn't exist."


Openstack version: openstack 3.17.0 (Rocky)
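
A minimal sketch of how the command could produce the expected output
instead of the traceback, assuming sqlalchemy-migrate raises
migrate.exceptions.VersionNotFoundError as the traceback suggests (this is
not the actual nova-manage code):

import sys

from migrate import exceptions as migrate_exceptions

def sync_with_clean_error(db_sync, version=None):
    # Wrap the db_sync() call shown at the top of the traceback so an
    # unknown schema version produces the one-line error instead of a
    # full stack trace.
    try:
        return db_sync(version)
    except migrate_exceptions.VersionNotFoundError as exc:
        sys.stderr.write('Error: %s\n' % exc)
        return 1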

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815726

Title:
  nova-manage db sync --version command failed with
  stack trace.

Status in OpenStack Compute (nova):
  New

Bug description:
  nova-manage db sync --version <version> is failing with a stack
  trace as below:

  [stack@hostname ~]$ nova-manage db sync --version 392
  .
  .
  .
  ERROR: Could not access cell0.
  Has the nova_api database been created?
  Has the nova_cell0 database been created?
  Has "nova-manage api_db sync" been run?
  Has "nova-manage cell_v2 map_cell0" been run?
  Is [api_database]/connection set in nova.conf?
  Is the cell0 database connection URL correct?
  Error: "Database schema file with version 392 doesn't exist."
  An error has occurred:
  Traceback (most recent call last):
File "/opt/stack/nova/nova/cmd/manage.py", line 2357, in main
  ret = fn(*fn_args, **fn_kwargs)
File "/opt/stack/nova/nova/cmd/manage.py", line 490, in sync
  return migration.db_sync(version)
File "/opt/stack/nova/nova/db/migration.py", line 26, in db_sync
  return IMPL.db_sync(version=version, database=database, context=context)
File "/opt/stack/nova/nova/db/sqlalchemy/migration.py", line 61, in db_sync
  repository, version)
File "/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 
186, in upgrade
  return _migrate(url, repository, version, upgrade=True, err=err, **opts)
File "", line 2, in _migrate
File 
"/usr/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 
167, in with_engine
  return f(*a, **kw)
File "/usr/lib/python2.7/site-packages/migrate/versioning/api.py", line 
345, in _migrate
  changeset = schema.changeset(version)
File "/usr/lib/python2.7/site-packages/migrate/versioning/schema.py", line 
82, in changeset
  changeset = self.repository.changeset(database, start_ver, version)
File "/usr/lib/python2.7/site-packages/migrate/versioning/repository.py", 
line 225, in changeset
  changes = [self.version(v).script(database, op) for v in versions]
File