[Yahoo-eng-team] [Bug 1832307] [NEW] Functional test neutron.tests.functional.agent.linux.test_ip_lib.IpMonitorTestCase. test_add_remove_ip_address_and_interface is failing

2019-06-11 Thread Slawek Kaplonski
Public bug reported:

Some examples of failure:

http://logs.openstack.org/60/635060/7/check/neutron-functional-python27/be0cb14/testr_results.html.gz

http://logs.openstack.org/64/663464/2/check/neutron-functional-python27/43cf142/testr_results.html.gz

http://logs.openstack.org/14/639814/10/check/neutron-functional-python27/82513e0/testr_results.html.gz

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832307

Title:
  Functional test
  neutron.tests.functional.agent.linux.test_ip_lib.IpMonitorTestCase.
  test_add_remove_ip_address_and_interface is failing

Status in neutron:
  Confirmed

Bug description:
  Some examples of failure:

  http://logs.openstack.org/60/635060/7/check/neutron-functional-python27/be0cb14/testr_results.html.gz

  http://logs.openstack.org/64/663464/2/check/neutron-functional-python27/43cf142/testr_results.html.gz

  http://logs.openstack.org/14/639814/10/check/neutron-functional-python27/82513e0/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1832307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703074] Re: No success message for deleting resources on network topology panel

2019-06-11 Thread Vishal Manchanda
Unable to reproduce this bug on the master branch, so marking it as invalid.
When I delete a resource such as a network or subnet from the network topology
panel, it always shows a success message after the resource deletion.


** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1703074

Title:
  No success message for deleting resources on network topology panel

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When adding a resource from the network topology panel, it shows a
  success message if the action is successful. However, when deleting a
  resource from the network topology panel, there’s no success message
  when the deletion is successful. I think it will be helpful to show
  the success message for the deletion as well.
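
  As an illustration, a minimal sketch of how such a toast could be raised
  (assuming Horizon's standard horizon.messages helper; the function and
  message text are hypothetical, not the actual topology-panel code):

    from django.utils.translation import ugettext_lazy as _

    from horizon import messages

    def notify_delete_succeeded(request, resource_name):
        # Give deletions the same visible feedback the "add resource"
        # path already provides.
        messages.success(request, _("Deleted resource: %s") % resource_name)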

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1703074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831435] Re: Access the snapshot page under the admin panel. If the snapshot is in the middle state, an error log will appear.

2019-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/662721
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=f0149ee0dbeb20779fc2af2e2f1be11ad2216493
Submitter: Zuul
Branch: master

commit f0149ee0dbeb20779fc2af2e2f1be11ad2216493
Author: pengyuesheng 
Date:   Mon Jun 3 16:57:06 2019 +0800

Add the group_snapshot attribute to the snapshot in UpdateRow

On the snapshot panel in the admin dashboard,
the group_snapshot attribute is required by the snapshot table,
but it does not exist in UpdateRow.

This patch adds the group_snapshot attribute to the snapshot.

Change-Id: I00fc431fa3c5b8da40e5b24507165a2f3dfead47
Closes-Bug: #1831435
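
As a rough sketch of that approach (assuming Horizon's usual AJAX UpdateRow
pattern and the standard openstack_dashboard.api.cinder.volume_snapshot_get
helper; the None fallback is an assumption, not necessarily the committed fix):

    from horizon import tables
    from openstack_dashboard.api import cinder

    class UpdateRow(tables.Row):
        ajax = True

        def get_data(self, request, snapshot_id):
            snapshot = cinder.volume_snapshot_get(request, snapshot_id)
            # The group_snapshot column reads this attribute, so make sure
            # a refreshed row always carries it.
            if not hasattr(snapshot, 'group_snapshot'):
                snapshot.group_snapshot = None
            return snapshot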


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1831435

Title:
  Access the snapshot page under the admin panel. If the snapshot is in
  the middle state, an error log will appear.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Access the snapshot page under the admin panel. If the snapshot is in an
  intermediate state, an error log appears:
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
      response = get_response(request)
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
      response = self.process_exception_by_middleware(e, request)
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 113, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
      return self.dispatch(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in dispatch
      return handler(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 223, in get
      handled = self.construct_tables()
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 209, in construct_tables
      preempted = table.maybe_preempt()
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1698, in maybe_preempt
      error = exceptions.handle(request, ignore=True)
    File "/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 348, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1694, in maybe_preempt
      new_row.load_cells(datum)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 619, in load_cells
      cell = table._meta.cell_class(datum, column, self)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 741, in __init__
      self.data = self.get_data(datum, column, row)
    File "/usr/lib/python2.7/site-packages/mistraldashboard/default/smart_cell.py", line 70, in get_data
      data = column.get_data(datum)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 424, in get_data
      data = self.get_raw_data(datum)
    File "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/snapshots/tables.py", line 191, in get_raw_data
      group_snapshot = snapshot.group_snapshot
    File "/usr/share/openstack-dashboard/openstack_dashboard/api/base.py", line 120, in __getattribute__
      return object.__getattribute__(self, attr)
  AttributeError: 'VolumeSnapshot' object has no attribute 'group_snapshot'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1831435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832380] [NEW] Unable to list instances in horizon if instance launched from image which has switched from public to private

2019-06-11 Thread Sean Murphy
Public bug reported:

We have an operational cluster in which we provide a set of images to
users; we update these images every few months (security patches etc).

We upgraded from Rocky to Stein and we are now unable to list any
instances in Horizon when an instance in the project has been launched
from an image which was once public but has since been switched to
private (older operating systems, basically).

When we make the image public, we can again list the instances.

Horizon version: kolla/ubuntu-source-horizon:stein

See logs below - the key is the id of the image that has been switched
from public to private.


[Tue Jun 11 18:08:52.920855 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822] Internal Server Error: /project/instances/
[Tue Jun 11 18:08:52.921010 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822] Traceback (most recent call last):
[Tue Jun 11 18:08:52.921026 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
[Tue Jun 11 18:08:52.921036 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     response = get_response(request)
[Tue Jun 11 18:08:52.921044 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
[Tue Jun 11 18:08:52.921054 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     response = self.process_exception_by_middleware(e, request)
[Tue Jun 11 18:08:52.921062 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
[Tue Jun 11 18:08:52.921090 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     response = wrapped_callback(request, *callback_args, **callback_kwargs)
[Tue Jun 11 18:08:52.921100 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
[Tue Jun 11 18:08:52.921108 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return view_func(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921116 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
[Tue Jun 11 18:08:52.921124 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return view_func(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921132 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
[Tue Jun 11 18:08:52.921139 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return view_func(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921147 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 113, in dec
[Tue Jun 11 18:08:52.921155 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return view_func(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921162 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
[Tue Jun 11 18:08:52.921170 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return view_func(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921178 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
[Tue Jun 11 18:08:52.921188 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return self.dispatch(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921197 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in dispatch
[Tue Jun 11 18:08:52.921205 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     return handler(request, *args, **kwargs)
[Tue Jun 11 18:08:52.921213 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]   File "/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/tables/views.py", line 223, in get
[Tue Jun 11 18:08:52.921220 2019] [wsgi:error] [pid 34:tid 140549440083712] [remote 192.168.20.104:42822]     handled =
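
The log is truncated above, but the pattern matches the image lookup failing
once the image is no longer visible to the project. A hedged sketch of a
defensive lookup (helper names follow openstack_dashboard.api conventions;
the fall-back-to-ID behaviour is an assumption, not the actual fix):

    from glanceclient import exc as glance_exc

    from openstack_dashboard.api import glance

    def image_label(request, instance):
        # If the image was switched from public to private, the user's
        # token can no longer see it; show the raw ID instead of letting
        # the whole instance list return a 500.
        image_id = instance.image['id']
        try:
            return glance.image_get(request, image_id).name
        except glance_exc.HTTPNotFound:
            return image_id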

[Yahoo-eng-team] [Bug 1617452] Re: Can't update MAC address for direct-physical ports

2019-06-11 Thread Brian Haley
*** This bug is a duplicate of bug 1830383 ***
https://bugs.launchpad.net/bugs/1830383

This bug was fixed in a different way in
https://review.opendev.org/#/c/661298/ - I'll mark as a duplicate of the
bug mentioned there.

** This bug has been marked a duplicate of bug 1830383
   SRIOV: MAC address in use error

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617452

Title:
  Can't update MAC address for direct-physical ports

Status in neutron:
  Confirmed

Bug description:
  This bug also affects nova and is described in detail there:
  https://bugs.launchpad.net/nova/+bug/1617429

  Nova needs to be fixed in order to update the MAC address of the
  neutron ports of type direct-physical.

  A fix has been proposed for the nova issue.  However, sending a MAC
  address update to neutron-server reports the following error:

  Unable to complete operation on port
  d19c4cef-7415-4113-ba92-2495f00384d2, port is already bound, port
  type: hostdev_physical, old_mac 90:e2:ba:48:27:ed, new_mac
  00:1e:67:51:36:71.

  
  Description:
  

  Booting a guest with a neutron port of type 'direct-physical' will
  cause nova to allocate a PCI passthrough device for the port. The MAC
  address of the PCI passthrough device in the guest is not a virtual
  MAC address (fa:16:...) but the MAC address of the physical device
  since the full device is allocated to the guest (compared to SR-IOV
  where a virtual MAC address is arbitrarily chosen for the port).

  When resizing the guest (to another flavor), nova will allocate a new
  PCI device for the guest. After the resize, the guest will be bound to
  another PCI device which has a different MAC address. However the MAC
  address on the neutron port is not updated, causing DHCP to not work
  because the MAC address is unknown.

  The same issue can be observed when migrating a guest to another host.
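
  For reference, the rejected update can be reproduced with a few lines of
  openstacksdk (a sketch only; the cloud name is a hypothetical clouds.yaml
  entry, and the port ID and MACs are taken from the error above):

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # On a bound direct-physical port, this is the update that
    # neutron-server rejects with the "port is already bound" error.
    port = conn.network.find_port('d19c4cef-7415-4113-ba92-2495f00384d2')
    conn.network.update_port(port, mac_address='00:1e:67:51:36:71')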

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832392] [NEW] nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words failing in py27 jobs since June 10

2019-06-11 Thread Matt Riedemann
*** This bug is a duplicate of bug 1804062 ***
https://bugs.launchpad.net/bugs/1804062

Public bug reported:

Seen here:

http://logs.openstack.org/61/660761/7/check/openstack-tox-py27/b10a9a6/testr_results.html.gz

ft1.34: nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words_StringException: Traceback (most recent call last):
  File "nova/tests/unit/test_hacking.py", line 593, in test_check_doubled_words
    expected_errors=errors)
  File "nova/tests/unit/test_hacking.py", line 295, in _assert_has_errors
    self.assertEqual(expected_errors or [], actual_errors)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: [(1, 0, 'N343')] != []

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%20%5B(1%2C%200%2C%20'N343')%5D%20!%3D%20%5B%5D%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

Looks like this is happening on master and stable/stein, probably due to
bionic nodes being used. It's happening on multiple node providers so
must be something new in bionic py27 as of June 10.

I can't really tell what version of python 2.7 is on the nodes when this
fails though.

http://changelogs.ubuntu.com/changelogs/pool/main/p/python2.7/python2.7_2.7.15~rc1-1ubuntu0.1/changelog
isn't showing me any changes since 2018 either.

** Affects: nova
 Importance: High
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832392

Title:
  nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words
  failing in py27 jobs since June 10

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/61/660761/7/check/openstack-tox-py27/b10a9a6/testr_results.html.gz

  ft1.34: nova.tests.unit.test_hacking.HackingTestCase.test_check_doubled_words_StringException: Traceback (most recent call last):
    File "nova/tests/unit/test_hacking.py", line 593, in test_check_doubled_words
      expected_errors=errors)
    File "nova/tests/unit/test_hacking.py", line 295, in _assert_has_errors
      self.assertEqual(expected_errors or [], actual_errors)
    File "/home/zuul/src/opendev.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/opendev.org/openstack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: [(1, 0, 'N343')] != []

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22testtools.matchers._impl.MismatchError%3A%20%5B(1%2C%200%2C%20'N343')%5D%20!%3D%20%5B%5D%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

  Looks like this is happening on master and stable/stein, probably due
  to bionic nodes being used. It's happening on multiple node providers
  so must be something new in bionic py27 as of June 10.

  I can't really tell what version of python 2.7 is on the nodes when
  this fails though.

  
http://changelogs.ubuntu.com/changelogs/pool/main/p/python2.7/python2.7_2.7.15~rc1-1ubuntu0.1/changelog
  isn't showing me any changes since 2018 either.
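
  For context, the failing test exercises nova's doubled-word hacking rule.
  A simplified stand-in for what an N343-style check does (an illustration
  only, not nova's exact implementation or regex):

    import re

    # Flag the same word repeated back-to-back ("the the", "is is").
    DOUBLED_WORD_RE = re.compile(r"\b(\w+)\s+\1\b")

    def check_doubled_words(logical_line):
        match = DOUBLED_WORD_RE.search(logical_line)
        if match:
            yield (match.start(1),
                   "N343: Found doubled word '%s'" % match.group(1))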

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1832392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824239] Re: RFE: predictable role ids

2019-06-11 Thread OpenStack Infra
** Changed in: keystone
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1824239

Title:
  RFE: predictable role ids

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Make it possible to know what the ID of a role will be prior to
  creating it. This allows synchronization between multiple keystone
  servers.
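
  One possible illustration of the idea (not necessarily keystone's
  eventual scheme): derive the role ID deterministically from the role
  name under a fixed UUID5 namespace, so every server computes the same
  ID before the role is ever created:

    import uuid

    # Hypothetical namespace shared by all cooperating keystone servers.
    ROLE_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, 'roles.example.org')

    def predictable_role_id(role_name):
        # Same name -> same ID on every deployment, known in advance.
        return uuid.uuid5(ROLE_NAMESPACE, role_name).hex

    assert predictable_role_id('admin') == predictable_role_id('admin')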

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1824239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1780973] Re: Failure during live migration leaves BDM with incorrect connection_info

2019-06-11 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Status: New => Fix Released

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/stein
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1780973

Title:
  Failure during live migration leaves BDM with incorrect
  connection_info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  Fix Released

Bug description:
  _rollback_live_migration doesn't restore connection_info.
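
  A sketch of the missing rollback step (assuming nova-style
  BlockDeviceMapping objects; the source_info mapping is a hypothetical
  snapshot of each volume's connection_info taken before the migration):

    from oslo_serialization import jsonutils

    def restore_connection_info(bdms, source_info):
        for bdm in bdms:
            saved = source_info.get(bdm.volume_id)
            if saved is not None:
                # connection_info is stored on the BDM as a JSON string.
                bdm.connection_info = jsonutils.dumps(saved)
                bdm.save()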

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1780973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821696] Re: Failed to start instances with encrypted volumes

2019-06-11 Thread Matt Riedemann
** Changed in: nova/stein
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821696

Title:
  Failed to start instances with encrypted volumes

Status in kolla-ansible:
  Fix Committed
Status in kolla-ansible rocky series:
  New
Status in kolla-ansible stein series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  Fix Released

Bug description:
  Description
  ===========
  We hit this bug after doing a complete cluster shutdown due to server room 
maintenance. The bug is however more easily reproducible.

  When cold starting an instance with an encrypted volume attached, it
  fails to start with a VolumeEncryptionNotSupported error.

  https://github.com/openstack/os-brick/blob/stable/rocky/os_brick/encryptors/cryptsetup.py#L52

  Steps to reproduce
  ==================

  * Deploy Openstack with Barbican support using Kolla.
  * Create an encrypted volume type
  * Create an encrypted volume
  * Create an instance and attach the encrypted volume
  * Enjoy your new instance and volume, install software and store data
  * In our case, we shut down the entire cluster and restarted it again. First 
all instances were stopped in Horizon using Shut down instance command. We use 
Ceph so we then stopped that using these procedures 
https://ceph.com/planet/how-to-do-a-ceph-cluster-maintenance-shutdown/ and then 
shut down the compute / storage nodes and then the controller nodes one by one. 
Then we started the cluster in the reverse order, verified all services were up 
and running, examined logs and then started the instances. 
  * Instances without encrypted volumes started fine.
  * Instances with encrypted volumes fail to start with 
VolumeEncryptionNotSupported.

  Note: It is possible to recreate the problem by using a Hard Reboot
  (possibly related https://bugs.launchpad.net/nova/+bug/1597234) or by
  shutting down instances and then restarting all Openstack services on
  that compute node.

  Expected results
  ================
  Instances with encrypted volumes should start fine, even after a Hard Reboot 
or a complete cluster shutdown.

  Actual results
  ==============
  Instances with encrypted volumes failed to start with 
VolumeEncryptionNotSupported

  https://pastebin.com/mvMbJQRb

  Environment
  ===========

  1. Openstack version
  Environment is established by Kolla (Rocky release).

  2. Hypervisor
  KVM on RHEL

  3. Storage type
  Ceph using Kolla (Rocky release)

  Analysis
  ========
  There seems to be a problem related to this code not behaving as expected:

  https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/driver.py#L1049

  It seems that it is expected that the exception should be ignored and
  logged, but for some reason, the `ctxt.reraise = False` does not work
  as expected:

  self.force_reraise() is called in
  https://github.com/openstack/oslo.utils/blob/stable/rocky/oslo_utils/excutils.py#L220
  which should not be hit, since `reraise` is expected to be `False`.
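
  For reference, the documented pattern looks like this (a minimal sketch;
  the failing call is simulated):

    import logging

    from oslo_utils import excutils

    LOG = logging.getLogger(__name__)

    def attach_encrypted_volume():
        raise RuntimeError('simulated VolumeEncryptionNotSupported')

    try:
        attach_encrypted_volume()
    except Exception:
        with excutils.save_and_reraise_exception() as ctxt:
            LOG.warning('Attach failed; continuing without the volume')
            # With reraise set to False, __exit__ should swallow the
            # exception instead of calling force_reraise().
            ctxt.reraise = False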

  We did some hacking and just swallowed the exception by commenting out
  the `excutils.save_and_reraise_exception()` section and replacing it
  with a simple `pass`.

  Then the instance booted - but it could not boot from the image.
  However, it was then possible to remove the encrypted volume attachment,
  reboot the server and then reattach the encrypted volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1821696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830926] Re: Links to reno are incorrect

2019-06-11 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/stein
   Status: New => Fix Released

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/rocky
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

** Changed in: nova/stein
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830926

Title:
  Links to reno are incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  Fix Released

Bug description:
  There are multiple links to reno in the "release notes" section of the
  contributor guide:

  https://docs.openstack.org/nova/stein/contributor/releasenotes.html

  These are versioned links but reno is unversioned. This results in
  broken links on stable branches like the above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1830926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737131] Re: Superfluous re-mount attempts with the Quobyte Nova driver and multi-registry volume URLs

2019-06-11 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova/stein
   Importance: Undecided => Low

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Silvan Kaiser (2-silvan)

** Changed in: nova/rocky
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737131

Title:
  Superfluous re-mount attempts with the Quobyte Nova driver and multi-
  registry volume URLs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  Fix Committed

Bug description:
  When using a multi-registry volume URL in the Cinder Quobyte driver,
  the Nova Quobyte driver does not detect existing mounts correctly.
  Upon trying to mount the given volume, the driver fails because the
  mount already exists:

  [..]
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/virt/block_device.py", line 389, in attach
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     device_type=self['device_type'], encryption=encryption)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1248, in attach_volume
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     self._connect_volume(connection_info, disk_info, instance)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1181, in _connect_volume
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     vol_driver.connect_volume(connection_info, disk_info, instance)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/virt/libvirt/volume/quobyte.py", line 147, in connect_volume
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     CONF.libvirt.quobyte_client_cfg)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/virt/libvirt/volume/quobyte.py", line 61, in mount_volume
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     utils.execute(*command)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/opt/stack/nova/nova/utils.py", line 229, in execute
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     return processutils.execute(*cmd, **kwargs)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 419, in execute
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server     cmd=sanitized_cmd)
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server Command: mount.quobyte --disable-xattrs 78.46.57.153:7861,78.46.57.153:7861,78.46.57.153:7861/82000e41-c6ac-4be2-b31a-0543db93767c /mnt/quobyte-volume/531b7439e360bdea0a79870354906cab
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server Exit code: 4
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server Stdout: u'mount.quobyte failed: Unable to initialize mount point\n'
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server Stderr: u"Logging to file /opt/stack/logs/quobyte_client.log.\nfuse: mountpoint is not empty\nfuse: if you are sure this is safe, use the 'nonempty' mount option\n"
  2017-12-08 08:32:29.277 25660 ERROR oslo_messaging.rpc.server
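
  A hedged sketch of mount detection that tolerates multi-registry URLs
  (the helper is hypothetical; it compares only the volume component after
  the last '/', ignoring how many registries precede it):

    def is_mounted(mount_path, quobyte_url):
        # 'a:7861,b:7861/vol-id' and 'a:7861/vol-id' refer to the same
        # mount, so compare only the trailing volume name.
        volume = quobyte_url.rsplit('/', 1)[-1]
        with open('/proc/mounts') as mounts:
            for line in mounts:
                device, mountpoint = line.split()[:2]
                if (mountpoint == mount_path and
                        device.rsplit('/', 1)[-1] == volume):
                    return True
        return False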

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831309] Re: Nova System Architecture in nova

2019-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/663175
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=ebddacf001bfc7bf0a1388ef3fd166d0996b09cc
Submitter: Zuul
Branch: master

commit ebddacf001bfc7bf0a1388ef3fd166d0996b09cc
Author: Takashi NATSUME 
Date:   Wed Jun 5 09:00:00 2019 +0900

Replace 'is comprised of' with 'comprises'

Replace 'is comprised of' with 'comprises'
because the 'is comprised of' is disputed use.

Change-Id: If66d2e4583b00102d52635457f3c3f8c2adee1be
Closes-Bug: #1831309


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831309

Title:
  Nova System Architecture in nova

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  - [x] This doc is inaccurate in this way: Wrong use of the word "comprise"
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: input and output.

  "Nova is comprised of multiple server processes"

  "to comprise" means "to be composed of" so the above phrase, would
  read as "Nova is composed of of multiple server processes."

  The correct phrased would either be:

  "Nova comprises multiple server processes" or "Nova is composed of
  multiple server processes"

  ---
  Release:  on 2018-10-30 10:13:24
  SHA: 6e4ab9714cc0ca147f61997aa7b68f88185ade5c
  Source: https://opendev.org/openstack/nova/src/doc/source/user/architecture.rst
  URL: https://docs.openstack.org/nova/latest/user/architecture.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1831309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815167] Re: Manage Compute services in nova manual typos

2019-06-11 Thread Takashi NATSUME
This bug has been fixed in https://review.opendev.org/#/c/638475/.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815167

Title:
  Manage Compute services in nova manual typos

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: input and output.

   openstack compute service set --disable --disable-reason trial log nova nova-compute
   openstack compute service set --disable --disable-reason "trial log" nova nova-compute

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.1.8.dev9 on 2019-02-05 18:29
  SHA: 415c94cdf8cc5a5288bdf00fad0fed2ee79f411c
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/services.rst
  URL: https://docs.openstack.org/nova/pike/admin/services.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1754048] Re: Federated domain is reported when validating a federated token

2019-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/653068
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=c2be944fb89f94a10d7105b2e072eeab5582c5a7
Submitter: Zuul
Branch: master

commit c2be944fb89f94a10d7105b2e072eeab5582c5a7
Author: Kristi Nikolla 
Date:   Tue Apr 16 14:11:36 2019 -0400

Report correct domain in federated user token

Regardless of what domain the user was in, the domain reported in
the token would be hardcoded to 'Federated' (regardless of the
federated_domain_name config option).

This patch removes the places where the domain was overwritten,
and allows the correct domain to flow to the rendered token.
It also updates the tests where it was being checked for
the 'Federated' domain.

Change-Id: Idad4e077c488d87f75172664fb519232eb78e292
Closes-Bug: 1754048


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1754048

Title:
  Federated domain is reported when validating a federated token

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Prior to introducing per-idp domains, all federated users lived in the
  Federated domain. That is not the case anymore, but Keystone keeps
  reporting that federated users are part of that domain rather than
  their per-idp domains.

  Token validation: http://paste.openstack.org/show/693652/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1754048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831180] Re: Image Name is optional parameter on create and update image form

2019-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/662374
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=9f18dde70e11a70a04c5fa89b900dfa3a7887c03
Submitter: Zuul
Branch: master

commit 9f18dde70e11a70a04c5fa89b900dfa3a7887c03
Author: pengyuesheng 
Date:   Fri May 31 11:12:30 2019 +0800

Image Name is optional parameter on create and update image form

Change-Id: I36f26a30311a5827f8f411d1e3ed40ac434188b1
Closes-Bug: #1831180


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1831180

Title:
  Image Name is optional parameter on create and update image form

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Image Name is optional parameter on create and update image form

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1831180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825961] Re: Horizon displays an error message on an empty instances list page

2019-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/653675
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=36515b38b2ab990aa16e14e9a410b710df661652
Submitter: Zuul
Branch: master

commit 36515b38b2ab990aa16e14e9a410b710df661652
Author: Pavlo Shchelokovskyy 
Date:   Thu Apr 18 10:08:44 2019 +0300

Do not try to access sets by index

code in neutron api in some circumstances tries to access
a set or a frozenset by index, which does not work.
In particular it may manifest as

Unable to connect to Neutron: 'frozenset' object has
no attribute '__getitem__'

error in the horizon.log when opening an empty instances list page.

Convert to list instead, and skip conversion for `collections.Sequence`
objects only that support all the methods used for this object
further in this method.

Closes-bug: #1825961
Change-Id: I141a28d96f71c06a1ebe44d7067ccf4609e22db6
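
A rough sketch of the conversion described above (illustrative only; the
actual patch lives in horizon's neutron API module and targeted the py2-era
collections.Sequence):

    import collections.abc

    def ensure_indexable(values):
        # Sequences (list, tuple, str) already support indexing; sets and
        # frozensets do not, so copy them into a list first.
        if isinstance(values, collections.abc.Sequence):
            return values
        return list(values)

    assert ensure_indexable(frozenset({'net-1'}))[0] == 'net-1'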


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1825961

Title:
  Horizon displays an error message on an empty instances list page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The following error is observed in the Horizon log:
  Unable to connect to Neutron: 'frozenset' object has no attribute '__getitem__'

  The error happens on the network_list requests when the parameter list
  is too long.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1825961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp