[Yahoo-eng-team] [Bug 1715374] Re: Reloading compute with SIGHUP prevents instances from booting

2019-10-21 Thread pkrev
** Changed in: openstack-ansible
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715374

Title:
  Reloading compute with SIGHUP prevents instances from booting

Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.service:
  Fix Released
Status in tripleo:
  Won't Fix

Bug description:
  When trying to boot a new instance on a compute node whose nova-
  compute service had received SIGHUP (SIGHUP is used as a trigger for
  reloading mutable configuration options), the boot always failed.

  == nova/compute/manager.py ==
  def cancel_all_events(self):
      if self._events is None:
          LOG.debug('Unexpected attempt to cancel events during shutdown.')
          return
      our_events = self._events
      # NOTE(danms): Block new events
      self._events = None    <--- Sets self._events to "None"
      ...
  ==============================

This causes a NovaException to be raised whenever
prepare_for_instance_event() is subsequently called, which is why
network allocation fails.

  == nova/compute/manager.py ==
  def prepare_for_instance_event(self, instance, event_name):
      ...
      if self._events is None:
          # NOTE(danms): We really should have a more specific error
          # here, but this is what we use for our default error case
          raise exception.NovaException('In shutdown, no new events '
                                        'can be scheduled')
  ==============================
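The interaction above can be sketched outside of nova. This is a minimal,
simplified stand-in for nova's instance-event tracking (the class name and
the RuntimeError are placeholders; the real code raises
exception.NovaException):

```python
class InstanceEvents:
    """Simplified stand-in for nova's instance-event tracking."""

    def __init__(self):
        self._events = {}  # instance_uuid -> {event_name: status}

    def cancel_all_events(self):
        if self._events is None:
            return  # already cancelled
        # Block new events by replacing the dict with None; note that
        # nothing ever resets it back to a dict after a SIGHUP reload.
        self._events = None

    def prepare_for_instance_event(self, instance_uuid, event_name):
        if self._events is None:
            # Every subsequent boot hits this branch after the reload.
            raise RuntimeError('In shutdown, no new events can be scheduled')
        self._events.setdefault(instance_uuid, {})[event_name] = 'pending'


events = InstanceEvents()
events.prepare_for_instance_event('uuid-1', 'network-vif-plugged')  # works
events.cancel_all_events()  # what the SIGHUP handler effectively triggers
try:
    events.prepare_for_instance_event('uuid-2', 'network-vif-plugged')
except RuntimeError as exc:
    print(exc)  # In shutdown, no new events can be scheduled
```

The point of the sketch is that cancel_all_events() is a one-way door:
intended for shutdown, it leaves the service unable to schedule events when
it is reused as part of a reload.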

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708505] Re: create encrypted volume fails

2019-10-21 Thread Keith Berger
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708505

Title:
  create encrypted volume fails

Status in Cinder:
  Opinion
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Stacked with current devstack as of 8/3/2017 (Pike).

  enable barbican in local.conf
  [[local|localrc]]
  enable_plugin barbican https://git.openstack.org/openstack/barbican

  
  Once devstack finishes and services are up, /etc/cinder/cinder.conf
  contains:
  ...
  [key_manager]
  api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager


  From the command line:

  $ cinder list
  +----+--------+------+------+-------------+----------+-------------+
  | ID | Status | Name | Size | Volume Type | Bootable | Attached to |
  +----+--------+------+------+-------------+----------+-------------+
  +----+--------+------+------+-------------+----------+-------------+

  $ cinder type-list
  +--------------------------------------+------+-------------+-----------+
  | ID                                   | Name | Description | Is_Public |
  +--------------------------------------+------+-------------+-----------+
  | 0be4eb35-7835-4a3b-89f8-fc71e9c303a2 | lvm  | -           | True      |
  | ba936fd6-d01a-40f9-82fc-933b9bd9da75 | nfs  | -           | True      |
  +--------------------------------------+------+-------------+-----------+

  $ cinder type-create LUKS
  +--------------------------------------+------+-------------+-----------+
  | ID                                   | Name | Description | Is_Public |
  +--------------------------------------+------+-------------+-----------+
  | d1e9a6bc-c2bf-4d57-b1c7-0b6440833606 | LUKS | -           | True      |
  +--------------------------------------+------+-------------+-----------+

  $ cinder type-key LUKS set volume_backend_name=lvm

  $ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \
        --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
  +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
  | Volume Type ID                       | Provider                                  | Cipher          | Key Size | Control Location |
  +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
  | d1e9a6bc-c2bf-4d57-b1c7-0b6440833606 | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 | 512      | front-end        |
  +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+

  $ cinder create --volume-type LUKS --name test 1
  ERROR: Key manager error (HTTP 400) (Request-ID: req-b49e8300-5076-4c62-9831-9dbfec61e2ee)



  
  cinder-api.log

  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR castellan.key_manager.barbican_key_manager [None req-b49e8300-5076-4c62-9831-9dbfec61e2ee admin admin] Order is in ERROR status - status code: 500, status reason: Process TypeOrder failure seen - please contact site administrator.
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume [None req-b49e8300-5076-4c62-9831-9dbfec61e2ee admin admin] Key manager error: KeyManagerError: Key manager error: Order is in ERROR status - status code: 500, status reason: Process TypeOrder failure seen - please contact site administrator.
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume Traceback (most recent call last):
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume   File "/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 400, in _get_encryption_key_id
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume     length=length)
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume   File "/usr/local/lib/python2.7/dist-packages/castellan/key_manager/barbican_key_manager.py", line 229, in create_key
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume     order = self._get_active_order(barbican_client, order_ref)
  Aug 03 17:56:47 master-vm devstack@c-api.service[13448]: ERROR cinder.volume.flows.api.create_volume 

[Yahoo-eng-team] [Bug 1849196] Re: Remove the 512 bit key option for aes-xts-plain64 encrypted volumes

2019-10-21 Thread Keith Berger
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1849196

Title:
  Remove the 512 bit key option for aes-xts-plain64 encrypted volumes

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The key size listed for encrypted volumes using aes-xts-plain64 is not
  correct. If you use 512, you will get an error about an unsupported
  key size. This has to do with how barbican receives the key
  information from cinder.

  
https://github.com/openstack/cinder/blob/master/cinder/volume/volume_utils.py#L919

  does not pass a "mode" so this block

  
https://github.com/openstack/barbican/blob/stable/rocky/barbican/plugin/crypto/simple_crypto.py#L222

  evaluates to 512 and this is not present in this list

  
https://github.com/openstack/barbican/blob/stable/rocky/barbican/plugin/crypto/base.py#L64

  The following docs need to be updated to reflect only a 256 bit key.

  https://docs.openstack.org/horizon/train/admin/manage-volumes.html
  https://docs.openstack.org/horizon/stein/admin/manage-volumes.html
  https://docs.openstack.org/horizon/rocky/admin/manage-volumes.html
  https://docs.openstack.org/horizon/queens/admin/manage-volumes.html

  
  Also the text needs to be updated:

  Key Size (bits)

    512 (Recommended for aes-xts-plain64. 256 should be used for aes-cbc-essiv)
        Using this selection for aes-xts, the underlying key size would only be 256-bits*

    256
        Using this selection for aes-xts, the underlying key size would only be 128-bits*
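For context, a hedged sketch of the arithmetic behind the failure (function
names here are illustrative, not barbican's actual API): XTS splits the
supplied key into two equal halves, a data key and a tweak key, so the
effective AES strength is half the nominal key size, and barbican's
simple_crypto plugin only accepts the standard AES sizes.

```python
SUPPORTED_AES_BIT_LENGTHS = [128, 192, 256]  # standard AES key sizes

def effective_aes_bits(cipher: str, requested_bits: int) -> int:
    """Effective per-AES-key strength for a requested key size."""
    if 'xts' in cipher:
        # XTS consumes two keys of equal length: one for data, one for
        # the tweak, so only half the bits go to the AES data key.
        return requested_bits // 2
    return requested_bits

def barbican_accepts(requested_bits: int) -> bool:
    # Without a "mode" hint from cinder, the raw requested size is
    # validated against the AES list, so 512 is rejected outright.
    return requested_bits in SUPPORTED_AES_BIT_LENGTHS

print(effective_aes_bits('aes-xts-plain64', 512))  # 256
print(barbican_accepts(512))  # False -> "unsupported key size" error
print(barbican_accepts(256))  # True; effective AES strength is 128 bits
```

This is why the docs above should recommend 256 for aes-xts-plain64: the
request is accepted and the underlying AES key is 128 bits.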

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1849196/+subscriptions



[Yahoo-eng-team] [Bug 1840465] Re: [SRU] Fails to list security groups if one or more exists without rules

2019-10-21 Thread Corey Bryant
** Changed in: horizon (Ubuntu Eoan)
   Status: In Progress => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/queens
   Importance: Undecided => Medium

** Changed in: cloud-archive/queens
   Status: New => Triaged

** Changed in: cloud-archive/rocky
   Importance: Undecided => Medium

** Changed in: cloud-archive/rocky
   Status: New => Triaged

** Changed in: cloud-archive/stein
   Importance: Undecided => Medium

** Changed in: cloud-archive/stein
   Status: New => Triaged

** Changed in: cloud-archive/train
   Status: New => Fix Released

** Changed in: cloud-archive/train
   Importance: Undecided => Medium

** Changed in: horizon (Ubuntu Eoan)
   Importance: Undecided => Medium

** Changed in: horizon (Ubuntu Disco)
   Importance: Undecided => Medium

** Changed in: horizon (Ubuntu Bionic)
   Importance: Undecided => Medium

** Changed in: horizon (Ubuntu)
   Importance: Undecided => Medium

** Changed in: horizon (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1840465

Title:
  [SRU] Fails to list security groups if one or more exists without
  rules

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Bionic:
  In Progress
Status in horizon source package in Disco:
  In Progress
Status in horizon source package in Eoan:
  Fix Released

Bug description:
  Horizon 14.0.2 (rocky)
  If a security group without any rules exists the listing of security groups 
fails with a KeyError.

  Traceback (most recent call last):
    File 
"/usr/share/openstack-dashboard/openstack_dashboard/api/rest/utils.py", line 
127, in _wrapped
  data = function(self, request, *args, **kw)
    File 
"/usr/share/openstack-dashboard/openstack_dashboard/api/rest/network.py", line 
44, in get
  security_groups = api.neutron.security_group_list(request)
    File "/usr/lib/python2.7/site-packages/horizon/utils/memoized.py", line 95, 
in wrapped
  value = cache[key] = func(*args, **kwargs)
    File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", 
line 1641, in security_group_list
  return SecurityGroupManager(request).list(**params)
    File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", 
line 372, in list
  return self._list(**params)
    File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", 
line 359, in _list
  return [SecurityGroup(sg) for sg in secgroups.get('security_groups')]
    File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", 
line 240, in __init__
  for rule in sg['security_group_rules']]
  KeyError: 'security_group_rules'

  ===

  [Impact]

  By default, new security groups created through horizon or CLI include
  2 default security rules. Upon managing those rules and removing them
  (to perhaps add others or limit traffic completely), the security
  group page errors out and prevents listing of *all* security groups if
  the empty security group is within the list to be displayed.
  Therefore, not only is the empty security group affected, but all
  others as well, as they cannot be listed. The root cause of the bug is
  that the payload does not include the expected key
  "security_group_rules" for that security group when there are no
  rules.

  A fix has been implemented for the Train (from master), Stein, Rocky
  and Queens releases and should be backported so the issue is addressed
  on those earlier releases. The fix prevents the crash by ensuring the
  key "security_group_rules" is present with an empty list in case it
  was not included in the payload.
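The approach can be sketched as follows (a simplified illustration, not the
exact horizon patch; the helper name is made up):

```python
def normalize_security_group(sg: dict) -> dict:
    """Ensure the rules key exists so iteration never raises KeyError."""
    sg.setdefault('security_group_rules', [])
    return sg

groups = [
    {'name': 'default', 'security_group_rules': [{'direction': 'egress'}]},
    {'name': 'empty'},  # rules removed: the payload shape that crashed
]
normalized = [normalize_security_group(sg) for sg in groups]
print([len(sg['security_group_rules']) for sg in normalized])  # [1, 0]
```

With the key guaranteed to exist, the listing iterates over an empty list
for rule-less groups instead of raising KeyError and blanking the whole
security group page.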

  [Test Case]

  1. Reproducing the issue

  1a. Go to the Security Group section at Project > Network > Security Groups
  1b. Create a security group
  1c. Click the Manage Rules button for that security group you just created
  1d. Delete the two default rules
  1e. Go back to the Security Group section at Project > Network > Security 
Groups
  1f. Security groups are no longer being listed and there will be an error 
popup: "Error: Unable 

[Yahoo-eng-team] [Bug 1849192] Re: [SRU] stein stable releases

2019-10-21 Thread Corey Bryant
** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** No longer affects: nova

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/stein
   Status: New => Triaged

** Changed in: cloud-archive/stein
   Importance: Undecided => High

** Changed in: cloud-archive
   Status: New => Invalid

** Changed in: cinder (Ubuntu Disco)
   Status: New => Triaged

** Changed in: cinder (Ubuntu Disco)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Disco)
   Status: New => Triaged

** Changed in: nova (Ubuntu Disco)
   Importance: Undecided => High

** Changed in: nova (Ubuntu)
   Status: New => Invalid

** Changed in: cinder (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1849192

Title:
  [SRU] stein stable releases

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in cinder package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in cinder source package in Disco:
  Triaged
Status in nova source package in Disco:
  Triaged

Bug description:
  [Impact]
  This release sports mostly bug-fixes and we would like to make sure all of 
our supported customers have access to these improvements. The update contains 
the following package updates:

  cinder 14.0.2
  nova 19.0.3

  [Test Case]
  The following SRU process was followed:
  https://wiki.ubuntu.com/OpenStackUpdates

  In order to avoid regression of existing consumers, the OpenStack team
  will run their continuous integration test against the packages that
  are in -proposed. A successful run of all available tests will be
  required before the proposed packages can be let into -updates.

  The OpenStack team will be in charge of attaching the output summary
  of the executed tests. The OpenStack team members will not mark
  ‘verification-done’ until this has happened.

  [Regression Potential]
  In order to mitigate the regression potential, the results of the
  aforementioned tests are attached to this bug.

  [Discussion]
  n/a

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1849192/+subscriptions



[Yahoo-eng-team] [Bug 1849165] Re: _populate_assigned_resources raises TypeError: argument of type 'NoneType' is not iterable

2019-10-21 Thread Matt Riedemann
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22if%20mig.dest_compute%20%3D%3D%20self.host%20and%20'new_resources'%20in%20mig_ctx%3A%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Summary changed:

- _populate_assigned_resources raises TypeError: argument of type 'NoneType' is 
not iterable
+ _populate_assigned_resources raises "TypeError: argument of type 'NoneType' 
is not iterable" during active migration

** Changed in: nova/train
   Importance: Undecided => High

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova/train
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1849165

Title:
  _populate_assigned_resources raises "TypeError: argument of type
  'NoneType' is not iterable" during active migration

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) train series:
  Confirmed

Bug description:
  Seen here:

  
https://zuul.opendev.org/t/openstack/build/2b10b4a240b84245bcee3366db93951d/log/logs/screen-n-cpu.txt.gz?severity=4#2675

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager [None req-
  dd5ddbad-4234-4288-bbab-2c3d20b7f4ad None None] Error updating
  resources for node ubuntu-bionic-rax-iad-0012404623.: TypeError:
  argument of type 'NoneType' is not iterable

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager Traceback (most recent call
  last):

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/manager.py", line 8925, in
  _update_available_resource_for_node

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager startup=startup)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 883, in
  update_available_resource

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager
  self._update_available_resource(context, resources, startup=startup)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/lockutils.py", line 328, in inner

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager return f(*args,
  **kwargs)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 965, in
  _update_available_resource

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager
  self._populate_assigned_resources(context, instance_by_uuid)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 482, in
  _populate_assigned_resources

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager if mig.dest_compute ==
  self.host and 'new_resources' in mig_ctx:

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager TypeError: argument of type
  'NoneType' is not iterable

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager

  This was added late in Train:

  https://review.opendev.org/#/c/678452/
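The failing check and the obvious guard can be sketched as follows, with a
plain dict standing in for the migration's migration_context (in nova it is
a versioned object, but the None case fails the same way):

```python
def has_new_resources_buggy(mig_ctx):
    # Fails with "TypeError: argument of type 'NoneType' is not iterable"
    # when the migration has no context attached.
    return 'new_resources' in mig_ctx

def has_new_resources(mig_ctx):
    # Guarded form: treat a missing context as "no new resources".
    return mig_ctx is not None and 'new_resources' in mig_ctx

print(has_new_resources({'new_resources': []}))  # True
print(has_new_resources({}))                     # False
print(has_new_resources(None))                   # False, instead of TypeError
```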

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1849165/+subscriptions



[Yahoo-eng-team] [Bug 1849165] [NEW] _populate_assigned_resources raises "TypeError: argument of type 'NoneType' is not iterable" during active migration

2019-10-21 Thread Matt Riedemann
Public bug reported:

Seen here:

https://zuul.opendev.org/t/openstack/build/2b10b4a240b84245bcee3366db93951d/log/logs/screen-n-cpu.txt.gz?severity=4#2675

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager [None req-dd5ddbad-4234-4288
-bbab-2c3d20b7f4ad None None] Error updating resources for node ubuntu-
bionic-rax-iad-0012404623.: TypeError: argument of type 'NoneType' is
not iterable

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager Traceback (most recent call
last):

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager   File
"/opt/stack/new/nova/nova/compute/manager.py", line 8925, in
_update_available_resource_for_node

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager startup=startup)

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager   File
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 883, in
update_available_resource

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager
self._update_available_resource(context, resources, startup=startup)

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py",
line 328, in inner

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager return f(*args, **kwargs)

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager   File
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 965, in
_update_available_resource

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager
self._populate_assigned_resources(context, instance_by_uuid)

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager   File
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 482, in
_populate_assigned_resources

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager if mig.dest_compute ==
self.host and 'new_resources' in mig_ctx:

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager TypeError: argument of type
'NoneType' is not iterable

Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager

This was added late in Train:

https://review.opendev.org/#/c/678452/

** Affects: nova
 Importance: High
 Status: Confirmed

** Affects: nova/train
 Importance: High
 Status: Confirmed


** Tags: resource-tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1849165

Title:
  _populate_assigned_resources raises "TypeError: argument of type
  'NoneType' is not iterable" during active migration

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) train series:
  Confirmed

Bug description:
  Seen here:

  
https://zuul.opendev.org/t/openstack/build/2b10b4a240b84245bcee3366db93951d/log/logs/screen-n-cpu.txt.gz?severity=4#2675

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager [None req-
  dd5ddbad-4234-4288-bbab-2c3d20b7f4ad None None] Error updating
  resources for node ubuntu-bionic-rax-iad-0012404623.: TypeError:
  argument of type 'NoneType' is not iterable

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager Traceback (most recent call
  last):

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/manager.py", line 8925, in
  _update_available_resource_for_node

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager startup=startup)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 883, in
  update_available_resource

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager
  self._update_available_resource(context, resources, startup=startup)

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR nova.compute.manager   File
  "/usr/local/lib/python2.7/dist-
  packages/oslo_concurrency/lockutils.py", line 328, in inner

  Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
  compute[26938]: ERROR 

[Yahoo-eng-team] [Bug 1849164] [NEW] nova policy doc does not include the PUT and Rebuild servers APIs for 'show:host_status' and 'os-extended-server-attributes'

2019-10-21 Thread Ghanshyam Mann
Public bug reported:

In microversion 2.75, host_status and extended-server-attributes were
added in PUT /servers/{server-id} and POST /servers/action {rebuild }
API response with respective policy enforcement[1].

But the PUT and rebuild APIs were not mentioned in the policy doc for
'os_compute_api:servers:show:host_status' and
'os_compute_api:os-extended-server-attributes':

- https://docs.openstack.org/nova/latest/configuration/policy.html

[1]
https://github.com/openstack/nova/blob/964d7dc87989b5765fcc60d34f734963ab8e03e7/nova/api/openstack/compute/servers.py#L854

https://github.com/openstack/nova/blob/964d7dc87989b5765fcc60d34f734963ab8e03e7/nova/api/openstack/compute/servers.py#L1161

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1849164

Title:
  nova policy doc does not include the PUT and Rebuild servers APIs for
  'show:host_status' and 'os-extended-server-attributes'

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In microversion 2.75, host_status and extended-server-attributes were
  added in PUT /servers/{server-id} and POST /servers/action {rebuild }
  API response with respective policy enforcement[1].

  But the PUT and rebuild APIs were not mentioned in the policy doc for
  'os_compute_api:servers:show:host_status' and
  'os_compute_api:os-extended-server-attributes':

  - https://docs.openstack.org/nova/latest/configuration/policy.html

  [1]
  
https://github.com/openstack/nova/blob/964d7dc87989b5765fcc60d34f734963ab8e03e7/nova/api/openstack/compute/servers.py#L854

  
https://github.com/openstack/nova/blob/964d7dc87989b5765fcc60d34f734963ab8e03e7/nova/api/openstack/compute/servers.py#L1161

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1849164/+subscriptions



[Yahoo-eng-team] [Bug 1849154] [NEW] Live Migrations complete but occasionally fail to update the Openstack Database

2019-10-21 Thread Ryan Farrell
Public bug reported:

[Description]
Occasionally, when evacuating VMs off of nova compute hosts for host
reboots, a VM's migration will be reported as complete in the migration
list, but queries to the OpenStack API, such as 'openstack server show
<uuid>', will report the host & hypervisor-hostname unchanged. The only
indication that something is wrong is that power_state will be NOSTATE.
We can see that the instance is in fact migrated and running on the new
host with 'sudo virsh list --all | grep $instance_name'.

In order to resolve this issue we perform a direct database edit such
as:

'update instances
set host="$newhost", node="$newhost.domain", progress="0"
where uuid="" and deleted="0";' 

* In one instance, the 'progress' value was stuck at 99 and I needed to
set that to 0 in the database as well.

[Expected]
It's expected that the live migration completes and that the instance record
in the openstack database correctly reflects the name of the new host and its
power state.

[Impact]
Instances that are found to be in power state NOSTATE are blocked from 
performing certain actions; instances in this state do not self recover.


[Environment]
Openstack Queens; Nova 17.0.10
libvirtd/virsh: 4.0.0
ceph: 12.2.8
neutron-openvswitch: 12.0.5

[Logs]
In this particular set of logs (sosreports from the live migration source and 
destination hosts); the instance that was in error had uuid 
67f328d0-cb5e-416a-9af4-c6e47e68a1e0.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1849154

Title:
  Live Migrations complete but occasionally fail to update the Openstack
  Database

Status in OpenStack Compute (nova):
  New

Bug description:
  [Description]
  Occasionally, when evacuating VMs off of nova compute hosts for host
  reboots, a VM's migration will be reported as complete in the migration
  list, but queries to the OpenStack API, such as 'openstack server show
  <uuid>', will report the host & hypervisor-hostname unchanged. The only
  indication that something is wrong is that power_state will be NOSTATE.
  We can see that the instance is in fact migrated and running on the new
  host with 'sudo virsh list --all | grep $instance_name'.

  In order to resolve this issue we perform a direct database edit such
  as:

  'update instances
  set host="$newhost", node="$newhost.domain", progress="0"
  where uuid="" and deleted="0";' 

  * In one instance, the 'progress' value was stuck at 99 and I needed
  to set that to 0 in the database as well.

  [Expected]
  It's expected that the live migration completes and that the instance
  record in the openstack database correctly reflects the name of the new
  host and its power state.

  [Impact]
  Instances that are found to be in power state NOSTATE are blocked from 
performing certain actions; instances in this state do not self recover.

  
  [Environment]
  Openstack Queens; Nova 17.0.10
  libvirtd/virsh: 4.0.0
  ceph: 12.2.8
  neutron-openvswitch: 12.0.5

  [Logs]
  In this particular set of logs (sosreports from the live migration source and 
destination hosts); the instance that was in error had uuid 
67f328d0-cb5e-416a-9af4-c6e47e68a1e0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1849154/+subscriptions



[Yahoo-eng-team] [Bug 1848514] Re: Booting from volume providing an image fails

2019-10-21 Thread Matt Riedemann
Hmm, did something change in Stein on the Cinder side to enforce the
update_volume_admin_metadata policy rule on the os-attach API? I'm not
aware of anything that has changed on the nova side in stein that would
be related to this.

** Also affects: cinder
   Importance: Undecided
   Status: New

** Tags added: policy volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1848514

Title:
  Booting from volume providing an image fails

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Trying to create an instance (booting from volume when specifying an image) 
fails.
  Running Stein (19.0.1)

  ###
  When using:
  ###
  nova boot --flavor FLAVOR_ID --block-device 
source=image,id=IMAGE_ID,dest=volume,size=10,shutdown=preserve,bootindex=0 
INSTANCE_NAME

  ###
  nova-compute logs:
  ###

  Instance failed block device setup Forbidden: Policy doesn't allow
  volume:update_volume_admin_metadata to be performed. (HTTP 403)
  (Request-ID: req-875cc6e1-ffe1-45dd-b942-944166c6040a)

  The full trace:
  http://paste.openstack.org/raw/784535/

  
  Definitely this is a policy issue!
  Our cinder policy: "volume:update_volume_admin_metadata": "rule:admin_api" 
(default)
  Using a user with admin credentials works as expected!

  Is this expected? We didn't identify this behaviour previously
  (before Stein) using the same policy for
  "update_volume_admin_metadata"
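
  The failure can be reduced to a minimal sketch of the policy check
  (illustrative only: the Forbidden class, RULES dict, and enforce() helper
  below are simplified stand-ins, not cinder's actual oslo.policy wiring):

  ```python
  class Forbidden(Exception):
      """Stand-in for the HTTP 403 the cinder API returns."""

  # With "volume:update_volume_admin_metadata": "rule:admin_api", the check
  # effectively reduces to "does the request context carry the admin role?".
  RULES = {
      'volume:update_volume_admin_metadata':
          lambda ctx: 'admin' in ctx.get('roles', ()),
  }

  def enforce(action, context):
      # Raise when the caller's context does not satisfy the rule,
      # mirroring the "Policy doesn't allow ... to be performed" error.
      if not RULES[action](context):
          raise Forbidden(
              "Policy doesn't allow %s to be performed." % action)
  ```

  An admin context passes this check; the non-admin user context used during
  nova's boot-from-volume attach path is rejected, surfacing as the HTTP 403
  in the trace above.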

  Found an old similar report:
  https://bugs.launchpad.net/nova/+bug/1661189

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1848514/+subscriptions



[Yahoo-eng-team] [Bug 1849098] [NEW] ovs agent is stuck with OVSFWTagNotFound when dealing with unbound port

2019-10-21 Thread Oleg Bondarev
Public bug reported:

neutron-openvswitch-agent meets unbound port:

2019-10-17 11:32:21.868 135 WARNING
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Device
ef34215f-e099-4fd0-935f-c9a42951d166 not defined on plugin or binding
failed

Later when applying firewall rules:

2019-10-17 11:32:21.901 135 INFO neutron.agent.securitygroups_rpc 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Preparing filters for 
devices {'ef34215f-e099-4fd0-935f-c9a42951d166', 
'e9c97cf0-1a5e-4d77-b57b-0ba474d12e29', 'fff1bb24-6423-4486-87c4-1fe17c552cca', 
'2e20f9ee-bcb5-445c-b31f-d70d276d45c9', '03a60047-cb07-42a4-8b49-619d5982a9bd', 
'a452cea2-deaf-4411-bbae-ce83870cbad4', '79b03e5c-9be0-4808-9784-cb4878c3dbd5', 
'9b971e75-3c1b-463d-88cf-3f298105fa6e'}
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-aae68b42-a99f-4bb3-bcf6-a6d3c4ca9e31 - - - - -] Error while processing VIF 
ports: neutron.agent.linux.openvswitch_firewall.exceptions.OVSFWTagNotFound: 
Cannot get tag for port o-hm0 from its other_config: {}
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 530, in get_or_create_ofport
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent of_port = 
self.sg_port_map.ports[port_id]
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 
'ef34215f-e099-4fd0-935f-c9a42951d166'
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 81, in get_tag_from_other_config
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
int(other_config['tag'])
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'tag'
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent During handling 
of the above exception, another exception occurred:
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2280, in rpc_loop
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
provisioning_needed)
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/osprofiler/profiler.py", line 
160, in wrapper
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args, **kwargs)
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1847, in process_network_ports
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/securitygroups_rpc.py",
 line 258, in setup_port_filters
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
2019-10-17 11:32:21.906 135 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/securitygroups_rpc.py",
 line 
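
The helper that raises here (get_tag_from_other_config in
neutron/agent/linux/openvswitch_firewall/firewall.py) can be reconstructed from
the traceback above; this is a simplified sketch, not the exact neutron source:

```python
class OVSFWTagNotFound(Exception):
    """Stand-in for neutron's openvswitch_firewall exception."""

def get_tag_from_other_config(other_config, port_name):
    # Simplified reconstruction from the traceback: an unbound port has an
    # empty other_config dict, so other_config['tag'] raises KeyError, which
    # is re-raised as OVSFWTagNotFound. Because that exception escapes
    # process_network_ports(), a single unbound port aborts the whole
    # rpc_loop iteration instead of just being skipped.
    try:
        return int(other_config['tag'])
    except (KeyError, TypeError, ValueError):
        raise OVSFWTagNotFound(
            'Cannot get tag for port %s from its other_config: %s'
            % (port_name, other_config))
```

A bound port with other_config {'tag': '4'} returns 4; the unbound port o-hm0
in the log, whose other_config is {}, raises instead.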

[Yahoo-eng-team] [Bug 1849076] [NEW] Keystone Installation Tutorial in keystone

2019-10-21 Thread sajjadjafarib...@gmail.com
Public bug reported:

To install keystone in the Rocky release we need to run the command below:
"apt install keystone apache2 libapache2-mod-wsgi"

but in the Stein version the online document only installs keystone:
"apt install keystone"

The correct command is:
"apt install keystone apache2 libapache2-mod-wsgi-py3"


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [x] I have a fix to the document that I can paste below, including example
input and output.


If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2017-06-15 12:37:40
SHA: 6da24eb1bea8c43bd6388257bd0d7ced7e3c96bf
Source: https://opendev.org/openstack/keystone/src/doc/source/install/index.rst
URL: https://docs.openstack.org/keystone/stein/install/

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1849076

Title:
  Keystone Installation Tutorial in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  To install keystone in the Rocky release we need to run the command below:
  "apt install keystone apache2 libapache2-mod-wsgi"

  but in the Stein version the online document only installs keystone:
  "apt install keystone"

  The correct command is:
  "apt install keystone apache2 libapache2-mod-wsgi-py3"


  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below, including example
  input and output.


  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2017-06-15 12:37:40
  SHA: 6da24eb1bea8c43bd6388257bd0d7ced7e3c96bf
  Source: 
https://opendev.org/openstack/keystone/src/doc/source/install/index.rst
  URL: https://docs.openstack.org/keystone/stein/install/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1849076/+subscriptions
