[Yahoo-eng-team] [Bug 1714417] [NEW] Error "Unable to retrieve the Absolute Limits" appeared when create a volume from an image.

2017-08-31 Thread Debo Zhang
Public bug reported:

When I open the modal form for creating a volume from an image, an error message
appears: "Unable to retrieve the Absolute Limits".
I also see an error in the browser console: "SyntaxError: Unexpected token I in JSON at
position 76".

** Affects: horizon
 Importance: Undecided
 Assignee: Debo Zhang (laun-zhangdebo)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Debo Zhang (laun-zhangdebo)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1714417

Title:
  Error "Unable to retrieve the Absolute Limits" appeared when create a
  volume from an image.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When I open the modal form for creating a volume from an image, an error message
  appears: "Unable to retrieve the Absolute Limits".
  I also see an error in the browser console: "SyntaxError: Unexpected token I in JSON
  at position 76".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1714417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714416] [NEW] Incorrect response returned for invalid Accept header

2017-08-31 Thread Niraj Singh
Public bug reported:

Currently, when a user passes an 'Accept' header other than JSON or XML in a
request (for example via curl), the API returns a 200 OK response with
JSON-formatted data.

The api-ref guide [1] also does not clearly state what response should be
returned when an invalid 'Accept' header value is specified. IMO, instead of
'HTTP 200 OK' it should return an 'HTTP 406 Not Acceptable' response.

Steps to reproduce:
 
Request:
curl -g -i -X GET \
  http://controller/volume/v2/c72e66cc4f1341f381e0c2eb7b28b443/volumes/detail \
  -H "User-Agent: python-cinderclient" \
  -H "Accept: application/abc" \
  -H "X-Auth-Token: cd85aff745ce4dc0a04f686b52cf7e4f"
 
 
Response:
HTTP/1.1 200 OK
Date: Thu, 31 Aug 2017 07:12:18 GMT
Server: Apache/2.4.18 (Ubuntu)
x-compute-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
Content-Type: application/json
Content-Length: 2681
x-openstack-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
Connection: close
 
[1] 
https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details
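
A minimal WSGI-style sketch of the proposed behaviour (my own illustration, not
Cinder's actual middleware): reject a request whose Accept header offers neither
JSON nor XML with 406 instead of silently answering in JSON.

    SUPPORTED = ('application/json', 'application/xml')

    def volumes_detail_app(environ, start_response):
        """Toy WSGI app: return 406 for unsupported Accept values."""
        accept = environ.get('HTTP_ACCEPT', '*/*')
        offered = [part.split(';')[0].strip() for part in accept.split(',')]
        if '*/*' not in offered and not any(m in SUPPORTED for m in offered):
            start_response('406 Not Acceptable',
                           [('Content-Type', 'application/json')])
            return [b'{"error": "Acceptable media types: json, xml"}']
        # Otherwise fall through to the normal JSON response.
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [b'{"volumes": []}']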

** Affects: cinder
 Importance: Undecided
 Assignee: Niraj Singh (nirajsingh)
 Status: New

** Affects: glance
 Importance: Undecided
 Assignee: Niraj Singh (nirajsingh)
 Status: New

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: masakari
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Changed in: cinder
 Assignee: (unassigned) => Niraj Singh (nirajsingh)

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Niraj Singh (nirajsingh)

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: masakari
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1714416

Title:
  Incorrect response returned for invalid Accept header

Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Heat:
  New
Status in OpenStack Identity (keystone):
  New
Status in masakari:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, when a user passes an 'Accept' header other than JSON or XML in a
  request (for example via curl), the API returns a 200 OK response with
  JSON-formatted data.

  The api-ref guide [1] also does not clearly state what response should be
  returned when an invalid 'Accept' header value is specified. IMO, instead of
  'HTTP 200 OK' it should return an 'HTTP 406 Not Acceptable' response.

  Steps to reproduce:
   
  Request:
  curl -g -i -X GET \
    http://controller/volume/v2/c72e66cc4f1341f381e0c2eb7b28b443/volumes/detail \
    -H "User-Agent: python-cinderclient" \
    -H "Accept: application/abc" \
    -H "X-Auth-Token: cd85aff745ce4dc0a04f686b52cf7e4f"
   
   
  Response:
  HTTP/1.1 200 OK
  Date: Thu, 31 Aug 2017 07:12:18 GMT
  Server: Apache/2.4.18 (Ubuntu)
  x-compute-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Content-Type: application/json
  Content-Length: 2681
  x-openstack-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Connection: close
   
  [1] 
https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1714416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714402] [NEW] When setting an allocation with multiple resource providers and one of them does not exist the error message can be wrong

2017-08-31 Thread Chris Dent
Public bug reported:


nova master as of 20170831

The _set_allocations method used to write allocations to the placement
API will raise a 400 when a resource class results in a NotFound
exception. We want that 400. The problem is that the message associated
with the error uses the resource provider uuid from whichever resource
provider was last in the loop, not the one that caused the error.
See:

https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/allocation.py#L231-L234

and the loop prior.
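
A simplified, self-contained illustration of the pattern (not the actual
placement code): the except block sits outside the loop, so the message names
whichever provider the loop variable last pointed at.

    class NotFound(Exception):
        """Stand-in for the 'resource class not found' exception."""

    def create_all(allocation_objects, known_classes):
        for provider, resource_class in allocation_objects:
            if resource_class not in known_classes:
                raise NotFound(resource_class)

    def set_allocations(allocations, known_classes):
        allocation_objects = []
        for alloc in allocations:
            resource_provider = alloc['provider']   # reused after the loop ends
            for rc in alloc['resources']:
                allocation_objects.append((resource_provider, rc))
        try:
            create_all(allocation_objects, known_classes)
        except NotFound as exc:
            # Bug pattern: 'resource_provider' still holds the provider from the
            # last loop iteration, not the one whose resource class failed.
            raise ValueError("Unable to allocate inventory for resource "
                             "provider %s: %s" % (resource_provider, exc))

    # A bad class on the first provider gets blamed on the second one:
    allocs = [{'provider': 'rp-1', 'resources': ['BAD_CLASS']},
              {'provider': 'rp-2', 'resources': ['VCPU']}]
    try:
        set_allocations(allocs, known_classes={'VCPU', 'MEMORY_MB'})
    except ValueError as err:
        print(err)   # ... resource provider rp-2 ... (wrong provider reported)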

This is not a huge deal because it's unlikely that people are inspecting
error responses all that much, but it would be nice to fix.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714402

Title:
  When setting an allocation with multiple resource providers and one of
  them does not exist the error message can be wrong

Status in OpenStack Compute (nova):
  New

Bug description:
  
  nova master as of 20170831

  The _set_allocations method used to write allocations to the placement
  API will raise a 400 when a resource class results in a NotFound
  exception. We want that 400. The problem is that the message
  associated with the error uses the resource provider uuid from
  whichever resource provider was last in the loop, not the one
  that caused the error. See:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/allocation.py#L231-L234

  and the loop prior.

  This is not a huge deal because it's unlikely that people are
  inspecting error responses all that much, but it would be nice to fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714389] [NEW] Pecan is missing some quota dirtying and resync calls

2017-08-31 Thread Kevin Benton
Public bug reported:

The legacy API controller would resync and dirty the quotas at particular
points in request handling that pecan is missing. In particular, it would
resync on GET operations [1] and mark quotas dirty on deletes [2]. The pecan
hook is missing both of these cases; a rough sketch of the missing hook logic
follows the references below.


1. 
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L338-L339
2. 
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L589-L591
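
A rough sketch of where that bookkeeping could live in a pecan hook. The quota
helpers here (resync_quota_usage, mark_resources_dirty) are hypothetical
placeholders for whatever neutron's quota registry actually exposes; only the
PecanHook plumbing is real pecan API.

    from pecan import hooks

    def resync_quota_usage(context, resource):
        """Placeholder for the legacy resync-on-GET behaviour [1]."""

    def mark_resources_dirty(context, resource):
        """Placeholder for the legacy dirty-on-delete behaviour [2]."""

    class QuotaBookkeepingHook(hooks.PecanHook):
        def after(self, state):
            # 'resource_type' and 'context' are assumed to be set on the
            # request by earlier hooks; the attribute names are illustrative.
            resource = getattr(state.request, 'resource_type', None)
            context = getattr(state.request, 'context', None)
            if resource is None:
                return
            if state.request.method == 'GET':
                resync_quota_usage(context, resource)
            elif state.request.method == 'DELETE':
                mark_resources_dirty(context, resource)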

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714389

Title:
  Pecan is missing some quota dirtying and resync calls

Status in neutron:
  In Progress

Bug description:
  The legacy API controller would resync and dirty the quotas at particular
  points in request handling that pecan is missing. In particular, it would
  resync on GET operations [1] and mark quotas dirty on deletes [2]. The
  pecan hook is missing both of these cases.

  
  1. 
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L338-L339
  2. 
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L589-L591

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714388] [NEW] Pecan is missing the logic to hide authorization failures as 404s

2017-08-31 Thread Kevin Benton
Public bug reported:

The pecan code is missing the logic to translate some of the
authorization failures into 404s instead of 403s.

https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L575-L585

https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L389-L393
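
My understanding of why the legacy controller does this: returning 404 instead
of 403 avoids leaking whether a resource exists to tenants who are not allowed
to see it. A generic sketch of the idea, with made-up exception names rather
than neutron's real ones:

    class PolicyNotAuthorized(Exception):
        """Made-up stand-in for the policy engine's authorization error."""

    class NotFoundError(Exception):
        """Made-up stand-in for the exception that maps to a 404."""

    def get_resource(context, fetch, authorize, resource_id):
        resource = fetch(resource_id)
        try:
            authorize(context, resource)
        except PolicyNotAuthorized:
            # Report 404 rather than 403 so callers can't probe for the
            # existence of resources they aren't allowed to see.
            raise NotFoundError(resource_id)
        return resource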

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714388

Title:
  Pecan is missing the logic to hide authorization failures as 404s

Status in neutron:
  In Progress

Bug description:
  The pecan code is missing the logic to translate some of the
  authorization failures into 404s instead of 403s.

  
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L575-L585

  
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L389-L393

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714386] [NEW] Pecan changed delete notification payload

2017-08-31 Thread Kevin Benton
Public bug reported:

The delete notifications under the old API controller used to contain
both the ID and the original copy of the resource being deleted. Pecan
broke that by only including the ID.
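
A tiny sketch of the expected behaviour (illustrative names only): fetch the
resource before deleting it and include that copy in the notification payload
alongside the ID, as the legacy controller did.

    def delete_resource(resource_id, plugin, notifier):
        original = plugin.get(resource_id)    # capture the copy before deletion
        plugin.delete(resource_id)
        notifier.notify('resource.delete.end', {
            'id': resource_id,
            'resource': original,             # legacy payload included this too
        })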

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714386

Title:
  Pecan changed delete notification payload

Status in neutron:
  New

Bug description:
  The delete notifications under the old API controller used to contain
  both the ID and the original copy of the resource being deleted. Pecan
  broke that by only including the ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714381] [NEW] pecan is missing some plugin sanity validation for sorting+pagination

2017-08-31 Thread Kevin Benton
Public bug reported:

The legacy controller validated that native pagination was only enabled
when the plugin actually supported native sorting. Pecan needs to do the
same thing for parity.
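
The check itself is small; something along these lines (an illustration, not
the legacy controller's exact code):

    def validate_native_pagination(plugin_supports_sorting, native_pagination):
        """Native pagination only makes sense on top of native sorting."""
        if native_pagination and not plugin_supports_sorting:
            raise RuntimeError("Native pagination can only be enabled when the "
                               "plugin also supports native sorting.")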

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714381

Title:
  pecan is missing some plugin sanity validation for sorting+pagination

Status in neutron:
  In Progress

Bug description:
  The legacy controller was validating that enabled, native pagination
  was only set when the plugin actually supported native sorting. Pecan
  needs to do this same thing for parity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714384] [NEW] pecan is missing the de-dup and empty field stripping logic of the legacy controller

2017-08-31 Thread Kevin Benton
Public bug reported:

When a user passes in duplicate fields and empty fields, the old API
controller would strip these out before passing them to the plugin.

Pecan should do the same thing to preserve parity with the old
controller in case plugins are sensitive to these invalid filters.
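
A small sketch of the stripping the legacy controller did (my own illustration):
drop empty entries and de-duplicate while preserving order before handing the
field list to the plugin.

    def clean_fields(fields):
        """Remove empty and duplicate field names, keeping the original order."""
        seen = set()
        cleaned = []
        for field in fields:
            if not field or field in seen:
                continue
            seen.add(field)
            cleaned.append(field)
        return cleaned

    print(clean_fields(['id', '', 'name', 'id']))   # ['id', 'name']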

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714384

Title:
  pecan is missing the de-dup and empty field stripping logic of the
  legacy controller

Status in neutron:
  New

Bug description:
  When a user passes in duplicate fields and empty fields, the old API
  controller would strip these out before passing them to the plugin.

  Pecan should do the same thing to preserve parity with the old
  controller in case plugins are sensitive to these invalid filters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714378] [NEW] Pecan is missing logic to add project_id to fields when tenant_id is specified

2017-08-31 Thread Kevin Benton
Public bug reported:

Pecan is missing this logic from the old controller code, which adds
'tenant_id' to the filters required by the policy engine when the
'project_id' field is specified:
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L96

This is necessary when tenants request that only the tenant_id field is
returned and we have a new class of resource that has a project_id field
only.
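
The report and the bug title describe the two attributes in opposite
directions, so here is a symmetric sketch of the intent (an assumption on my
part): whichever of the two is requested, pull in the other so the policy
engine has the attribute it needs.

    def add_policy_fields(requested_fields):
        """Ensure tenant_id and project_id travel together for policy checks."""
        fields = list(requested_fields)
        if 'project_id' in fields and 'tenant_id' not in fields:
            fields.append('tenant_id')
        if 'tenant_id' in fields and 'project_id' not in fields:
            fields.append('project_id')
        return fields

    print(add_policy_fields(['tenant_id']))   # ['tenant_id', 'project_id']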

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714378

Title:
  Pecan is missing logic to add project_id to fields when tenant_id is
  specified

Status in neutron:
  New

Bug description:
  Pecan is missing this logic from the old controller code, which adds
  'tenant_id' to the filters required by the policy engine when the
  'project_id' field is specified:
  
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L96

  This is necessary when tenants request that only the tenant_id field
  is returned and we have a new class of resource that has a project_id
  field only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714378/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714376] [NEW] unittests: OpenStack DS escape or timeout

2017-08-31 Thread Joshua Powers
Public bug reported:

Similar to LP: #1714117, there appears to be a pair of tests that are
escaping and timing out, resulting in extra time that is not required
for the unit tests:

tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_datasource
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_bad_datasource_meta

Here are the raw results and methodology:
Python 2 results: https://paste.ubuntu.com/25441590/
Python 3 results: https://paste.ubuntu.com/25441592/

$ git clone https://git.launchpad.net/cloud-init
$ cd cloud-init
# pip[3] install --user -r requirements.txt -r test-requirements.txt nose-timer
$ python[3] -m nose --with-timer --timer-ok 1 --timer-warning 1 --timer-top-n 10 tests/unittests

** Affects: cloud-init
 Importance: Undecided
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1714376

Title:
  unittests: OpenStack DS escape or timeout

Status in cloud-init:
  Confirmed

Bug description:
  Similar to LP: #1714117, there appears to be a pair of tests that are
  escaping and timing out, resulting in extra time that is not required
  for the unit tests:

  
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_datasource
  
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_bad_datasource_meta

  Here are the raw results and methodology:
  Python 2 results: https://paste.ubuntu.com/25441590/
  Python 3 results: https://paste.ubuntu.com/25441592/

  $ git clone https://git.launchpad.net/cloud-init
  $ cd cloud-init
  # pip[3] install --user -r requirements.txt -r test-requirements.txt nose-timer
  $ python[3] -m nose --with-timer --timer-ok 1 --timer-warning 1 --timer-top-n 10 tests/unittests

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1714376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714363] [NEW] docs: versioned notification samples are always shown now and you can't hide them

2017-08-31 Thread Matt Riedemann
Public bug reported:

The versioned notification samples in the docs used to be collapsible
and would be hidden by default, but with the new docs theme it looks
like that isn't working, and the show/hide button doesn't do anything
either:

https://docs.openstack.org/nova/latest/reference/notifications.html
#existing-versioned-notifications

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: docs notifications

** Tags removed: notif
** Tags added: notifications

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714363

Title:
  docs: versioned notification samples are always shown now and you
  can't hide them

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The versioned notification samples in the docs used to be collapsible
  and would be hidden by default, but with the new docs theme it looks
  like that isn't working, and the show/hide button doesn't do anything
  either:

  https://docs.openstack.org/nova/latest/reference/notifications.html
  #existing-versioned-notifications

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703234] Re: API compare-and-swap updates based on revision_number

2017-08-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499754
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8d2c1bd88b14eefbea74c72f384cb9952e7ee62e
Submitter: Jenkins
Branch:master

commit 8d2c1bd88b14eefbea74c72f384cb9952e7ee62e
Author: Boden R 
Date:   Thu Aug 31 12:38:31 2017 -0600

complete docs for revision number

Today the revision_number exists in documentation CLI output in only
some places. This patch updates the doc CLI output to include the
revision_number in the remaining places.

Change-Id: I805752c4dbaa7cf7fd12d2c281abb855ae19
Closes-Bug: #1703234


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703234

Title:
  API compare-and-swap updates based on revision_number

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/409577
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 7f17b4759e41aa0a922ac27c2dabc595e7d2f67c
  Author: Kevin Benton 
  Date:   Sun Dec 11 18:24:01 2016 -0800

  API compare-and-swap updates based on revision_number
  
  Allows posting revision number matching in the If-Match header
  so updates/deletes will only be satisfied if the current revision
  number of the object matches.
  
  DocImpact: The Neutron API now supports conditional updates to resources
 that contain the standard 'revision_number' attribute by
 setting the revision_number in an HTTP If-Match header.
  APIImpact
  
  Partial-Bug: #1493714
  Partially-Implements: blueprint push-notifications
  Change-Id: I7d97d6044378eb59cb2c7bdc788dc6c174783299

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714358] [NEW] ds-identify does not find CloudStack datasource

2017-08-31 Thread Swen Brueseke
Public bug reported:

We are using CloudStack with XenServer as the hypervisor and we are getting
this:

Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-81-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support:https://ubuntu.com/advantage
**
# A new feature in cloud-init identified possible datasources for#
# this system as:#
#   []   #
# However, the datasource used was: CloudStack   #
##
# In the future, cloud-init will only attempt to use datasources that#
# are identified or specifically configured. #
# For more information see   #
#   https://bugs.launchpad.net/bugs/1669675  #
##
# If you are seeing this message, please file a bug against  #
# cloud-init at  #
#https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid  #
# Make sure to include the cloud provider your instance is   #
# running on.#
##
# After you have filed a bug, you can disable this warning by launching  #
# your instance with the cloud-config below, or putting that content #
# into /etc/cloud/cloud.cfg.d/99-warnings.cfg#
##
# #cloud-config  #
# warnings:  #
#   dsid_missing_source: off #
**

Disable the warnings above by:
  touch /home/ubuntu/.cloud-warnings.skip
or
  touch /var/lib/cloud/instance/warnings/.skip


This is our config in /etc/cloud/cloud.cfg.d/99_cloudstack.cfg:
datasource:
  CloudStack: {}
  None: {}
datasource_list: [ CloudStack ]

this is the output of /run/cloud-init/ds-identify.log:
[up 3.77s] ds-identify
policy loaded: mode=report report=false found=all maybe=all notfound=enabled
/etc/cloud/cloud.cfg.d/99_cloudstack.cfg set datasource_list: [ CloudStack ]
DMI_PRODUCT_NAME=HVM domU
DMI_SYS_VENDOR=Xen
DMI_PRODUCT_SERIAL=75c58df9-e2b6-8139-c697-7d93c287a1e7
DMI_PRODUCT_UUID=75C58DF9-E2B6-8139-C697-7D93C287A1E7
PID_1_PRODUCT_NAME=unavailable
DMI_CHASSIS_ASSET_TAG=
FS_LABELS=
KERNEL_CMDLINE=BOOT_IMAGE=/boot/vmlinuz-4.4.0-81-generic 
root=UUID=3f377544-33e0-4408-b498-72fca4233a00 ro vga=0x318 
console=ttyS0,115200n8 console=hvc0 consoleblank=0 elevator=deadline 
biosdevname=0 net.ifnames=0
VIRT=xen
UNAME_KERNEL_NAME=Linux
UNAME_KERNEL_RELEASE=4.4.0-81-generic
UNAME_KERNEL_VERSION=#104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017
UNAME_MACHINE=x86_64
UNAME_NODENAME=swen-test-ubuntu1604
UNAME_OPERATING_SYSTEM=GNU/Linux
DSNAME=
DSLIST=CloudStack
MODE=report
ON_FOUND=all
ON_MAYBE=all
ON_NOTFOUND=enabled
pid=197 ppid=188
is_container=false
single entry in datasource_list (CloudStack) use that.
[up 3.83s] returning 0

this is the output of /run/cloud-init/cloud.cfg:
di_report:
  datasource_list: [ CloudStack, None ]

cloud-init version is: 0.7.9-153-g16a7302f-0ubuntu1~16.04.2

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1714358

Title:
  ds-identify does not find CloudStack datasource

Status in cloud-init:
  New

Bug description:
  We are using CloudStack with XenServer as the hypervisor and we are getting
  this:

  Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-81-generic x86_64)

   * Documentation:  https://help.ubuntu.com
   * Management: https://landscape.canonical.com
   * Support:https://ubuntu.com/advantage
  **
  # A new feature in cloud-init identified possible datasources for#
  # this system as:#
  #   []   #
  # However, the datasource used was: CloudStack   #
  ##
  # In the future, cloud-init will only attempt to use datasources that#
  # are identified or specifically configured. #
  # For more 

[Yahoo-eng-team] [Bug 1576840] Re: fullstack OVS agent in native openflow mode sometimes fails to bind socket

2017-08-31 Thread Ihar Hrachyshka
It's long fixed.

** Changed in: neutron
 Assignee: sudhakar kumar srivastava (sudhakar.srivastava) => (unassigned)

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576840

Title:
  fullstack OVS agent in native openflow mode sometimes fails to bind
  socket

Status in neutron:
  Fix Released

Bug description:
  Plenty of hits in the last few days, currently the top issue affecting
  fullstack stability.

  Example paste:
  http://paste.openstack.org/show/495797/

  Example logs:
  
http://logs.openstack.org/18/276018/21/check/gate-neutron-dsvm-fullstack/c0761dc/logs/TestOvsConnectivitySameNetwork.test_connectivity_VLANs,openflow-native_ovsdb-native_/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602567] Re: fullstack doesn't work with branch-2.5 ovs

2017-08-31 Thread Ihar Hrachyshka
We currently compile v2.6.1 from source for fullstack. I consider it
fixed.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602567

Title:
  fullstack doesn't work with branch-2.5 ovs

Status in neutron:
  Fix Released

Bug description:
  In order to test OVSFirewall, we need a newer version of ovs than the one
  that comes with Trusty.
  But the combination fails on VXLAN connectivity tests.

  See fullstack log of https://review.openstack.org/#/c/341328/1 for a
  example.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714355] [NEW] Pecan missing emulated bulk create

2017-08-31 Thread Kevin Benton
Public bug reported:

Pecan is missing the emulated bulk create logic for core plugins that
don't support the bulk methods.
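
Roughly the idea of the emulation (a sketch, not the legacy controller's exact
code): create the items one at a time and undo the partial work if any step
fails, so callers still get all-or-nothing behaviour.

    def emulated_bulk_create(plugin, context, resource, items):
        """Create items one by one, rolling back on failure."""
        create = getattr(plugin, 'create_%s' % resource)   # e.g. create_port
        delete = getattr(plugin, 'delete_%s' % resource)   # e.g. delete_port
        created = []
        try:
            for item in items:
                created.append(create(context, {resource: item}))
        except Exception:
            # Emulate atomicity: clean up whatever was created before the failure.
            for obj in created:
                delete(context, obj['id'])
            raise
        return created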

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714355

Title:
  Pecan missing emulated bulk create

Status in neutron:
  New

Bug description:
  Pecan is missing the emulated bulk create logic for core plugins that
  don't support the bulk methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627106] Re: TimeoutException while executing tests adding bridge using OVSDB native

2017-08-31 Thread Ihar Hrachyshka
Added ovsdbapp to the list of affected projects because I believe this
error comes from the library.

** Changed in: neutron
Milestone: pike-2 => None

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

** Changed in: ovsdbapp
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627106

Title:
  TimeoutException while executing tests adding bridge using OVSDB
  native

Status in neutron:
  Confirmed
Status in ovsdbapp:
  Confirmed

Bug description:
  http://logs.openstack.org/91/366291/12/check/gate-neutron-dsvm-
  functional-ubuntu-trusty/a23c816/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 62, in 
test_post_commit_vswitchd_completed_no_failures
  self._add_br_and_test()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 56, in 
_add_br_and_test
  self._add_br()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 52, in 
_add_br
  tr.add(ovsdb.add_br(self.brname))
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 72, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands 
[AddBridgeCommand(name=test-br6925d8e2, datapath_type=None, may_exist=True)] 
exceeded timeout 10 seconds

  
  I suspect this one may hit us because we finally made the timeout work with
  Icd745514adc14730b9179fa7a6dd5c115f5e87a5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673531] Re: fullstack test_controller_timeout_does_not_break_connectivity_sigkill(GRE and l2pop, openflow-native_ovsdb-cli) failure

2017-08-31 Thread Ihar Hrachyshka
No hits in 7 days. I claim it's fixed. If not, reopen.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673531

Title:
  fullstack
  test_controller_timeout_does_not_break_connectivity_sigkill(GRE and
  l2pop,openflow-native_ovsdb-cli) failure

Status in neutron:
  Fix Released

Bug description:
  Logs for failure: http://logs.openstack.org/98/446598/1/check/gate-
  neutron-dsvm-fullstack-ubuntu-xenial/2e0f93e/logs/dsvm-fullstack-
  logs/TestOvsConnectivitySameNetworkOnOvsBridgeControllerStop
  .test_controller_timeout_does_not_break_connectivity_sigkill_GRE-and-
  l2pop,openflow-native_ovsdb-cli_/

  logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=messge%3A%5C%22in%20test_controller_timeout_does_not_break_connectivity_sigkill%5C%22%20AND%20tags%3Aconsole%20AND%20build_name
  %3Agate-neutron-dsvm-fullstack-ubuntu-xenial

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1673531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714348] [NEW] pecan is missing some body validation logic from legacy API

2017-08-31 Thread Kevin Benton
Public bug reported:

The legacy controller validated the following things that the pecan API
does not; a condensed sketch of these checks follows the list.

* delete requests have no body

* the body must contain a resource or resources when POSTing to the
general collection controller

* All POSTed json must be a dict


These gaps were discovered when switching the unit tests to use pecan instead 
of the old legacy API.
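
A condensed sketch of those three checks (illustrative only, with a generic
exception in place of the real webob ones):

    class BadRequest(Exception):
        """Stand-in for the 400 error the API should return."""

    def validate_body(method, collection, body):
        if method == 'DELETE':
            if body:
                raise BadRequest("DELETE requests must not have a body.")
            return None
        if not isinstance(body, dict):
            raise BadRequest("The request body must be a JSON object.")
        if method == 'POST' and not (collection in body or
                                     collection[:-1] in body):
            # e.g. a POST to /v2.0/networks must contain 'networks' or 'network'
            raise BadRequest("Unable to find '%s' in request body." % collection)
        return body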

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714348

Title:
  pecan is missing some body validation logic from legacy API

Status in neutron:
  New

Bug description:
  The legacy controller validated the following things that the pecan
  API does not.

  * delete requests have no body

  * the body must contain a resource or resources when POSTing to the
  general collection controller

  * All POSTed json must be a dict

  
  These gaps were discovered when switching the unit tests to use pecan instead 
of the old legacy API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1678475] Re: Apply QoS policy on network:router_gateway

2017-08-31 Thread Boden R
Based on the latest contents of the config qos guide [1], this has
already been fixed.


[1] https://github.com/openstack/neutron/blob/master/doc/source/admin
/config-qos.rst

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1678475

Title:
  Apply QoS policy on network:router_gateway

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/425218
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2d1ee7add7c08ebbf8de7f9a0dc2aeb5344a4052
  Author: Maxime Guyot 
  Date:   Wed Mar 8 15:14:32 2017 +0100

  Apply QoS policy on network:router_gateway
  
  All router ports (internal and external) used to be excluded from QoS
  policies applied on network. This patch excludes only internal router
  ports from network QoS policies.
  This allows cloud administrators to set an egress QoS policy to a
  public/external network and have the QoS policy applied on all external
  router ports (DVR or not). To the tenant this is also egress traffic so
  no confusion compared to QoS policies applied to VM ports.
  
  DocImpact
  
  Update networking-guide/config-qos, User workflow section:
  - Replace "Network owned ports" with "Internal network owned ports"
  
  Change-Id: I2428c2466f41a022196576f4b14526752543da7a
  Closes-Bug: #1659265
  Related-Bug: #1486039

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1678475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714235] Re: evacuate API does not restrict one from trying to evacuate to the source host

2017-08-31 Thread Matt Riedemann
The evacuate issue was fixed with change
Ic468cd57688b370a18cacfc6e0844a8201eb9ab3 but it's still a problem for
os-migrateLive.

** Changed in: nova
   Status: Invalid => Confirmed

** Summary changed:

- evacuate API does not restrict one from trying to evacuate to the source host
+ os-migrateLive API does not restrict one from trying to migrate to the 
original host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714235

Title:
  os-migrateLive API does not restrict one from trying to migrate to the
  original host

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This is purely based on code inspection, but the compute API method
  'evacuate' does not check whether the specified host (if one was given) is
  different from instance.host. It checks whether the service is up on that
  host; the service could be down, and you can still specify instance.host.

  Eventually the compute API will RPC cast to the conductor task manager,
  which will fail with an RPC error when trying to RPC cast to the
  ComputeManager.rebuild_instance method on the compute service, which
  is down.

  The bug here is that instead of getting an obvious 400 error from the
  API, you're left without much detail when it fails. There should
  be an instance action and finish event, but only the admin can see the
  traceback in the event. Also, the instance.task_state would be left in
  the 'rebuilding' state and would need to be reset before the instance
  can be used again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1682228] Re: can't cross address scopes with DVR

2017-08-31 Thread Brian Haley
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1682228

Title:
  can't cross address scopes with DVR

Status in neutron:
  Fix Released

Bug description:
  From the devref in https://review.openstack.org/#/c/289794/ there is a
  limitation with address scopes and DVR. Quote:

  Due to the asymmetric route in DVR and the fact that DVR local routers do not
  know the information of the floating IPs that don't reside in the local host,
  there is a limitation in the DVR multiple hosts scenario.  With DVR in
  multiple hosts and the destination of traffic which is an internal fixed IP in
  a different host, the fixed IP with floating IP associated can't cross scope
  to access the internal networks that are in the same address scope of external
  network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1682228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669191] Re: Deprecate gateway_external_network_id option

2017-08-31 Thread Boden R
Today I don't see gateway_external_network_id documented other than a
blank use of it in sample CLI output [1]. That said I don't see any
reason to leave this open as I'm not sure what needs to be documented.

When the deprecated option is removed, that should be marked with a doc
impact tag and we can remove [1] at that point.

[1]
http://codesearch.openstack.org/?q=gateway_external_network_id=nope=doc%2Fsource%2F.*=

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669191

Title:
  Deprecate gateway_external_network_id option

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/438669
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 391ac43bf3862d67cee3ea0f628bd7958e585c7f
  Author: Ihar Hrachyshka 
  Date:   Thu Feb 23 10:21:12 2017 +

  Deprecate gateway_external_network_id option
  
  This option is used only when external_network_bridge is set to
  non-empty value, and that other option is already marked for removal.
  
  DocImpact The gateway_external_network_id option is deprecated and will
be removed in next releases.
  
  Change-Id: Ie6ea9b8977a0e06d69d735532082e9e094c26534
  Related-Bug: #1511578

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683102] Re: Port data plane status extension implementation

2017-08-31 Thread Boden R
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683102

Title:
  Port data plane status extension implementation

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/424340
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 89de63de05e296af583032cb17a3d76b4b4d6a40
  Author: Carlos Goncalves 
  Date:   Mon Jan 23 19:53:04 2017 +

  Port data plane status extension implementation
  
  Implements the port data plane status extension. Third parties
  can report via Neutron API issues in the underlying data plane
  affecting connectivity from/to Neutron ports.
  
  Supported statuses:
- None: no status being reported; default value
- ACTIVE: all is up and running
- DOWN: no traffic can flow from/to the Neutron port
  
  Setting attribute available to admin or any user with specific role
  (default role: data_plane_integrator).
  
  ML2 extension driver loaded on request via configuration:
  
[ml2]
extension_drivers = data_plane_status
  
  Related-Bug: #1598081
  Related-Bug: #1575146
  
  DocImpact: users can get status of the underlying port data plane;
  attribute writable by admin users and users granted the
  'data-plane-integrator' role.
  APIImpact: port now has data_plane_status attr, set on port update
  
  Implements: blueprint port-data-plane-status
  
  Depends-On: I04eef902b3310f799b1ce7ea44ed7cf77c74da04
  Change-Id: Ic9e1e3ed9e3d4b88a4292114f4cb4192ac4b3502

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714235] Re: evacuate API does not restrict one from trying to evacuate to the source host

2017-08-31 Thread Matt Riedemann
The REST API handler code checks this; I just missed it since I was
expecting to find it in the nova.compute.api.API.evacuate method:

https://github.com/openstack/nova/blob/2a4ca8bd6aa40ccd26300feaef4267aa71f69abf/nova/api/openstack/compute/evacuate.py#L114

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714235

Title:
  evacuate API does not restrict one from trying to evacuate to the
  source host

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is purely based on code inspection, but the compute API method
  'evacuate' does not check whether the specified host (if one was given) is
  different from instance.host. It checks whether the service is up on that
  host; the service could be down, and you can still specify instance.host.

  Eventually the compute API will RPC cast to the conductor task manager,
  which will fail with an RPC error when trying to RPC cast to the
  ComputeManager.rebuild_instance method on the compute service, which
  is down.

  The bug here is that instead of getting an obvious 400 error from the
  API, you're left without much detail when it fails. There should
  be an instance action and finish event, but only the admin can see the
  traceback in the event. Also, the instance.task_state would be left in
  the 'rebuilding' state and would need to be reset before the instance
  can be used again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714285] Re: Hyper-V: leaked resources after failed spawn

2017-08-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499684
Committed: 
https://git.openstack.org/cgit/openstack/compute-hyperv/commit/?id=6e715ed580bf0f8ba5ff4c8e79b9ddba45d787c6
Submitter: Jenkins
Branch:master

commit 6e715ed580bf0f8ba5ff4c8e79b9ddba45d787c6
Author: Lucian Petrut 
Date:   Thu Aug 31 18:13:49 2017 +0300

Perform proper cleanup after failed instance spawns

This change ensures that vif ports as well as volume connections
are properly removed after an instance fails to spawn.

In order to avoid having similar issues in the future, the
'block_device_info' and 'network_info' arguments become mandatory
for the VMOps.destroy method.

Side note: for convenience reasons, one redundant unit test has
been squashed.

Closes-Bug: #1714285
Change-Id: Ifa701459b15b5a2046528fa45eca7ab382f1f7e8


** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714285

Title:
  Hyper-V: leaked resources after failed spawn

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Volume connections as well as vif ports are not cleaned up after a
  failed instance spawn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1714285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649616] Re: Keystone Token Flush job does not complete in HA deployed environment

2017-08-31 Thread Corey Bryant
** Changed in: keystone (Ubuntu)
   Status: In Progress => Invalid

** Changed in: cloud-archive
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title:
  Keystone Token Flush job does not complete in HA deployed environment

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) newton series:
  In Progress
Status in OpenStack Identity (keystone) ocata series:
  In Progress
Status in puppet-keystone:
  Triaged
Status in tripleo:
  Fix Released
Status in keystone package in Ubuntu:
  Invalid
Status in keystone source package in Xenial:
  Fix Released
Status in keystone source package in Yakkety:
  Fix Released
Status in keystone source package in Zesty:
  Fix Released

Bug description:
  [Impact]

   * The Keystone token flush job can get into a state where it will
  never complete because the transaction size exceeds the mysql galera
  transaction size - wsrep_max_ws_size (1073741824).

  [Test Case]

  1. Authenticate many times
  2. Observe that keystone token flush job runs (should be a very long time 
depending on disk) >20 hours in my environment
  3. Observe errors in mysql.log indicating a transaction that is too large

  Actual results:
  Expired tokens are not actually flushed from the database, and no errors appear
  in keystone.log. Errors only appear in mysql.log.

  Expected results:
  Expired tokens to be removed from the database

  [Additional info:]

  It is likely that you can demonstrate this with fewer than 1 million
  tokens, as the >1 million token table is larger than 13GiB and the max
  transaction size is 1GiB; my token bench-marking Browbeat job creates
  more than needed.

  Once the token flush job can not complete the token table will never
  decrease in size and eventually the cloud will run out of disk space.

  Furthermore, the flush job will consume considerable disk resources.
  This was demonstrated on slow disks (a single 7.2K SATA disk). On
  faster disks you will have more capacity to generate tokens, so you
  can generate enough tokens to exceed the transaction size even
  faster.
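
  One way to keep each galera write-set small is to flush in bounded batches
  instead of a single giant DELETE. This is only a sketch of that idea against
  a plain DB-API cursor, not keystone's actual token_flush implementation:

      BATCH_SIZE = 10000   # keep each transaction far below wsrep_max_ws_size

      def flush_expired_tokens(connection):
          """Delete expired tokens in small transactions until none remain."""
          cursor = connection.cursor()
          total = 0
          while True:
              cursor.execute(
                  "DELETE FROM token WHERE expires < UTC_TIMESTAMP() LIMIT %s",
                  (BATCH_SIZE,))
              connection.commit()          # one small write-set per batch
              total += cursor.rowcount
              if cursor.rowcount < BATCH_SIZE:
                  return total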

  Log evidence:
  [root@overcloud-controller-0 log]# grep " Total expired" 
/var/log/keystone/keystone.log
  2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1082434
  2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1084241
  2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1086504
  2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1087823
  2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1089202
  [root@overcloud-controller-0 log]# tail mysqld.log
  161208  1:33:41 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161208  1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161209  9:31:26 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161209  9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161211  1:35:39 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161211  1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161212  1:08:16 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161212  1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161213  1:22:18 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161213  1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2

  A graph of the disk utilization issue is attached. The entire job in that
  graph runs from the first spike in disk util (~5:18 UTC) and culminates
  in about ~90 minutes of pegging the disk (between 1:09 UTC and 2:43 UTC).

  [Regression Potential] 
  * Not identified

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1649616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713783] Re: After failed evacuation the recovered source compute tries to delete the instance

2017-08-31 Thread Jeremy Stanley
Agreeing with Tristan et al, and adding the "security" bug tag to
indicate it's a hardening opportunity (C1).

** Tags added: security

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713783

Title:
  After failed evacuation the recovered source compute tries to delete
  the instance

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Description
  ===
  In case of a failed evacuation attempt, the status of the migration is
  'accepted' instead of 'failed', so when the source compute is recovered the
  compute manager tries to delete the instance from the source host. However, a
  secondary fault prevents deleting the allocation in placement, so the actual
  deletion of the instance fails as well.

  Steps to reproduce
  ==
  The following functional test reproduces the bug: 
https://review.openstack.org/#/c/498482/
  What it does: initiate evacuation when no valid host is available and 
evacuation fails, but nova manager still tries to delete the instance.
  Logs:

  2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception 
during message handling
  NoValidHost: No valid host was found. There are not enough hosts 
available.
  2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running 
periodic for compute1 (host1)
  2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" 
status: 200 len: 152 microversion: 1.0
  2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final 
resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB 
used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]
  2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running 
periodic for compute2 (host2)
  2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" 
status: 200 len: 54 microversion: 1.0
  2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final 
resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB 
used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]
  2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,198 INFO [nova.tests.functional.test_servers] 
Finished with periodics
  2017-08-29 19:11:16,255 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"GET 
/v2.1/6f70656e737461636b20342065766572/servers/5058200c-478e-4449-88c1-906fdd572662"
 status: 200 len: 1875 microversion: 2.53 time: 0.056198
  2017-08-29 19:11:16,262 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"GET /v2.1/6f70656e737461636b20342065766572/os-migrations" status: 200 len: 373 
microversion: 2.53 time: 0.004618
  2017-08-29 19:11:16,280 INFO 

[Yahoo-eng-team] [Bug 1694337] Re: Port information (binding:host_id) not updated for network:router_gateway after qRouter failover

2017-08-31 Thread Corey Bryant
** Changed in: cloud-archive
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694337

Title:
  Port information (binding:host_id) not updated for
  network:router_gateway after qRouter failover

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Won't Fix
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Impact]

  When using L3 HA and a router fails over to another agent, the port
  with device_owner network:router_gateway does not get its
  binding:host_id property updated to reflect where keepalived moved the
  router.

  [Steps to reproduce]

  0) Deploy a cloud with l3ha enabled
    - If familiar with juju, it's possible to use this bundle 
http://paste.ubuntu.com/24707730/ , but the deployment tool is not relevant

  1) Once it's deployed, configure it and create a router (see 
https://docs.openstack.org/mitaka/networking-guide/deploy-lb-ha-vrrp.html )
    - This is the script used during the troubleshooting
  -8<--
  #!/bin/bash -x

  source novarc  # admin

  neutron net-create ext-net --router:external True
  --provider:physical_network physnet1 --provider:network_type flat

  neutron subnet-create ext-net 10.5.0.0/16 --name ext-subnet
  --allocation-pool start=10.5.254.100,end=10.5.254.199 --disable-dhcp
  --gateway 10.5.0.1 --dns-nameserver 10.5.0.3

  keystone tenant-create --name demo 2>/dev/null
  keystone user-role-add --user admin --tenant demo --role Admin 2>/dev/null

  export TENANT_ID_DEMO=$(keystone tenant-list | grep demo | awk -F'|'
  '{print $2}' | tr -d ' ' 2>/dev/null )

  neutron net-create demo-net --tenant-id ${TENANT_ID_DEMO}
  --provider:network_type vxlan

  env OS_TENANT_NAME=demo neutron subnet-create demo-net 192.168.1.0/24 --name 
demo-subnet --gateway 192.168.1.1
  env OS_TENANT_NAME=demo neutron router-create demo-router
  env OS_TENANT_NAME=demo neutron router-interface-add demo-router demo-subnet
  env OS_TENANT_NAME=demo neutron router-gateway-set demo-router ext-net

  # verification
  neutron net-list
  neutron l3-agent-list-hosting-router demo-router
  neutron router-port-list demo-router
  - 8< ---

  2) Kill the associated master keepalived process for the router
  ps aux | grep keepalived | grep $ROUTER_ID
  kill $PID

  3) Wait until "neutron l3-agent-list-hosting-router demo-router" shows the 
other host as active
  4) Check the binding:host_id property for the interfaces of the router
  for ID in `neutron port-list --device-id $ROUTER_ID | tail -n +4 | head 
-n -1| awk -F' ' '{print $2}' `; do neutron port-show $ID ; done

  Expected results:

  The interface where the device_owner is network:router_gateway has its
  property binding:host_id set to where the keepalived process is master

  Actual result:

  The binding:host_id is never updated; it stays set to the value
  obtained during the creation of the port.

  [Regression Potential]
  - This patch changes the UPDATE query to the port bindings in the database, a 
possible regression will express as failures in the query or binding:host_id 
property outdated.

  [Other Info]

  The patches for zesty and yakkety are a direct backport from
  stable/ocata and stable/newton respectively. The patch for xenial is
  NOT merged in stable/xenial because it's already EOL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1694337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708444] Re: Angular role table stays stale after editing a role

2017-08-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/490457
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a58aa26450837bfb84ea3cfbc015a83d8541befd
Submitter: Jenkins
Branch:master

commit a58aa26450837bfb84ea3cfbc015a83d8541befd
Author: Bence Romsics 
Date:   Wed Aug 2 11:16:34 2017 +0200

Refresh role table after editing role

By using the track-by feature of hz-resource-table.

Closes-Bug: #1708444
Change-Id: I782aa4671f5f1bc23a1aa8535b86751ffe712c0b


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708444

Title:
  Angular role table stays stale after editing a role

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the angularized role panel if I edit a role (eg. change its name)
  the actual update happens in Keystone, but the role table is not
  refreshed and shows the old state until I reload the page.

  devstack b79531a
  horizon 53dd2db

  ANGULAR_FEATURES={
  'roles_panel': True,
  ...
  }

  A proposed fix is on the way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668410] Re: [SRU] Infinite loop trying to delete deleted HA router

2017-08-31 Thread Corey Bryant
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: neutron (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu)
   Status: Triaged => Invalid

** Changed in: cloud-archive
   Status: New => Invalid

** Description changed:

- [Impact]
+ [Descriptoin]
  
  When deleting a router the logfile is filled up. See full log -
  http://paste.ubuntu.com/25429257/
  
  I can see the error 'Error while deleting router
  c0dab368-5ac8-4996-88c9-f5d345a774a6' occured 3343386 times from
  _safe_router_removed() [1]:
  
  $ grep -r 'Error while deleting router c0dab368-5ac8-4996-88c9-f5d345a774a6' 
|wc -l
  3343386
  
  This _safe_router_removed() is invoked by L488 [2], if
  _safe_router_removed() goes wrong it will return False, then
  self._resync_router(update) [3] will make the code _safe_router_removed
  be run again and again. So we saw so many errors 'Error while deleting
  router X'.
  
  [1] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L361
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488
  [3] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L457
  
  [Test Case]
  
  That's because race condition between neutron server and L3 agent, after
  neutron server deletes HA interfaces the L3 agent may sync a HA router
  without HA interface info (just need to trigger L708[1] after deleting
  HA interfaces and before deleting HA router). If we delete HA router at
  this time, this problem will happen. So test case we design is as below:
  
  1, First update fixed package, and restart neutron-server by 'sudo
  service neutron-server restart'
  
  2, Create ha_router
  
  neutron router-create harouter --ha=True
  
  3, Delete ports associated with ha_router before deleting ha_router
  
  neutron router-port-list harouter |grep 'HA port' |awk '{print $2}' |xargs -l 
neutron port-delete
  neutron router-port-list harouter
  
  4, Update ha_router to trigger l3-agent to update ha_router info without
  ha_port into self.router_info
  
  neutron router-update harouter --description=test
  
  5, Delete ha_router this time
  
  neutron router-delete harouter
  
  [1] https://github.com/openstack/neutron/blob/mitaka-
  eol/neutron/db/l3_hamode_db.py#L708
  
  [Regression Potential]
  
  The fixed patch [1] for neutron-server will no longer return ha_router
  which is missing ha_ports, so L488 will no longer have chance to call
  _safe_router_removed() for a ha_router, so the problem has been
  fundamentally fixed by this patch and no regression potential.
  
  Besides, this fixed patch has been in mitaka-eol branch now, and
  neutron-server mitaka package is based on neutron-8.4.0, so we need to
  backport it to xenial and mitaka.
  
  $ git tag --contains 8c77ee6b20dd38cc0246e854711cb91cffe3a069
  mitaka-eol
  
  [1] https://review.openstack.org/#/c/440799/2/neutron/db/l3_hamode_db.py
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668410

Title:
  [SRU] Infinite loop trying to delete deleted HA router

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in neutron:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Xenial:
  Triaged

Bug description:
  [Description]

  When deleting a router the logfile is filled up. See full log -
  http://paste.ubuntu.com/25429257/

  I can see the error 'Error while deleting router
  c0dab368-5ac8-4996-88c9-f5d345a774a6' occurred 3343386 times from
  _safe_router_removed() [1]:

  $ grep -r 'Error while deleting router c0dab368-5ac8-4996-88c9-f5d345a774a6' 
|wc -l
  3343386

  This _safe_router_removed() is invoked by L488 [2]; if
  _safe_router_removed() goes wrong it returns False, and then
  self._resync_router(update) [3] makes _safe_router_removed run
  again and again. That is why we saw so many 'Error while deleting
  router X' errors.

  [1] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L361
  [2] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L488
  [3] 
https://github.com/openstack/neutron/blob/mitaka-eol/neutron/agent/l3/agent.py#L457
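
  For illustration, the retry loop described above boils down to roughly
  the following (names and control flow are simplified assumptions based
  on this description, not the exact agent code):

```
# Hypothetical simplification: if removal keeps failing, the same router
# update is resynced and retried forever, flooding the log with
# 'Error while deleting router X'.
def process_router_update(agent, update):
    if update.router is None:  # router already deleted on the server side
        if not agent._safe_router_removed(update.id):
            # Removal raised (e.g. the HA router came back from the server
            # without its HA interface info), so it returned False.
            agent._resync_router(update)  # re-queues the same update
        return
```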

  [Test Case]

  That's because of a race condition between the neutron server and the
  L3 agent: after the neutron server deletes the HA interfaces, the L3 agent may 

[Yahoo-eng-team] [Bug 1714311] [NEW] Incorrect stylesheet link for the serial_console template

2017-08-31 Thread Pierre Riteau
Public bug reported:

The stylesheet referenced in
openstack_dashboard/templates/serial_console.html is incorrect, as the
CSS file has been changed to SCSS in
I0d421d931d252d821a7ecf19a750f73b8241c249:



Instead, the template needs to reference the SCSS file and compile it.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1714311

Title:
  Incorrect stylesheet link for the serial_console template

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The stylesheet referenced in
  openstack_dashboard/templates/serial_console.html is incorrect, as the
  CSS file has been changed to SCSS in
  I0d421d931d252d821a7ecf19a750f73b8241c249:

  

  Instead, the template needs to reference the SCSS file and compile it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1714311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581415] Re: No previous pagination tag in admin instances page

2017-08-31 Thread Ying Zuo
*** This bug is a duplicate of bug 1274427 ***
https://bugs.launchpad.net/bugs/1274427

** This bug is no longer a duplicate of bug 1514678
   There is no Previous Hyperlink in Horizon when creating 250 instances
** This bug has been marked a duplicate of bug 1274427
   Instance list pagination

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581415

Title:
  No previous pagination tag in admin instances page

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When I set the Items Per Page setting to 1 (or a number less than the page 
size) and then
  navigate to admin Instances, I can't find a "prev" pagination tag on the 
Instances page.
  This makes navigating through multiple instances tedious.

  Expected Behaviour: Instances page should have "prev" tag which allows users 
to view
  previous results

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540193] Re: Pagination is not working properly

2017-08-31 Thread Ying Zuo
*** This bug is a duplicate of bug 1274427 ***
https://bugs.launchpad.net/bugs/1274427

** This bug is no longer a duplicate of bug 1514678
   There is no Previous Hyperlink in Horizon when creating 250 instances
** This bug has been marked a duplicate of bug 1274427
   Instance list pagination

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540193

Title:
  Pagination is not working properly

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Steps to reproduce the bug:
    1. Set the `Items Per Page` parameter to 2 in Settings -> User settings.
    2. Create 3 instances.
    3. Click "next" to see the next page. Then I cannot see the "prev" link.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540193/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573431] Re: No previous button to view the list of items in the instance page

2017-08-31 Thread Ying Zuo
*** This bug is a duplicate of bug 1274427 ***
https://bugs.launchpad.net/bugs/1274427

** This bug is no longer a duplicate of bug 1514678
   There is no Previous Hyperlink in Horizon when creating 250 instances
** This bug has been marked a duplicate of bug 1274427
   Instance list pagination

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1573431

Title:
  No previous button to view the list of items in the instance page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Set items per page to 1 (easier to reproduce the behavior).
  2. Create 2 or more instances.
  3. Now in the instances page you will find an instance listed and a next 
button.
  4. If you click the next button you will find the next instance but no button 
to view the previous instance.

  Expected behavior:

  A "prev" button should be available to view the previous instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1573431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1616921] Re: Instances pagination does not have prev link

2017-08-31 Thread Ying Zuo
*** This bug is a duplicate of bug 1274427 ***
https://bugs.launchpad.net/bugs/1274427

** This bug is no longer a duplicate of bug 1514678
   There is no Previous Hyperlink in Horizon when creating 250 instances
** This bug has been marked a duplicate of bug 1274427
   Instance list pagination

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1616921

Title:
  Instances pagination does not have prev link

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Steps:
  - Login as admin
  - Launch 3 instances
  - In user settings set **items per page** as 1
  - Go to instances page
  - Click **next** link

  Expected result:
  - **prev** link to navigate to previous page is present

  Actual result:
  - **prev** link is absent

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1616921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714283] Re: Placement API reference: GET /traits query parameter starts_with should be startswith

2017-08-31 Thread Eric Fried
Meh, since I opened it, might as well use it.

** Changed in: nova
   Status: Invalid => New

** Changed in: nova
 Assignee: (unassigned) => Eric Fried (efried)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714283

Title:
  Placement API reference: GET /traits query parameter starts_with
  should be startswith

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In the Placement API reference, the GET /traits [1] query parameter
  'name' says it accepts a key called 'starts_with'.  The actual API
  accepts 'startswith' (no underscore).

  [1] https://developer.openstack.org/api-ref/placement/#list-traits

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714285] [NEW] Hyper-V: leaked resources after failed spawn

2017-08-31 Thread Lucian Petrut
Public bug reported:

Volume connections as well as vif ports are not cleaned up after a
failed instance spawn.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714285

Title:
  Hyper-V: leaked resources after failed spawn

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Volume connections as well as vif ports are not cleaned up after a
  failed instance spawn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1714285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714283] Re: Placement API reference: GET /traits query parameter starts_with should be startswith

2017-08-31 Thread Eric Fried
Sorry, opened the wrong bug.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714283

Title:
  Placement API reference: GET /traits query parameter starts_with
  should be startswith

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In the Placement API reference, the GET /traits [1] query parameter
  'name' says it accepts a key called 'starts_with'.  The actual API
  accepts 'startswith' (no underscore).

  [1] https://developer.openstack.org/api-ref/placement/#list-traits

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714283] [NEW] Placement API reference: GET /traits query parameter starts_with should be startswith

2017-08-31 Thread Eric Fried
Public bug reported:

In the Placement API reference, the GET /traits [1] query parameter
'name' says it accepts a key called 'starts_with'.  The actual API
accepts 'startswith' (no underscore).

[1] https://developer.openstack.org/api-ref/placement/#list-traits
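
For a quick check, a request with the working spelling looks roughly like
this (endpoint, token and trait prefix are placeholders for the sketch):

```
# Hypothetical request showing the accepted spelling of the query key.
import requests

resp = requests.get(
    "http://controller/placement/traits",
    params={"name": "startswith:HW_"},   # accepted by the API
    headers={"X-Auth-Token": "<token>",
             "OpenStack-API-Version": "placement 1.6"})
# "starts_with:HW_" (with the underscore, as the api-ref currently says)
# is not accepted.
```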

** Affects: nova
 Importance: Undecided
 Assignee: Eric Fried (efried)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714283

Title:
  Placement API reference: GET /traits query parameter starts_with
  should be startswith

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In the Placement API reference, the GET /traits [1] query parameter
  'name' says it accepts a key called 'starts_with'.  The actual API
  accepts 'startswith' (no underscore).

  [1] https://developer.openstack.org/api-ref/placement/#list-traits

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714284] [NEW] Placement user doc: add link to API reference

2017-08-31 Thread Eric Fried
Public bug reported:

The Placement API user doc [1] says:

  API Reference
  A full API reference is forthcoming, but until then ...

That reference has since been published [2].

[1] https://docs.openstack.org/nova/pike/user/placement.html#api-reference
[2] https://developer.openstack.org/api-ref/placement/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714284

Title:
  Placement user doc: add link to API reference

Status in OpenStack Compute (nova):
  New

Bug description:
  The Placement API user doc [1] says:

API Reference
A full API reference is forthcoming, but until then ...

  That reference has since been published [2].

  [1] https://docs.openstack.org/nova/pike/user/placement.html#api-reference
  [2] https://developer.openstack.org/api-ref/placement/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714275] [NEW] GET /resource_providers: "links" doesn't include "allocations"

2017-08-31 Thread Eric Fried
Public bug reported:

GET /resource_providers returns:
{
  "resource_providers": [
{
  "generation": 39, 
  "uuid": "213fd7f8-1e9f-466b-87bf-0902b0b3bc13", 
  "links": [
{
  "href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13", 
  "rel": "self"
}, 
{
  "href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/inventories",
 
  "rel": "inventories"
}, 
{
  "href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/usages", 
  "rel": "usages"
}, 
{
  "href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/aggregates",
 
  "rel": "aggregates"
}, 
{
  "href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/traits", 
  "rel": "traits"
}
  ], 
  "name": "p8-100-neo"
}
  ]
}

The link for "/resource_providers/213fd7f8-1e9f-466b-87bf-
0902b0b3bc13/allocations" is missing.

For reference: https://review.openstack.org/#/c/366789/ added the
/resource_providers//allocations target; and
https://review.openstack.org/#/c/468923/ did the per-microversion
splitup of which links were reported.  They were dropped in that order,
by the same author (cdent), so maybe there's a reason for this...

Placement microversion 1.10

Devstack on PowerVM

Nova master branch at commit 4579d2e5573ae1bbabb51ee46ef26598d9410b15
(Aug 11)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714275

Title:
  GET /resource_providers: "links" doesn't include "allocations"

Status in OpenStack Compute (nova):
  New

Bug description:
  GET /resource_providers returns:
  {
"resource_providers": [
  {
"generation": 39, 
"uuid": "213fd7f8-1e9f-466b-87bf-0902b0b3bc13", 
"links": [
  {
"href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13", 
"rel": "self"
  }, 
  {
"href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/inventories",
 
"rel": "inventories"
  }, 
  {
"href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/usages", 
"rel": "usages"
  }, 
  {
"href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/aggregates",
 
"rel": "aggregates"
  }, 
  {
"href": 
"/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13/traits", 
"rel": "traits"
  }
], 
"name": "p8-100-neo"
  }
]
  }

  The link for "/resource_providers/213fd7f8-1e9f-466b-87bf-
  0902b0b3bc13/allocations" is missing.

  For reference: https://review.openstack.org/#/c/366789/ added the
  /resource_providers//allocations target; and
  https://review.openstack.org/#/c/468923/ did the per-microversion
  splitup of which links were reported.  They were dropped in that
  order, by the same author (cdent), so maybe there's a reason for
  this...

  Placement microversion 1.10

  Devstack on PowerVM

  Nova master branch at commit 4579d2e5573ae1bbabb51ee46ef26598d9410b15
  (Aug 11)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707819] Re: Allowed address pairs allows update with invalid cidr

2017-08-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Dongcan Ye (hellochosen) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707819

Title:
  Allowed address pairs allows update with invalid cidr

Status in neutron:
  Invalid

Bug description:
  Subnet info:
  $ neutron subnet-show 68a42a05-2024-44b3-9086-e97704452724
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {"start": "10.20.0.2", "end": "10.20.0.254"} |
  | cidr  | 10.20.0.0/24 |
  | created_at| 2017-04-21T07:08:39Z |
  | description   |  |
  | dns_nameservers   |  |
  | enable_dhcp   | False|
  | gateway_ip| 10.20.0.1|
  | host_routes   |  |
  | id| 68a42a05-2024-44b3-9086-e97704452724 |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | test_subnet  |
  | network_id| 9cd01eb4-906a-4c68-b705-0520bfe1b1e6 |
  | project_id| 6d0a93fb8cfc4c2f84e3936d95a17bad |
  | revision_number   | 2|
  | service_types |  |
  | subnetpool_id |  |
  | tags  |  |
  | tenant_id | 6d0a93fb8cfc4c2f84e3936d95a17bad |
  | updated_at| 2017-04-21T07:08:39Z |
  +---+--+

  $ neutron port-update 31250c3c-69ec-462c-8ec8-195beeeff3f2  
--allowed-address-pairs type=dict list=true ip_address=10.20.0.201/8
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Updated port: 31250c3c-69ec-462c-8ec8-195beeeff3f2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714195] Re: nova task shutdown or deleted instance when instance status in the database differs from hypervisor

2017-08-31 Thread Sean Dague
This is pretty much working as designed. Nova is the owner of that
state, and will drive services to that state.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714195

Title:
  nova task shutdown or deleted instance when instance status in the
  database differs from hypervisor

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  1. The hypervisor is vCenter or KVM.
  2. The instance status is "started" in vCenter or KVM.
  3. The instance status is "stopped" in the nova database.

     Shutting down or deleting is a high-risk operation, because the
  instance was started by vCenter and is running business workloads, yet
  the nova task shuts down or deletes the instance. So we should
  synchronize the vCenter status to the nova database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713162] Re: Kernel panic when configuring APIC timers in an instance

2017-08-31 Thread Sean Dague
Kernel panics in guests are unlikely to be Nova bugs. This is probably
an underlying kvm / libvirt issue.

** Changed in: nova
   Status: New => Incomplete

** Also affects: openstack-gate
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713162

Title:
  Kernel panic when configuring APIC timers in an instance

Status in OpenStack-Gate:
  New

Bug description:
  I am not sure Nova is the best candidate for this, but reporting
  nevertheless. Feel free to move to another project that is a better
  fit.

  This happened in gate in Queens.

  http://logs.openstack.org/74/495974/4/check/gate-tempest-dsvm-neutron-
  full-ubuntu-xenial/c61f703/logs/testr_results.html.gz

  In instance console log, we can see:

  [0.732045] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
  [0.736045] ..MP-BIOS bug: 8254 timer not connected to IO-APIC
  [0.736045] ...trying to set up timer (IRQ0) through the 8259A ...
  [0.736045] . (found apic 0 pin 2) ...
  [0.736045] ... failed.
  [0.736045] ...trying to set up timer as Virtual Wire IRQ...
  [0.744045] . failed.
  [0.744045] ...trying to set up timer as ExtINT IRQ...
  [0.752046] . failed :(.
  [0.752046] Kernel panic - not syncing: IO-APIC + timer doesn't work!  
Boot with apic=debug and send a report.  Then try booting with the 'noapic' 
option.
  [0.752046] 
  [0.752046] Pid: 1, comm: swapper/0 Not tainted 3.2.0-80-virtual 
#116-Ubuntu
  [0.752046] Call Trace:
  [0.752046]  [] panic+0x91/0x1a4
  [0.752046]  [] setup_IO_APIC+0x651/0x693
  [0.752046]  [] native_smp_prepare_cpus+0x1c4/0x207
  [0.752046]  [] kernel_init+0x8c/0x169
  [0.752046]  [] kernel_thread_helper+0x4/0x10
  [0.752046]  [] ? start_kernel+0x3c7/0x3c7
  [0.752046]  [] ? gs_change+0x13/0x13

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-gate/+bug/1713162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713466] Re: Initialize connection failed for volume

2017-08-31 Thread Sean Dague
For questions like this, please engage in IRC or the mailing list

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713466

Title:
   Initialize connection failed for volume

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I want to add a new driver to libvirt.

  This is my code at path /nova/virt/libvirt/volume/portworx.py

  ```

  
  """Libvirt volume driver for PX."""
  from oslo_log import log as logging

  import nova.conf

  from nova.virt.libvirt.volume import volume as libvirt_volume

  LOG = logging.getLogger(__name__)

  CONF = nova.conf.CONF

  
  class LibvirtPXVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
  """Class PX Libvirt volume Driver

  Implements Libvirt part of volume driver for PX cinder driver.
  Uses the PX connector from the os-brick projects
  """
  def __init__(self, host):
  super(LibvirtPXVolumeDriver, self).__init__(host,
  is_block_dev=True)

  def get_config(self, connection_info, disk_info):
  conf = super(LibvirtPXVolumeDriver, self).get_config(
  connection_info, disk_info)

  conf.source_type = 'block'
  conf.source_path = connection_info['data']['device_path']
  return conf

  def connect_volume(self, connection_info, disk_info):
  LOG.warning("connect_volume_step1")
  LOG.warning(connection_info)
  LOG.warning(disk_info)

  def disconnect_volume(self, connection_info, disk_dev):

  super(LibvirtPXVolumeDriver, self).disconnect_volume(
  connection_info, disk_dev)
  ```
  /nova/virt/libvirt/driver.py
  ```
  libvirt_volume_drivers = [
  'iscsi=nova.virt.libvirt.volume.iscsi.LibvirtISCSIVolumeDriver',
  'iser=nova.virt.libvirt.volume.iser.LibvirtISERVolumeDriver',
  'local=nova.virt.libvirt.volume.volume.LibvirtVolumeDriver',
  'fake=nova.virt.libvirt.volume.volume.LibvirtFakeVolumeDriver',
  'rbd=nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver',
  'sheepdog=nova.virt.libvirt.volume.net.LibvirtNetVolumeDriver',
  'nfs=nova.virt.libvirt.volume.nfs.LibvirtNFSVolumeDriver',
  'smbfs=nova.virt.libvirt.volume.smbfs.LibvirtSMBFSVolumeDriver',
  'aoe=nova.virt.libvirt.volume.aoe.LibvirtAOEVolumeDriver',
  'glusterfs='
  'nova.virt.libvirt.volume.glusterfs.LibvirtGlusterfsVolumeDriver',
  'fibre_channel='
  'nova.virt.libvirt.volume.fibrechannel.'
  'LibvirtFibreChannelVolumeDriver',
  'scality=nova.virt.libvirt.volume.scality.LibvirtScalityVolumeDriver',
  'gpfs=nova.virt.libvirt.volume.gpfs.LibvirtGPFSVolumeDriver',
  'quobyte=nova.virt.libvirt.volume.quobyte.LibvirtQuobyteVolumeDriver',
  'hgst=nova.virt.libvirt.volume.hgst.LibvirtHGSTVolumeDriver',
  'scaleio=nova.virt.libvirt.volume.scaleio.LibvirtScaleIOVolumeDriver',
  'disco=nova.virt.libvirt.volume.disco.LibvirtDISCOVolumeDriver',
  'vzstorage='
  'nova.virt.libvirt.volume.vzstorage.LibvirtVZStorageVolumeDriver',
  'px=nova.virt.libvirt.volume.portworx.LibvirtPXVolumeDriver',
  ]
  ```
  When I attempt to attach a px type volume,
  nova-compute.log shows:
  2017-08-28 18:36:42.678 30277 ERROR nova.volume.cinder 
[req-e4e24ac5-0503-49d5-ba7e-0f988f3f6e8a ac2829767bb4425595686664d1e87963 
d4ebf82a1c8a43e1a08a264bb272a7f1 - - -] Initialize connection failed for volume 
0bf12c1f-d153-4ece-b06d-53feedda6b99 on host alex-openstack-1. Error: The 
server could not comply with the request since it is either malformed or 
otherwise incorrect. (HTTP 400) (Request-ID: 
req-36215f7e-d25d-4f2b-b8b0-eca21ff15279) Code: 400. Attempting to terminate 
connection.
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager 
[req-e4e24ac5-0503-49d5-ba7e-0f988f3f6e8a ac2829767bb4425595686664d1e87963 
d4ebf82a1c8a43e1a08a264bb272a7f1 - - -] [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5] Failed to attach 
0bf12c1f-d153-4ece-b06d-53feedda6b99 at /dev/vdc
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5] Traceback (most recent call last):
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4801, in 
_attach_volume
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5] do_check_attach=False, 
do_driver_attach=True)
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in 
wrapped
  2017-08-28 18:36:42.873 30277 ERROR nova.compute.manager [instance: 
20c33d4d-d73b-4980-b9d8-be74cce859f5]   

[Yahoo-eng-team] [Bug 1713731] Re: SSL setup for multiple projects is broken

2017-08-31 Thread Sean Dague
This is really only a devstack fix

** Changed in: nova
   Status: New => Confirmed

** No longer affects: nova

** No longer affects: neutron

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713731

Title:
  SSL setup for multiple projects is broken

Status in devstack:
  In Progress

Bug description:
  When running a devstack environment with "enable_plugin tls-proxy",
  the unversioned cinder endpoint is returning incorrect links. E.g.
  when we have the cinder v1 endpoint https://192.168.1.4/volume/v1, a
  curl at https://192.168.1.4/volume/ instead shows
  https://192.168.1.4/v1/.

  The fix is to add the "/volume/" path to the "public_endpoint"
  variable in cinder.conf, see https://review.openstack.org/498435

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1713731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714251] [NEW] router_centralized_snat not removed when router migrated from DVR to HA

2017-08-31 Thread venkata anil
Public bug reported:

When a router is migrated from DVR to HA, all ports related to DVR
should be removed. But I still see a port with device_owner
router_centralized_snat that is not removed.
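
A quick way to check for the leftover port, assuming python-neutronclient
and an existing keystone session (the session and router UUID below are
placeholders):

```
# Hypothetical check for a leftover centralized SNAT port after the
# DVR -> HA migration.
from neutronclient.v2_0 import client

# Placeholders -- substitute a real keystone session and router UUID.
sess = None
router_id = '<router-uuid>'

neutron = client.Client(session=sess)
leftover = neutron.list_ports(
    device_id=router_id,
    device_owner='network:router_centralized_snat')['ports']
print(leftover)  # expected to be empty after the migration, but it is not
```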

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714251

Title:
  router_centralized_snat not removed when router migrated from DVR to
  HA

Status in neutron:
  New

Bug description:
  When a router is migrated from DVR to HA, all ports related to DVR
  should be removed. But I still see a port with device_owner
  router_centralized_snat that is not removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714248] [NEW] Compute node HA for ironic doesn't work due to the name duplication of Resource Provider

2017-08-31 Thread Hironori Shiina
Public bug reported:

Description
===
In an environment where there are multiple compute nodes with ironic driver, 
when a compute node goes down, another compute node cannot take over ironic 
nodes.

Steps to reproduce
==
1. Start multiple compute nodes with ironic driver.
2. Register one node to ironic.
3. Stop a compute node which manages the ironic node.
4. Create an instance.

Expected result
===
The instance creation fails.

Actual result
=
The instance is created.

Environment
===
1. Exact version of OpenStack you are running.
openstack-nova-scheduler-15.0.6-2.el7.noarch
openstack-nova-console-15.0.6-2.el7.noarch
python2-novaclient-7.1.0-1.el7.noarch
openstack-nova-common-15.0.6-2.el7.noarch
openstack-nova-serialproxy-15.0.6-2.el7.noarch
openstack-nova-placement-api-15.0.6-2.el7.noarch
python-nova-15.0.6-2.el7.noarch
openstack-nova-novncproxy-15.0.6-2.el7.noarch
openstack-nova-api-15.0.6-2.el7.noarch
openstack-nova-conductor-15.0.6-2.el7.noarch

2. Which hypervisor did you use?
ironic

Details
===
When a nova-compute goes down, another nova-compute will take over ironic nodes 
managed by the failed nova-compute by re-balancing a hash-ring. Then the active 
nova-compute tries creating a
new resource provider with a new ComputeNode object UUID and the hypervisor 
name (ironic node name)[1][2][3]. This creation fails with a conflict(409) 
since there is a resource provider with the same name created by the failed 
nova-compute.

When a new instance is requested, the scheduler gets only an old
resource provider for the ironic node[4]. Then, the ironic node is not
selected:

WARNING nova.scheduler.filters.compute_filter [req-
a37d68b5-7ab1-4254-8698-502304607a90 7b55e61a07304f9cab1544260dcd3e41
e21242f450d948d7af2650ac9365ee36 - - -] (compute02, 8904aeeb-a35b-4ba3
-848a-73269fdde4d3) ram: 4096MB disk: 849920MB io_ops: 0 instances: 0
has not been heard from in a while

[1] 
https://github.com/openstack/nova/blob/stable/ocata/nova/compute/resource_tracker.py#L464
[2] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L630
[3] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L410
[4] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filter_scheduler.py#L183
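
For illustration, the conflict can be reproduced with a bare placement
request roughly like the following (endpoint and token are placeholders;
the 409 matches the behaviour described above):

```
# Hypothetical sketch: the take-over compute creates a resource provider
# with a fresh UUID but the same name as the provider left behind by the
# failed compute, and placement rejects it with a conflict.
import uuid
import requests

PLACEMENT = "http://controller/placement"   # assumed endpoint
HEADERS = {"X-Auth-Token": "<token>",       # assumed auth token
           "OpenStack-API-Version": "placement 1.1"}

resp = requests.post(PLACEMENT + "/resource_providers", headers=HEADERS,
                     json={"uuid": str(uuid.uuid4()),  # new ComputeNode UUID
                           "name": "ironic-node-01"})  # same hypervisor name
print(resp.status_code)  # 409: name already taken by the stale provider
```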

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714248

Title:
  Compute node HA for ironic doesn't work due to the name duplication of
  Resource Provider

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  In an environment where there are multiple compute nodes with ironic driver, 
  when a compute node goes down, another compute node cannot take over ironic 
nodes.

  Steps to reproduce
  ==
  1. Start multiple compute nodes with ironic driver.
  2. Register one node to ironic.
  3. Stop a compute node which manages the ironic node.
  4. Create an instance.

  Expected result
  ===
  The instance creation fails.

  Actual result
  =
  The instance is created.

  Environment
  ===
  1. Exact version of OpenStack you are running.
  openstack-nova-scheduler-15.0.6-2.el7.noarch
  openstack-nova-console-15.0.6-2.el7.noarch
  python2-novaclient-7.1.0-1.el7.noarch
  openstack-nova-common-15.0.6-2.el7.noarch
  openstack-nova-serialproxy-15.0.6-2.el7.noarch
  openstack-nova-placement-api-15.0.6-2.el7.noarch
  python-nova-15.0.6-2.el7.noarch
  openstack-nova-novncproxy-15.0.6-2.el7.noarch
  openstack-nova-api-15.0.6-2.el7.noarch
  openstack-nova-conductor-15.0.6-2.el7.noarch

  2. Which hypervisor did you use?
  ironic

  Details
  ===
  When a nova-compute goes down, another nova-compute will take over ironic 
nodes managed by the failed nova-compute by re-balancing a hash-ring. Then the 
active nova-compute tries creating a
  new resource provider with a new ComputeNode object UUID and the hypervisor 
name (ironic node name)[1][2][3]. This creation fails with a conflict(409) 
since there is a resource provider with the same name created by the failed 
nova-compute.

  When a new instance is requested, the scheduler gets only an old
  resource provider for the ironic node[4]. Then, the ironic node is not
  selected:

  WARNING nova.scheduler.filters.compute_filter [req-
  a37d68b5-7ab1-4254-8698-502304607a90 7b55e61a07304f9cab1544260dcd3e41
  e21242f450d948d7af2650ac9365ee36 - - -] (compute02, 8904aeeb-a35b-4ba3
  -848a-73269fdde4d3) ram: 4096MB disk: 849920MB io_ops: 0 instances: 0
  has not been heard from in a while

  [1] 
https://github.com/openstack/nova/blob/stable/ocata/nova/compute/resource_tracker.py#L464
  [2] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L630
  [3] 

[Yahoo-eng-team] [Bug 1714240] [NEW] glance re-spawns a child when terminating

2017-08-31 Thread Bernhard M. Wiedemann
Public bug reported:

When sending a SIGTERM to the main glance-api process,
api.log shows
2017-08-31 13:10:30.996 10618 INFO glance.common.wsgi [-] Removed dead child 
10628
2017-08-31 13:10:31.004 10618 INFO glance.common.wsgi [-] Started child 10642
2017-08-31 13:10:31.006 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
starting up on https://10.162.184.83:5510
2017-08-31 13:10:31.008 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
exited, is_accepting=True
2017-08-31 13:10:31.009 10642 INFO glance.common.wsgi [-] Child 10642 exiting 
normally

This is because kill_children sends a SIGTERM to all children
and wait_on_children restarts one when it notices a dead child.
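
A minimal sketch of that interaction, assuming the simplified structure
described above (not the actual glance.common.wsgi code):

```
# Hypothetical simplification: the parent asks all children to stop, but
# the reaper loop treats the exiting child as an ordinary dead child and
# immediately starts a replacement, so one child outlives the shutdown.
import os
import signal

def kill_children(children):
    for pid in children:
        os.kill(pid, signal.SIGTERM)   # ask every child to exit

def wait_on_children(children, keep_running, start_child):
    while keep_running():
        pid, _status = os.wait()       # a child exited (possibly from SIGTERM)
        children.discard(pid)
        start_child()                  # ...and is replaced anyway
```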

We noticed this because it triggered fencing in our cloud's pacemaker
setup: systemd seems to have a race condition in the cgroup code that
should detect that all related services have terminated.


# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; 
disabled; vendor preset: disabled)
   Active: deactivating (final-sigterm) since Thu 2017-08-31 10:13:48 UTC; 1min 
14s ago
 Main PID: 25077 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
   CGroup: /system.slice/openstack-glance-api.service
Aug 31 10:13:48 d08-9e-01-b4-9e-42 systemd[1]: Stopping OpenStack Image Service 
API server...
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
State 'stop-final-sigterm' timed out. Killing.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: Stopped OpenStack Image Service 
API server.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
Unit entered failed state.
Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
Failed with result 'timeout'.

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  When sending a SIGTERM to the main glance-api process,
  api.log shows
  2017-08-31 13:10:30.996 10618 INFO glance.common.wsgi [-] Removed dead child 
10628
  2017-08-31 13:10:31.004 10618 INFO glance.common.wsgi [-] Started child 10642
  2017-08-31 13:10:31.006 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
starting up on https://10.162.184.83:5510
  2017-08-31 13:10:31.008 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
exited, is_accepting=True
  2017-08-31 13:10:31.009 10642 INFO glance.common.wsgi [-] Child 10642 exiting 
normally
  
  This is because kill_children sends a SIGTERM to all children
  and wait_on_children restarts one, when it notices a dead child
+ 
+ We noticed this, because this triggered a fencing in our cloud's
+ pacemaker setup because systemd seems to have a race condition in the
+ cgroup code that should detect that all related services have
+ terminated.
+ 
+ 
+ # systemctl status openstack-glance-api
+ ● openstack-glance-api.service - OpenStack Image Service API server
+Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; 
disabled; vendor preset: disabled)
+Active: deactivating (final-sigterm) since Thu 2017-08-31 10:13:48 UTC; 
1min 14s ago
+  Main PID: 25077 (code=exited, status=0/SUCCESS)
+ Tasks: 0 (limit: 512)
+CGroup: /system.slice/openstack-glance-api.service
+ Aug 31 10:13:48 d08-9e-01-b4-9e-42 systemd[1]: Stopping OpenStack Image 
Service API server...
+ Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
State 'stop-final-sigterm' timed out. Killing.
+ Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: Stopped OpenStack Image 
Service API server.
+ Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
Unit entered failed state.
+ Aug 31 10:15:21 d08-9e-01-b4-9e-42 systemd[1]: openstack-glance-api.service: 
Failed with result 'timeout'.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1714240

Title:
  glance re-spawns a child when terminating

Status in Glance:
  New

Bug description:
  When sending a SIGTERM to the main glance-api process,
  api.log shows
  2017-08-31 13:10:30.996 10618 INFO glance.common.wsgi [-] Removed dead child 
10628
  2017-08-31 13:10:31.004 10618 INFO glance.common.wsgi [-] Started child 10642
  2017-08-31 13:10:31.006 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
starting up on https://10.162.184.83:5510
  2017-08-31 13:10:31.008 10642 INFO eventlet.wsgi.server [-] (10642) wsgi 
exited, is_accepting=True
  2017-08-31 13:10:31.009 10642 INFO glance.common.wsgi [-] Child 10642 exiting 
normally

  This is because kill_children sends a SIGTERM to all children
  and wait_on_children restarts one when it notices a dead child.

  We noticed this because it triggered fencing in our cloud's pacemaker
  setup: systemd seems to have a race condition in the cgroup code that
  should detect that all related services have terminated.

[Yahoo-eng-team] [Bug 1714247] [NEW] Cleaning up deleted instances leaks resources

2017-08-31 Thread Lucian Petrut
Public bug reported:

When the nova-compute service cleans up an instance that still exists on
the host although it has been deleted from the DB, the corresponding
network info is not properly retrieved.

For this reason, vif ports will not be cleaned up.

In this situation there may also be stale volume connections. Those will
be leaked as well as os-brick attempts to flush those inaccessible
devices, which will fail. As per a recent os-brick change, a 'force'
flag must be set in order to ignore flush errors.
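
For reference, the os-brick call this refers to looks roughly like the
following (the connector type and the connection/device dictionaries are
placeholders for the sketch):

```
# Hypothetical sketch: flushing an already-inaccessible device fails, so
# the disconnect has to be forced to ignore flush errors.
from os_brick.initiator import connector

# Placeholders -- in nova these come from the original volume attachment.
connection_properties = {}
device_info = {}

conn = connector.InitiatorConnector.factory('ISCSI', None)  # assumed connector type
conn.disconnect_volume(connection_properties, device_info,
                       force=True,           # ignore flush errors
                       ignore_errors=True)
```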

Log: http://paste.openstack.org/raw/620048/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  New

Bug description:
  When the nova-compute service cleans up an instance that still exists
  on the host although it has been deleted from the DB, the corresponding
  network info is not properly retrieved.

  For this reason, vif ports will not be cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well as os-brick attempts to flush those
  inaccessible devices, which will fail. As per a recent os-brick
  change, a 'force' flag must be set in order to ignore flush errors.

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713783] Re: After failed evacuation the recovered source compute tries to delete the instance

2017-08-31 Thread Matt Riedemann
We should backport this as it actually leads to deleting things in the
source node...

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/pike
   Importance: Undecided => High

** Changed in: nova
   Status: New => Triaged

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Information type changed from Public to Private Security

** Changed in: nova/newton
   Importance: Undecided => High

** Changed in: nova/newton
   Status: New => Triaged

** Changed in: nova/ocata
   Status: New => Triaged

** Changed in: nova/pike
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713783

Title:
  After failed evacuation the recovered source compute tries to delete
  the instance

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged

Bug description:
  Description
  ===
  In case of a failed evacuation attempt the status of the migration is 
'accepted' instead of 'failed' so when source compute is recovered the compute 
manager tries to delete the instance from the source host. However a secondary 
fault prevents deleting the allocation in placement so the actual deletion of 
the instance fails as well.

  Steps to reproduce
  ==
  The following functional test reproduces the bug: 
https://review.openstack.org/#/c/498482/
  What it does: initiates an evacuation when no valid host is available; the 
evacuation fails, but nova still tries to delete the instance.
  Logs:

  2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception 
during message handling
  NoValidHost: No valid host was found. There are not enough hosts 
available.
  2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running 
periodic for compute1 (host1)
  2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" 
status: 200 len: 152 microversion: 1.0
  2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final 
resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB 
used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]
  2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running 
periodic for compute2 (host2)
  2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" 
status: 200 len: 54 microversion: 1.0
  2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final 
resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB 
used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]
  2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,198 INFO [nova.tests.functional.test_servers] 
Finished with periodics
  2017-08-29 19:11:16,255 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"GET 

[Yahoo-eng-team] [Bug 1713783] Re: After failed evacuation the recovered source compute tries to delete the instance

2017-08-31 Thread Matt Riedemann
It looks like the _destroy_evacuated_instances method in the compute
manager has always filtered migrations on the 'accepted' status, since
originally this code was just meant to clean up local resources once an
evacuation from the source host has started, which is fine. The problem
is with removing the source node allocations if the evacuation failed,
but if it failed in the conductor, we can fix that here:

https://review.openstack.org/#/c/499237/

If it failed in the destination compute service, the migration status
should be set to 'failed' and the migration filter in
_destroy_evacuated_instances would filter it out.
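
For context, a hedged illustration (not the actual nova code) of why a
migration left in 'accepted' is picked up by that cleanup path while one
marked 'failed' would be skipped:

    # Illustration only: a filter like the one discussed above treats
    # 'accepted' migrations as evacuations in progress and cleans up the
    # source host, while 'failed' migrations are left alone.
    def evacuations_to_clean_up(migrations):
        return [m for m in migrations if m['status'] == 'accepted']

    migrations = [
        {'id': 1, 'status': 'accepted'},  # failed in conductor, never updated
        {'id': 2, 'status': 'failed'},    # failed on the destination compute
    ]
    print(evacuations_to_clean_up(migrations))  # only migration 1 is returned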

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713783

Title:
  After failed evacuation the recovered source compute tries to delete
  the instance

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  Triaged
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  Triaged

Bug description:
  Description
  ===
  In case of a failed evacuation attempt, the status of the migration is 
'accepted' instead of 'failed', so when the source compute is recovered the 
compute manager tries to delete the instance from the source host. However, a 
secondary fault prevents deleting the allocation in placement, so the actual 
deletion of the instance fails as well.

  Steps to reproduce
  ==
  The following functional test reproduces the bug: 
https://review.openstack.org/#/c/498482/
  What it does: initiates an evacuation when no valid host is available; the 
evacuation fails, but nova still tries to delete the instance.
  Logs:

  2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception 
during message handling
  NoValidHost: No valid host was found. There are not enough hosts 
available.
  2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running 
periodic for compute1 (host1)
  2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" 
status: 200 len: 152 microversion: 1.0
  2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final 
resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB 
used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]
  2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running 
periodic for compute2 (host2)
  2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" 
status: 200 len: 54 microversion: 1.0
  2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final 
resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB 
used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]
  2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,198 INFO 

[Yahoo-eng-team] [Bug 1714237] [NEW] After deleting a migration the allocations are not ceased from the destination host

2017-08-31 Thread Lajos Katona
Public bug reported:

After deleting a live migration, allocations remain on both the source and 
the destination hosts.
Reproduction:
- Boot a VM on host1
- Start live migrating it to host2
- Delete the migration
- Allocations now exist on both host1 and host2
This situation doesn't change after running the periodic tasks.
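
One way to observe the leftover allocations, sketched here with a placeholder
endpoint and token (the GET /allocations/{consumer_uuid} call itself is the
standard placement API):

    # Hedged sketch: list the resource providers holding allocations for the
    # instance. After the deleted live migration only one provider should be
    # listed, but with this bug both host1 and host2 show up.
    import requests

    PLACEMENT = 'http://controller/placement'  # placeholder endpoint
    TOKEN = '<admin-token>'                    # placeholder auth token
    INSTANCE_UUID = '<instance-uuid>'          # placeholder consumer uuid

    resp = requests.get('%s/allocations/%s' % (PLACEMENT, INSTANCE_UUID),
                        headers={'X-Auth-Token': TOKEN})
    for rp_uuid, alloc in resp.json()['allocations'].items():
        print(rp_uuid, alloc['resources'])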

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration placement

** Tags added: live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714237

Title:
  After deleting a migration the allocations are not ceased from the
  destination host

Status in OpenStack Compute (nova):
  New

Bug description:
  After deleting a live migration, allocations remain on both the source and 
the destination hosts.
  Reproduction:
  - Boot a VM on host1
  - Start live migrating it to host2
  - Delete the migration
  - Allocations now exist on both host1 and host2
  This situation doesn't change after running the periodic tasks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714235] [NEW] evacuate API does not restrict one from trying to evacuate to the source host

2017-08-31 Thread Matt Riedemann
Public bug reported:

This is purely based on code inspection, but the compute API method
'evacuate' does not check if the specified host (if there was one) is
different from instance.host. It checks whether the service on that host
is up, but the service could be down and you can still specify
instance.host as the target.
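
A hedged sketch of the kind of guard the report implies is missing; the
exception name here is hypothetical, the real API would raise whatever it
maps to an HTTP 400:

    # Illustrative guard, not the actual nova code: refuse an evacuation whose
    # target host is the host the instance is already on, so the caller gets
    # an immediate 400 instead of a late RPC failure and a stuck task_state.
    def check_evacuate_target(instance, host):
        if host is not None and host == instance.host:
            # Hypothetical exception; nova would use one mapped to HTTP 400.
            raise ValueError('Cannot evacuate instance %s to its current '
                             'host %s' % (instance.uuid, host))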

Eventually the compute API will RPC cast to conductor task manager which
will fail with an RPC error trying to RPC cast to the
ComputeManager.rebuild_instance method on the compute service, which is
down.

The bug here is that instead of getting an obvious 400 error from the
API, you're left with few details when it fails. There should be an
instance action and finish event, but only the admin can see the
traceback in the event. Also, instance.task_state would be left in the
'rebuilding' state and would have to be reset before the instance can be
used again.

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: api evacuate

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Confirmed

** Tags added: api evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714235

Title:
  evacuate API does not restrict one from trying to evacuate to the
  source host

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This is purely based on code inspection, but the compute API method
  'evacuate' does not check if the specified host (if there was one) is
  different from instance.host. It checks whether the service on that
  host is up, but the service could be down and you can still specify
  instance.host as the target.

  Eventually the compute API will RPC cast to conductor task manager
  which will fail with an RPC error trying to RPC cast to the
  ComputeManager.rebuild_instance method on the compute service, which
  is down.

  The bug here is that instead of getting an obvious 400 error from the
  API, you're left with few details when it fails. There should be an
  instance action and finish event, but only the admin can see the
  traceback in the event. Also, instance.task_state would be left in the
  'rebuilding' state and would have to be reset before the instance can
  be used again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714212] [NEW] neutron-db-manage on subproject neutron-fwaas fails in Pike

2017-08-31 Thread Jens Offenbach
Public bug reported:

I have set up OpenStack Pike on Ubuntu 16.04.

Running:
$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini --subproject neutron_fwaas 
upgrade head" neutron

fails with the following error:

INFO  [alembic.runtime.migration] Running upgrade f83a0b2964d0 -> fd38cd995cc0, 
change shared attribute for firewall resource
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in 
sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
687, in main
return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
208, in do_upgrade
desc=branch, sql=CONF.command.sql)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
109, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 174, in 
upgrade
script.run_env()
  File "/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 416, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in 
load_python_file
module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in 
load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 86, in 
run_migrations_online()
  File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 77, in run_migrations_online
context.run_migrations()
  File "", line 8, in run_migrations
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 
807, in run_migrations
self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 
321, in run_migrations
step.migration_fn(**kw)
  File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/db/migration/alembic_migrations/versions/pike/contract/fd38cd995cc0_shared_attribute_for_firewall_resources.py",
 line 33, in upgrade
existing_type=sa.Boolean)
  File "", line 8, in alter_column
  File "", line 3, in alter_column
  File "/usr/lib/python2.7/dist-packages/alembic/operations/ops.py", line 1420, 
in alter_column
return operations.invoke(alt)
  File "/usr/lib/python2.7/dist-packages/alembic/operations/base.py", line 318, 
in invoke
return fn(self, operation)
  File "/usr/lib/python2.7/dist-packages/alembic/operations/toimpl.py", line 
53, in alter_column
**operation.kw
  File "/usr/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 48, in 
alter_column
else existing_autoincrement
  File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 118, in 
_exec
return conn.execute(construct, *multiparams, **params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 945, 
in execute
return meth(self, multiparams, params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 68, in 
_execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1002, 
in _execute_ddl
compiled
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1189, 
in _execute_context
context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1398, 
in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, 
in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1182, 
in _execute_context
context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
470, in do_execute
cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 166, in 
execute
result = self._query(query)
  File "/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 322, in 
_query
conn.query(q)
  File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 852, in 
query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1053, in 
_read_query_result
result.read()
  File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1336, in 
read
first_packet = self.connection._read_packet()
  File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1010, in 
_read_packet
packet.check_error()
  File "/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in 
check_error

[Yahoo-eng-team] [Bug 1714208] [NEW] Router/network creation in HA mode fails in Pike

2017-08-31 Thread Jens Offenbach
Public bug reported:

I have set up OpenStack Pike on Ubuntu 16.04 in HA mode (2 controllers,
3 compute nodes). In the current Pike release, router and network
creation in HA mode fails, whereas creating routers in non-HA mode
succeeds.

The neutron-server.log gives me the following:
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
[req-5e467987-dce5-4379-a5e9-2192a3a43702 3a5eebf84f7543fc832ef095a581c9bf 
e02e5f2794154037b756aaf366a4f80d - default default] DBAPIError exception 
wrapped from (pymysql.err.InternalError) (4025, u'CONSTRAINT `CONSTRAINT_3` 
failed for `neutron`.`networks`') [SQL: u'INSERT INTO networks (project_id, id, 
name, status, admin_state_up, vlan_transparent, availability_zone_hints, 
standard_attr_id) VALUES (%(project_id)s, %(id)s, %(name)s, %(status)s, 
%(admin_state_up)s, %(vlan_transparent)s, %(availability_zone_hints)s, 
%(standard_attr_id)s)'] [parameters: {'status': 'ACTIVE', 
'availability_zone_hints': None, 'name': u'HA network tenant 
e02e5f2794154037b756aaf366a4f80d', 'admin_state_up': 1, 'vlan_transparent': 
None, 'standard_attr_id': 43, 'project_id': '', 'id': 
'fb3f515d-26de-4872-aa77-28f9aebecedb'}]: InternalError: (4025, u'CONSTRAINT 
`CONSTRAINT_3` failed for `neutron`.`networks`')
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1182, in 
_execute_context
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters context)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 470, in 
do_execute
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 166, in execute
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/cursors.py", line 322, in _query
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 852, in query
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1053, in 
_read_query_result
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1336, in read
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 1010, in 
_read_packet
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in 
check_error
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass(errno, errval)
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
InternalError: (4025, u'CONSTRAINT `CONSTRAINT_3` failed for 
`neutron`.`networks`')
2017-08-31 11:05:38.736 4158 ERROR oslo_db.sqlalchemy.exc_filters 
2017-08-31 11:05:38.800 4158 ERROR neutron.api.v2.resource 
[req-5e467987-dce5-4379-a5e9-2192a3a43702 3a5eebf84f7543fc832ef095a581c9bf 
e02e5f2794154037b756aaf366a4f80d - default default] create failed: No details.: 
CallbackFailure: Callback 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin._before_router_create--9223372036853416976
 failed with "(pymysql.err.InternalError) (4025, u'CONSTRAINT `CONSTRAINT_3` 
failed for `neutron`.`networks`') [SQL: u'INSERT INTO networks (project_id, id, 
name, status, admin_state_up, vlan_transparent, availability_zone_hints, 
standard_attr_id) VALUES (%(project_id)s, %(id)s, %(name)s, %(status)s, 
%(admin_state_up)s, %(vlan_transparent)s, %(availability_zone_hints)s, 
%(standard_attr_id)s)'] [parameters: {'status': 'ACTIVE', 
'availability_zone_hints': None, 

[Yahoo-eng-team] [Bug 1714195] [NEW] nova task shuts down or deletes an instance when the instance status in the database differs from the hypervisor

2017-08-31 Thread 曾永明
Public bug reported:

1. The hypervisor is vCenter or KVM.
2. The instance is running (started) in vCenter or KVM.
3. The instance status is stopped in the nova database.

   This is a high-risk situation, because an instance started by vCenter
may be running business workloads, yet the nova task shuts the instance
down or deletes it. The vCenter status should therefore be synchronized
to the nova database.
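
A hedged illustration of the mismatch described above; this is not nova's
actual sync code, just the shape of the decision that makes the situation
risky when the database is treated as the source of truth:

    # Illustration only: if the DB says 'stopped' but the hypervisor reports
    # 'running', a DB-as-source-of-truth sync powers the instance off even
    # though it may be serving traffic; the reporter asks for the opposite,
    # i.e. updating the DB from the hypervisor state instead.
    def reconcile(db_state, hv_state, stop_instance, update_db_state):
        if db_state == 'stopped' and hv_state == 'running':
            stop_instance()              # current, risky behaviour
            # update_db_state('running') # behaviour requested in this bug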

** Affects: nova
 Importance: Undecided
 Assignee: 曾永明 (zengyongming)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => 曾永明 (zengyongming)

** Description changed:

  1.hypervisor is vCenter or kvm
  2.the instance status is start in vCenter or kvm
  3.the instance status is stop in nova database
  
-this is High-risk operation,beause when instance is started by
+    this is High-risk operation,beause when instance is started by
  vCenter ,it is is running business, but nova task shutdown instance or
- .so we should synchronized vCenter status to nova database
+ delete instance .so we should synchronized vCenter status to nova
+ database

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714195

Title:
  nova task shuts down or deletes an instance when the instance status in
  the database differs from the hypervisor

Status in OpenStack Compute (nova):
  New

Bug description:
  1. The hypervisor is vCenter or KVM.
  2. The instance is running (started) in vCenter or KVM.
  3. The instance status is stopped in the nova database.

     This is a high-risk situation, because an instance started by vCenter
  may be running business workloads, yet the nova task shuts the instance
  down or deletes it. The vCenter status should therefore be synchronized
  to the nova database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714179] [NEW] keystone project can not update or search extra fields

2017-08-31 Thread 曾永明
Public bug reported:

Keystone projects can not update or search extra fields.

** Affects: keystone
 Importance: Undecided
 Assignee: 曾永明 (zengyongming)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => 曾永明 (zengyongming)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1714179

Title:
  keystone project can not update or search extra fields

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone projects can not update or search extra fields.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1714179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714072] Re: PUT /allocations/{consumer_id} fails with a 500 if "resources: {}"

2017-08-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499270
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=10f8a9aa127cfaecab368e26c3b896e203e301bc
Submitter: Jenkins
Branch:master

commit 10f8a9aa127cfaecab368e26c3b896e203e301bc
Author: Chris Dent 
Date:   Wed Aug 30 20:30:19 2017 +0100

[placement] Require at least one resource class in allocation

If an allocation was made with an empty resources object, a 500 response
code would result. This change adjusts the schema to use minProperties
of 1, to say there must be at least one resource class and value pair in
the allocation. If there is not, a 400 response is returned.

As this fixes a 500 response into a useful error, no microversion is
required. A previous gabbi file which demonstrated the problem has been
updated to demonstrate that the problem has been fixed.

Change-Id: I7d9c64c77586564fb3bdbe92c693bd2a1bc06f24
Closes-Bug: #1714072
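
A hedged sketch of the constraint the commit message describes, expressed as
the kind of JSON-Schema fragment (written as a Python dict) that placement
uses for the 'resources' object; this fragment is illustrative, not the
actual ALLOCATION_SCHEMA:

    # Illustrative fragment: with minProperties of 1, "resources": {} fails
    # schema validation (a 400) instead of reaching the DB layer and raising
    # the IndexError that produced the 500.
    RESOURCES_SCHEMA = {
        'type': 'object',
        'minProperties': 1,   # the fix described in the commit message
        'patternProperties': {
            '^[A-Z0-9_]+$': {'type': 'integer', 'minimum': 1},
        },
        'additionalProperties': False,
    }

    import jsonschema
    jsonschema.validate({'VCPU': 1}, RESOURCES_SCHEMA)  # passes
    jsonschema.validate({}, RESOURCES_SCHEMA)           # raises ValidationError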


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714072

Title:
  PUT /allocations/{consumer_id} fails with a 500 if "resources: {}"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  I hit this while writing a functional recreate test for bug 1713786
  where the destination node during an evacuate doesn't have its
  allocations created by the scheduler. When the source node comes up
  after the evacuation, it tries to remove the allocations on the source
  node, which is the only one because of bug 1713786, but that results
  in sending a request like this:

  2017-08-30 14:45:13,495 INFO [nova.scheduler.client.report] Sending
  updated allocation [{'resource_provider': {'uuid':
  '7ab9dab7-65c6-4961-9403-c8fc50dedb6b'}, 'resources': {}}] for
  instance dc8a686c-ad92-48f3-8594-d00c6e671a1c after removing resources
  for 7ab9dab7-65c6-4961-9403-c8fc50dedb6b.

  And you get this stacktrace in the Placement API:

  2017-08-30 14:45:13,502 ERROR [nova.api.openstack.placement.handler] Uncaught 
exception
  Traceback (most recent call last):
File "nova/api/openstack/placement/handler.py", line 217, in __call__
  return dispatch(environ, start_response, self._map)
File "nova/api/openstack/placement/handler.py", line 144, in dispatch
  return handler(environ, start_response)
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/webob/dec.py",
 line 131, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "nova/api/openstack/placement/wsgi_wrapper.py", line 29, in 
call_func
  super(PlacementWsgify, self).call_func(req, *args, **kwargs)
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/webob/dec.py",
 line 196, in call_func
  return self.func(req, *args, **kwargs)
File "nova/api/openstack/placement/microversion.py", line 268, in 
decorated_func
  return _find_method(f, version)(req, *args, **kwargs)
File "nova/api/openstack/placement/util.py", line 138, in 
decorated_function
  return f(req)
File "nova/api/openstack/placement/handlers/allocation.py", line 286, 
in set_allocations
  return _set_allocations(req, ALLOCATION_SCHEMA_V1_8)
File "nova/api/openstack/placement/handlers/allocation.py", line 252, 
in _set_allocations
  allocations.create_all()
File "nova/objects/resource_provider.py", line 1877, in create_all
  self._set_allocations(self._context, self.objects)
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 150, in wrapper
  ectxt.value = e.inner_exc
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 138, in wrapper
  return f(*args, **kwargs)
File 
"/home/user/git/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 979, in wrapper
  return fn(*args, **kwargs)
File "nova/objects/resource_provider.py", line 1835, in _set_allocations
  consumer_id = allocs[0].consumer_id
  IndexError: list index out of range

  
  The schema validation on PUT /allocations requires a minimum of one provider 
in the request, but it doesn't validate that there is at least one resource for 
that provider: