[Yahoo-eng-team] [Bug 1778945] Re: Complexity in token provider APIs

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/577567
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=140a34b439d9aa60712a164b21622e721a60dfc2
Submitter: Zuul
Branch:master

commit 140a34b439d9aa60712a164b21622e721a60dfc2
Author: Lance Bragstad 
Date:   Fri Jun 22 22:02:04 2018 +

Remove KeystoneToken object

This commit removes the original KeystoneToken object in favor of the
new TokenModel object. Since we have a token provider that knows how
to deal with TokenModel object, we don't really need another object
that uses reflection at all.

Closes-Bug: 1778945
Change-Id: I778cab0a6449184ecf7d5ccfbfa12791be139236


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1778945

Title:
  Complexity in token provider APIs

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The authentication APIs of keystone (GET|HEAD|POST|DELETE
  /v3/auth/tokens) leverage the token provider API to deal with token
  operations. This has traditionally been one of the most complicated
  areas of keystone for multiple reasons.

  First, tokens are represented in different ways depending on how the
  user authenticates, making it non-trivial to build accurate
  representations of tokens.

  Second, in the past keystone supported different token providers and
  different token storage systems which were eventually decoupled. This
  increased the various paths through the token provider code,
  complicating the business logic.

  Third, the original implementation of the token provider interface relied
  on a storage mechanism to hold the entire token reference. This meant
  that the pluggable token provider needed to understand the complex
  data structure that is a token response. This was traditionally passed
  into the token provider via the token provider interface as something like
  ``token_data`` or ``token_ref``. The dictionary was essentially what
  would be passed to the user over the API, but responsibility for that
  reference would get passed to pluggable parts of keystone. This meant
  that it was possible to introduce breaking API changes by using
  different token providers. It also made it hard for people to
  implement token providers, because they needed to understand the
  intricacies of keystone's authentication API just to implement a new
  token format.

  These are but a few of the reasons why the token provider API has
  become increasingly complex and fragile over time.

  We can improve the situation for maintainers and for external developers
  providing their own token providers by doing a couple of things. First,
  we can consolidate all token logic into a token object that is
  responsible for maintaining the business logic for tokens. Second, we
  can transition parts of keystone to use this new object instead of the
  actual dictionaries that represent token API responses. Third, we can
  introduce a method that converts a token model to a token response,
  and keep it in a single place close to the edge of the
  authentication API. This will force us to use the object
  representation within keystone and to be explicit about what we give to
  users, as opposed to opening ourselves up to breaking the API by passing
  token references across interfaces outside of our control. Finally, we
  can redefine the token provider APIs to be extremely explicit; following
  the above steps leaves less responsibility with people implementing
  token providers, since keystone handles the rest. A redefined interface
  will allow us to add and remove token providers faster and more easily.
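
  A minimal sketch of that direction (illustrative only, not keystone's
  actual implementation; only the TokenModel name comes from this report):
  all business logic lives on the token object, and exactly one function at
  the edge of the authentication API turns it into a response body.

  import datetime

  class TokenModel(object):
      """Internal token representation that owns the business logic."""

      def __init__(self, user_id, methods, project_id=None,
                   lifetime=datetime.timedelta(hours=1)):
          self.user_id = user_id
          self.methods = methods
          self.project_id = project_id
          self.issued_at = datetime.datetime.utcnow()
          self.expires_at = self.issued_at + lifetime

      @property
      def project_scoped(self):
          return self.project_id is not None

  def render_token_response(token):
      """The single place a TokenModel becomes an API response dict."""
      body = {
          'methods': token.methods,
          'user': {'id': token.user_id},
          'issued_at': token.issued_at.isoformat(),
          'expires_at': token.expires_at.isoformat(),
      }
      if token.project_scoped:
          body['project'] = {'id': token.project_id}
      return {'token': body}

  if __name__ == '__main__':
      print(render_token_response(
          TokenModel(user_id='abc123', methods=['password'], project_id='p1')))

  Token providers then only need to understand TokenModel, never the raw
  response dictionary.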

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1778945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784992] [NEW] Install and configure controller node for Ubuntu in nova

2018-08-01 Thread jaya
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: __

In the part quoted below:

"5. Create the cell1 cell:

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650"

the string "109e1d4b-536a-40d0-83c6-5f121b82b650" is the command's output, not
part of the command, and should be omitted (or clearly marked as output),
based on how the rest of this document is written.

---
Release: 16.1.5.dev49 on 2018-07-30 19:25
SHA: b58c7f033771e3ea228e4b40c796d1bc95a087f5
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-ubuntu.rst
URL: https://docs.openstack.org/nova/pike/install/controller-install-ubuntu.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784992

Title:
  Install and configure controller node for Ubuntu in nova

Status in OpenStack Compute (nova):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __

  In the part quoted below:

  "5. Create the cell1 cell:

  # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
  109e1d4b-536a-40d0-83c6-5f121b82b650"

  the string "109e1d4b-536a-40d0-83c6-5f121b82b650" is the command's output,
  not part of the command, and should be omitted (or clearly marked as
  output), based on how the rest of this document is written.

  ---
  Release: 16.1.5.dev49 on 2018-07-30 19:25
  SHA: b58c7f033771e3ea228e4b40c796d1bc95a087f5
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/controller-install-ubuntu.rst
  URL: 
https://docs.openstack.org/nova/pike/install/controller-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784992/+subscriptions



[Yahoo-eng-team] [Bug 1784994] [NEW] Install and configure (Ubuntu) in glance

2018-08-01 Thread jaya
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: __

In the following part of the document:
"Use the database access client to connect to the database server as the root
user:
$ mysql -u root -p"

Compared with other project documentation (e.g. the initial OpenStack
installation, identity, nova) and based on the wording before the "mysql"
command, mysql should be executed as the root shell user. When run as the
root shell user, mysql will not ask for a password; this is in line with the
other documentation and avoids confusion for new users installing OpenStack.

So the part above should be written as:
"Use the database access client to connect to the database server as the root
user:
# mysql"


---
Release: 15.0.2.dev4 on 'Wed Jun 27 16:17:38 2018, commit a4562ab'
SHA: a4562abeb13b47f8bc765f792794f6d214df96cd
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
URL: https://docs.openstack.org/glance/pike/install/install-ubuntu.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1784994

Title:
  Install and configure (Ubuntu) in glance

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __

  In the following part of the document:
  "Use the database access client to connect to the database server as the root
  user:
  $ mysql -u root -p"

  Compared with other project documentation (e.g. the initial OpenStack
  installation, identity, nova) and based on the wording before the "mysql"
  command, mysql should be executed as the root shell user. When run as the
  root shell user, mysql will not ask for a password; this is in line with
  the other documentation and avoids confusion for new users installing
  OpenStack.

  So the part above should be written as:
  "Use the database access client to connect to the database server as the root
  user:
  # mysql"

  
  ---
  Release: 15.0.2.dev4 on 'Wed Jun 27 16:17:38 2018, commit a4562ab'
  SHA: a4562abeb13b47f8bc765f792794f6d214df96cd
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
  URL: https://docs.openstack.org/glance/pike/install/install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1784994/+subscriptions



[Yahoo-eng-team] [Bug 1784983] [NEW] we should not set instance to ERROR state when rebuild_claim fails

2018-08-01 Thread Tao Li
Public bug reported:

Description
===
When a compute node is down, we evacuate the instances located on that
compute node. In a concurrent scenario, several instances select the same
destination node. Unfortunately, the memory is not enough for some instances,
so the destination node raises the ComputeResourcesUnavailable exception and
finally sets the instance to the error state. But I think that on a
ComputeResourcesUnavailable exception we should not set the instance to the
error state; in fact the instance remains on the source node.

Steps to reproduce
==
* Create many instances on one source node, while the destination node has
little resource, such as memory.
* Power off the compute node or stop the compute service on that node.
* Concurrently evacuate all instances on the source node, specifying the
destination node.
* You will find one or more instances in the error state.


Expected result
===
No instance should end up in the error state when there are not enough resources.

Actual result
=
Some instances are in the error state.

Environment
===
Pike release, but I found the issue also exists on the master branch.


Logs & Configs
==
2018-08-01 16:21:45.739 41514 DEBUG nova.notifications.objects.base 
[req-1710e7e5-9073-47f1-8ae8-1e68c65272c9 855c20651d244348b10c91d907aa59ca - - 
- -] Defaulting the value of the field 'projects' to None in FlavorPayload due 
to 'Cannot call _load_projects on orphaned Flavor object' populate_schema 
/usr/lib/python2.7/site-packages/nova/notifications/objects/base.py:125
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager 
[req-1710e7e5-9073-47f1-8ae8-1e68c65272c9 855c20651d244348b10c91d907aa59ca - - 
- -] [instance: 5b8ae80d-7e33-4099-8732-905355cee045] Setting instance vm_state 
to ERROR: BuildAbortException: Build of instance 
5b8ae80d-7e33-4099-8732-905355cee045 aborted: Insufficient compute resources: 
Free memory 1141.00 MB < requested 2048 MB.
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045] Traceback (most recent call last):
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7142, in 
_error_out_instance_on_exception
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045] yield
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045]   File 
"/usr/lib/python2.7/site-packages/nova/fh/compute/manager.py", line 700, in 
rebuild_instance
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045] instance_uuid=instance.uuid, 
reason=e.format_message())
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045] BuildAbortException: Build of instance 
5b8ae80d-7e33-4099-8732-905355cee045 aborted: Insufficient compute resources: 
Free memory 1141.00 MB < requested 2048 MB.
2018-08-01 16:21:45.747 41514 ERROR nova.compute.manager [instance: 
5b8ae80d-7e33-4099-8732-905355cee045]

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784983

Title:
  we should not set instance to ERROR state when rebuild_claim fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a compute node is down, we evacuate the instances located on that
  compute node. In a concurrent scenario, several instances select the same
  destination node. Unfortunately, the memory is not enough for some
  instances, so the destination node raises the ComputeResourcesUnavailable
  exception and finally sets the instance to the error state. But I think
  that on a ComputeResourcesUnavailable exception we should not set the
  instance to the error state; in fact the instance remains on the source
  node.

  Steps to reproduce
  ==
  * Create many instances on one source node, while the destination node has
  little resource, such as memory.
  * Power off the compute node or stop the compute service on that node.
  * Concurrently evacuate all instances on the source node, specifying the
  destination node.
  * You will find one or more instances in the error state.

  
  Expected result
  ===
  No instance should end up in the error state when there are not enough resources.

  Actual result
  =
  Some instances are in the error state.

  Environment
  ===
  Pike release, but I found the issue also exists on the master branch.

  
  Logs & Configs
  ==
  2018-08-01 16:21:45.739 41514 DEBUG nova.notifications.objects.base 
[req-1710e7e5-9073-47f1-8ae8-1e68c65272c9 855c20651d244348b10c91d907aa59ca - - 
- -] Defaulting the value of the field 'projects' to None in F

[Yahoo-eng-team] [Bug 1775758] Re: Deprecated auth_url entries in Neutron Queens install guide

2018-08-01 Thread Mathew Vensel
Brian,

While I understand the comment, I don't believe all the documentation
links were updated or committed: I went through the current Queens
guide for Ubuntu today and it still shows information identical to
Nick's report above.

https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens
https://docs.openstack.org/neutron/queens/install/
https://docs.openstack.org/neutron/queens/install/install-ubuntu.html
https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html#configure-the-common-component

-Mathew

** Changed in: neutron
   Status: Invalid => Confirmed

** Changed in: neutron
   Status: Confirmed => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775758

Title:
  Deprecated auth_url entries in Neutron Queens install guide

Status in neutron:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way:

  The Neutron installation guides use old auth_uri / auth_url values
  pointing at two different keystone endpoints for authentication of the
  network service and the compute service. This occurs within the
  controller node and compute node parts of the installation guide.
  Following the current guide then fails at the Verify step with error
  "Failed to retrieve extensions list from Network API"

  - [x] I have a fix to the document that I can paste below including
  example: input and output.

  input 1 (neutron.conf): 
  auth_uri = http://controller:5000
  auth_url = http://controller:35357 

  input 2 (nova.conf):
  auth_url = http://controller:35357

  output:
  auth_url = http://controller:5000

  I changed the values to the above output and the Verify step
  succeeded.


  ---
  Release: 12.0.3.dev24 on 2018-06-07 22:47
  SHA: 2206636feca043a9ab958010a00641f92957e8a5
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/controller-install-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/controller-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775758/+subscriptions



[Yahoo-eng-team] [Bug 1777422] Re: Resource tracker periodic task taking a very long time

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587636
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b5b7d86bb04f92d21cf954cd6b3463c9fcc637e6
Submitter: Zuul
Branch:master

commit b5b7d86bb04f92d21cf954cd6b3463c9fcc637e6
Author: Matt Riedemann 
Date:   Tue Jul 31 17:26:47 2018 -0400

Make ResourceTracker.stats node-specific

As of change I6827137f35c0cb4f9fc4c6f753d9a035326ed01b in
Ocata, the ResourceTracker manages multiple compute nodes
via its "compute_nodes" variable, but the "stats" variable
was still being shared across all nodes, which leads to
leaking stats across nodes in an ironic deployment where
a single nova-compute service host is managing multiple
ironic instances (nodes).

This change makes ResourceTracker.stats node-specific
which fixes the ironic leak but also allows us to remove
the stats deepcopy while iterating over instances which
should improve performance for single-node deployments with
potentially a large number of instances, i.e. vCenter.

Change-Id: I0b9e5b711878fa47ba90e43c0b41437b57cf8ef6
Closes-Bug: #1784705
Closes-Bug: #1777422


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777422

Title:
  Resource tracker periodic task taking a very long time

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We have 250 instances on a compute node and the resource tracker
  periodic task is taking very long:

  2018-06-17 10:30:56.194 1658 DEBUG oslo_concurrency.lockutils [req-
  fb2573f9-3862-45db-b546-7a00fdd9a871 - - - - -] Lock
  "compute_resources" released by
  "nova.compute.resource_tracker._update_available_resource" :: held
  10.666s inner /usr/lib/python2.7/dist-
  packages/oslo_concurrency/lockutils.py:288

  This is due to the deepcopy. This copies the structure N times per
  iteration, once for each instance. This is very costly.
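
  An illustrative micro-benchmark of the pattern (not nova's actual code):
  deep copying the stats structure once per instance means N copies per
  periodic run, which is what dominates the 10+ second lock hold time above.

  import copy
  import time

  stats = {'stat_%d' % i: i for i in range(1000)}  # stand-in stats dict
  instances = range(250)                           # 250 instances on the node

  start = time.time()
  for _ in instances:
      working = copy.deepcopy(stats)               # old pattern: copy per instance
      working['num_instances'] = working.get('num_instances', 0) + 1
  print('deepcopy per instance: %.3fs' % (time.time() - start))

  start = time.time()
  for _ in instances:
      stats['num_instances'] = stats.get('num_instances', 0) + 1  # no copy
  print('update in place:       %.3fs' % (time.time() - start))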

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777422/+subscriptions



[Yahoo-eng-team] [Bug 1784705] Re: ResourceTracker.stats can leak across multiple ironic nodes

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587636
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b5b7d86bb04f92d21cf954cd6b3463c9fcc637e6
Submitter: Zuul
Branch:master

commit b5b7d86bb04f92d21cf954cd6b3463c9fcc637e6
Author: Matt Riedemann 
Date:   Tue Jul 31 17:26:47 2018 -0400

Make ResourceTracker.stats node-specific

As of change I6827137f35c0cb4f9fc4c6f753d9a035326ed01b in
Ocata, the ResourceTracker manages multiple compute nodes
via its "compute_nodes" variable, but the "stats" variable
was still being shared across all nodes, which leads to
leaking stats across nodes in an ironic deployment where
a single nova-compute service host is managing multiple
ironic instances (nodes).

This change makes ResourceTracker.stats node-specific
which fixes the ironic leak but also allows us to remove
the stats deepcopy while iterating over instances which
should improve performance for single-node deployments with
potentially a large number of instances, i.e. vCenter.

Change-Id: I0b9e5b711878fa47ba90e43c0b41437b57cf8ef6
Closes-Bug: #1784705
Closes-Bug: #1777422


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784705

Title:
  ResourceTracker.stats can leak across multiple ironic nodes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  A single nova-compute service host can manage multiple ironic nodes,
  which creates multiple ComputeNode records per compute service host,
  and ironic instances are 1:1 with each compute node.

  Before change https://review.openstack.org/#/c/398473/ in Ocata, the
  ComputeManager would manage multiple ResourceTracker instances, one
  per compute node (so one per ironic instance managed by that host). As
  a result of that change, the ComputeManager manages a single
  ResourceTracker instance, and the ResourceTracker's compute_node entry
  was changed to a dict, so the RT could manage multiple compute nodes
  (one per ironic instance).

  The problem is that the ResourceTracker.stats variable was left "shared"
  across all compute nodes being managed by the single RT. This can cause
  problems in the ResourceTracker._update_usage_from_instance() method,
  which updates the stats and then assigns them to a compute node record,
  so it could leak/accumulate stats information belonging to a different
  node.

  The compute node stats are used by the ComputeCapabilitiesFilter in
  the scheduler, so it could be possible for compute node B to report
  node capabilities which only apply to node A.
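
  A minimal illustration of the leak (hypothetical structure, not the
  ResourceTracker itself): with one shared stats dict, whatever was recorded
  while processing node A is still present when node B's record is built;
  keying the stats by nodename avoids that.

  shared_stats = {}
  stats_by_node = {}

  def record(stats, instance):
      stats['num_instances'] = stats.get('num_instances', 0) + 1
      stats['capability'] = instance['capability']
      return dict(stats)  # what would be assigned to the compute node record

  nodes = [('node-a', {'capability': 'gpu'}),
           ('node-b', {'capability': 'plain'})]

  # Shared dict: node-b's record claims two instances, mixing in node-a's usage.
  print([(n, record(shared_stats, i)) for n, i in nodes])

  # Node-specific dicts: each record only reflects its own node.
  print([(n, record(stats_by_node.setdefault(n, {}), i)) for n, i in nodes])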

  This was discovered during code review of this change:

  
https://review.openstack.org/#/c/576099/2/nova/compute/resource_tracker.py@1130

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784705/+subscriptions



[Yahoo-eng-team] [Bug 1784953] [NEW] Useful image properties in glance - os_shutdown_timeout is not documented

2018-08-01 Thread Matt Riedemann
Public bug reported:

- [x] This is a doc addition request.

The "os_shutdown_timeout" image property, used by nova, is not
documented. It's in the metadefs though:

https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/etc/metadefs
/compute-guest-shutdown.json#L13

"By default, guests will be given 60 seconds to perform a graceful
shutdown. After that, the VM is powered off.  This property allows
overriding the amount of time (unit: seconds) to allow a guest OS to
cleanly shut down before power off. A value of 0 (zero) means the guest
will be powered off immediately with no opportunity for guest OS clean-
up."

---
Release:  on 2018-08-01 16:52
SHA: 0b24dbd620f88b4d36bf6e0f8975f10aa8709b86
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/admin/useful-image-properties.rst
URL: https://docs.openstack.org/glance/latest/admin/useful-image-properties.html

** Affects: glance
 Importance: Medium
 Status: Triaged


** Tags: documentation low-hanging-fruit

** Changed in: glance
   Status: New => Triaged

** Changed in: glance
   Importance: Undecided => Medium

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1784953

Title:
  Useful image properties in glance - os_shutdown_timeout is not
  documented

Status in Glance:
  Triaged

Bug description:
  - [x] This is a doc addition request.

  The "os_shutdown_timeout" image property, used by nova, is not
  documented. It's in the metadefs though:

  
https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/etc/metadefs
  /compute-guest-shutdown.json#L13

  "By default, guests will be given 60 seconds to perform a graceful
  shutdown. After that, the VM is powered off.  This property allows
  overriding the amount of time (unit: seconds) to allow a guest OS to
  cleanly shut down before power off. A value of 0 (zero) means the
  guest will be powered off immediately with no opportunity for guest OS
  clean-up."

  ---
  Release:  on 2018-08-01 16:52
  SHA: 0b24dbd620f88b4d36bf6e0f8975f10aa8709b86
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/admin/useful-image-properties.rst
  URL: 
https://docs.openstack.org/glance/latest/admin/useful-image-properties.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1784953/+subscriptions



[Yahoo-eng-team] [Bug 1784715] Re: Configure resize in nova

2018-08-01 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784715

Title:
  Configure resize in nova

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  ---
  Release: 17.0.6.dev55 on 2018-07-30 19:28
  SHA: 922b32a5de8e23473832cd70f5734a54633e91b8
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/configuration/resize.rst
  URL: https://docs.openstack.org/nova/queens/admin/configuration/resize.html

  The documentation for Openstack Queens on how to resize an instance
  points KVM users to: -

  https://docs.openstack.org/user-
  guide/cli_change_the_size_of_your_server.html

  However this URL redirects to the general page: -

  https://docs.openstack.org/queens/user/

  which doesn't reveal how to resize instances. I believe this qualifies
  as a dead-link that needs revising.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784715/+subscriptions



[Yahoo-eng-team] [Bug 1784950] [NEW] get_device_details RPC fails if host not specified

2018-08-01 Thread Matthew Edmonds
Public bug reported:

An optional (defaults to None) host argument was added to the
get_device_details RPC method a long time ago [1] but a recent change
[2] to the master branch has made that no longer really optional, at
least for the pvm_sea agent from openstack/networking-powervm, since not
passing it will cause VIF plugging to timeout with an error in the
neutron logs stating "Device %s has no active binding in host None".

This can easily be fixed in openstack/networking-powervm by passing the
host argument, but I expect that neutron also needs to bump the version
for neutron.plugins.ml2.rpc.RpcCallbacks to reflect that host is no
longer optional by removing the "=None" default (since it doesn't work
anymore).

[1] f7064f2b6c6ba1d0ab5f9872b2d5ad7969a64e7b
[2] 01bdb47199468805b714ce4c00c7492951267585
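
A hedged sketch of the workaround (the agent class here is hypothetical; it
only assumes neutron's agent-side PluginApi helper, which accepts a host
keyword):

  from neutron.agent import rpc as agent_rpc
  from neutron.common import topics

  class SeaAgentSketch(object):
      """Stand-in for the networking-powervm pvm_sea agent."""

      def __init__(self, context, agent_id, host):
          self.context = context
          self.agent_id = agent_id
          self.host = host
          self.plugin_rpc = agent_rpc.PluginApi(topics.PLUGIN)

      def _get_device_details(self, device):
          # Without host=..., neutron now logs "Device %s has no active
          # binding in host None" and VIF plugging times out.
          return self.plugin_rpc.get_device_details(
              self.context, device, self.agent_id, host=self.host)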

** Affects: networking-powervm
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: networking-powervm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784950

Title:
  get_device_details RPC fails if host not specified

Status in networking-powervm:
  New
Status in neutron:
  New

Bug description:
  An optional (defaults to None) host argument was added to the
  get_device_details RPC method a long time ago [1] but a recent change
  [2] to the master branch has made that no longer really optional, at
  least for the pvm_sea agent from openstack/networking-powervm, since
  not passing it will cause VIF plugging to timeout with an error in the
  neutron logs stating "Device %s has no active binding in host None".

  This can easily be fixed in openstack/networking-powervm by passing
  the host argument, but I expect that neutron also needs to bump the
  version for neutron.plugins.ml2.rpc.RpcCallbacks to reflect that host
  is no longer optional by removing the "=None" default (since it
  doesn't work anymore).

  [1] f7064f2b6c6ba1d0ab5f9872b2d5ad7969a64e7b
  [2] 01bdb47199468805b714ce4c00c7492951267585

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-powervm/+bug/1784950/+subscriptions



[Yahoo-eng-team] [Bug 1784782] Re: API: flavors - Cannot list all public and private flavors by default

2018-08-01 Thread Matt Riedemann
Marked as incomplete since I'm not sure what you're saying is the bug.
Please clarify. I'll fix the API reference docs in the meantime.

** Changed in: nova
   Status: Opinion => Incomplete

** Changed in: nova
   Importance: Wishlist => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784782

Title:
  API: flavors - Cannot list all public and private flavors by default

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The API doesn't return all public and private flavors by default.
  Effectively only public flavors are listed, even though the default policy
  rule authorizes it.

  Here I'm using the 'admin' user/project along with no explicit policy,
  therefore relying on the default built-in policy, which I believe
  translates to "compute_extension:flavor_access:addTenantAccess":
  "rule:admin_api".

  $ openstack flavor list --all
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+
  | ID                                   | Name        | RAM | Disk | Ephemeral | VCPUs | Is Public |
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+
  | 1                                    | flavor-tiny |  64 |    0 |         0 |     1 | True      |
  | a1fec2c4-2f18-422b-977d-c7e2046cfaec | test1       |   1 |    1 |         0 |     1 | False     |
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+

  # The default flavors list returns only the public ones:
  $ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:8774/v2/flavors | python -mjson.tool
  {
      "flavors": [
          {
              "id": "1",
              "links": [
                  {
                      "href": "http://192.0.2.6:8774/v2/flavors/1",
                      "rel": "self"
                  },
                  {
                      "href": "http://192.0.2.6:8774/flavors/1",
                      "rel": "bookmark"
                  }
              ],
              "name": "flavor-tiny"
          }
      ]
  }

  $ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:8774/v2/flavors?is_public=false | python -mjson.tool
  {
      "flavors": [
          {
              "id": "a1fec2c4-2f18-422b-977d-c7e2046cfaec",
              "links": [
                  {
                      "href": "http://192.0.2.6:8774/v2/flavors/a1fec2c4-2f18-422b-977d-c7e2046cfaec",
                      "rel": "self"
                  },
                  {
                      "href": "http://192.0.2.6:8774/flavors/a1fec2c4-2f18-422b-977d-c7e2046cfaec",
                      "rel": "bookmark"
                  }
              ],
              "name": "test1"
          }
      ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784782/+subscriptions



[Yahoo-eng-team] [Bug 1784782] Re: API: flavors - Cannot list all public and private flavors by default

2018-08-01 Thread Matt Riedemann
Is your bug really about saying that admins shouldn't have to pass
is_public=None *by default* and is_public=None should just be the
default behavior for admins if the is_public query parameter isn't
provided? If so, that's not a bug, and would require a microversion
since it's a behavior change to the API.
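
For reference, a small sketch (not from the report; it assumes OS_TOKEN and
OS_HOST are set as in the curl examples quoted below) of how an admin can
already list both public and private flavors today by passing is_public=None
explicitly:

  import os
  import requests

  resp = requests.get(
      'http://%s:8774/v2/flavors' % os.environ['OS_HOST'],
      headers={'X-Auth-Token': os.environ['OS_TOKEN']},
      params={'is_public': 'None'})  # 'None' asks for public and private
  for flavor in resp.json()['flavors']:
      print(flavor['id'], flavor['name'])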

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784782

Title:
  API: flavors - Cannot list all public and private flavors by default

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The API doesn't return all public and private flavors by default.
  Effectively only public flavors are listed, even though the default policy
  rule authorizes it.

  Here I'm using the 'admin' user/project along with no explicit policy,
  therefore relying on the default built-in policy, which I believe
  translates to "compute_extension:flavor_access:addTenantAccess":
  "rule:admin_api".

  $ openstack flavor list --all
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+
  | ID                                   | Name        | RAM | Disk | Ephemeral | VCPUs | Is Public |
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+
  | 1                                    | flavor-tiny |  64 |    0 |         0 |     1 | True      |
  | a1fec2c4-2f18-422b-977d-c7e2046cfaec | test1       |   1 |    1 |         0 |     1 | False     |
  +--------------------------------------+-------------+-----+------+-----------+-------+-----------+

  # The default flavors list returns only the public ones:
  $ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:8774/v2/flavors | python -mjson.tool
  {
      "flavors": [
          {
              "id": "1",
              "links": [
                  {
                      "href": "http://192.0.2.6:8774/v2/flavors/1",
                      "rel": "self"
                  },
                  {
                      "href": "http://192.0.2.6:8774/flavors/1",
                      "rel": "bookmark"
                  }
              ],
              "name": "flavor-tiny"
          }
      ]
  }

  $ curl -s -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" http://${OS_HOST}:8774/v2/flavors?is_public=false | python -mjson.tool
  {
      "flavors": [
          {
              "id": "a1fec2c4-2f18-422b-977d-c7e2046cfaec",
              "links": [
                  {
                      "href": "http://192.0.2.6:8774/v2/flavors/a1fec2c4-2f18-422b-977d-c7e2046cfaec",
                      "rel": "self"
                  },
                  {
                      "href": "http://192.0.2.6:8774/flavors/a1fec2c4-2f18-422b-977d-c7e2046cfaec",
                      "rel": "bookmark"
                  }
              ],
              "name": "test1"
          }
      ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784782/+subscriptions



[Yahoo-eng-team] [Bug 1783512] Re: Record user context when perform service actions

2018-08-01 Thread melanie witt
This sounds like an RFE, not a bug. This would be something to propose
as a small spec or a specless blueprint if it's simple enough.

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1783512

Title:
  Record user context when perform service actions

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Change 13e73cb7490fd74583c91b99dad288cd891dce59 added granularity to the
  os-services API so that different service actions can be controlled for
  different users. As a use case, it could be that the user with the largest
  capability would like to know who performed what kind of action on which
  service. So it might be good to record who performed the action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1783512/+subscriptions



[Yahoo-eng-team] [Bug 1770712] Re: It would be nice if cloud-init provides full version in logs

2018-08-01 Thread Chad Smith
Won't Fix for artful as it is EOL as of June 20th.

** Tags removed: verification-needed verification-needed-artful 
verification-needed-xenial
** Tags added: verification-done verification-done-artful 
verification-done-xenial

** Tags removed: verification-done-artful verification-needed-bionic
** Tags added: verification-done-bionic

** Description changed:

+ === Begin SRU Template ===
+ [Impact]
+ Cloud-init logs should now contain full packaged version of cloud-init on 
xenial, artful and bionic.
+ 
+ [Test Case]
+ # We should see specific version and patch information
+ for series in xenial artful bionic;
+ do
+echo '=== BEGIN ' $series ' ==='
+ref=$series-proposed;
+lxc delete test-$series --force;
+lxc-proposed-snapshot -p -P $series $ref | egrep 'Creating|cloud-init';
+lxc init $ref test-$series;
+lxc start test-$series;
+packaged_version=`lxc exec test-$series -- dpkg-query --show -f 
'${version}' cloud-init`;
+lxc exec test-$series -- grep $packaged_version /var/log/cloud-init.log;
+lxc exec test-$series -- cloud-init --version;
+ done
+ 
+ # Also, cloud-init --version should show the packaged version
+ # it should contain a -0ubuntu portion.
+ 
+ $ cloud-init --version
+ /usr/bin/cloud-init 18.3-9-g2e62cb8a-0ubuntu1
+ 
+ [Regression Potential]
+ This really should be low chance of regression.  The chance would be
+ if something is running 'cloud-init --version' and parsing the output,
+ or parsing the output of /var/log/cloud-init.log (or the console log).
+ 
+ Such specific parsing of a log seems brittle anyway. Parsing output
+ of --version that expected to not have a -0ubuntuX in it would need to
+ be updated.
+ 
+ [Other Info]
+ Upstream commit at
+   https://git.launchpad.net/cloud-init/commit/?id=525a9e8f
+ 
+ === End SRU Template ===
+ 
+ 
  [Test Case]
  # We should see specific version and patch information
  $ packaged_version=$(dpkg-query --show -f '${version}' cloud-init)
  $ grep $packaged_version /var/log/cloud-init.log  # Expect to stage header 
logs
  ...
  2018-07-10 19:33:16,406 - util.py[DEBUG]: Cloud-init v. 
18.3-9-g2e62cb8a-0ubuntu1 running 'init-local' at Tue, 10 Jul 2018 19:33:16 
+. Up 1.00 seconds.
  
  # Also, cloud-init --version should show the packaged version
  # it should contain a -0ubuntu portion.
  
  $ cloud-init --version
  /usr/bin/cloud-init 18.3-9-g2e62cb8a-0ubuntu1
  
  [Regression Potential]
  This really should be low chance of regression.  The chance would be
  if something is running 'cloud-init --version' and parsing the output,
  or parsing the output of /var/log/cloud-init.log (or the console log).
  
  Such specific parsing of a log seems brittle anyway. Parsing output
  of --version that expected to not have a -0ubuntuX in it would need to
  be updated.
  
  [Other Info]
  Upstream commit at
-   https://git.launchpad.net/cloud-init/commit/?id=525a9e8f
+   https://git.launchpad.net/cloud-init/commit/?id=525a9e8f
  
  === End SRU Template ===
  
  
+ === Original Description ===
  Cloud-init rsyslog has the major version of cloud-init:
  
  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v. 18.2
  running 'init-local' at Fri, 11 May 2018 17:40:47 +. Up 15.63
  seconds.
  
  However, it would be nice if it places the whole version, so that we can
  now exactly what version of cloud-init its running, e.g:
  
  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v. 18.2
  (27-g6ef92c98-0ubuntu1~18.04.1) running 'init-local' at Fri, 11 May 2018
  17:40:47 +. Up 15.63 seconds.

** Changed in: cloud-init (Ubuntu Artful)
   Status: Fix Committed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1770712

Title:
  It would be nice if cloud-init provides full version in logs

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Committed
Status in cloud-init source package in Artful:
  Won't Fix
Status in cloud-init source package in Bionic:
  Fix Committed
Status in cloud-init source package in Cosmic:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact]
  Cloud-init logs should now contain full packaged version of cloud-init on 
xenial, artful and bionic.

  [Test Case]
  # We should see specific version and patch information
  for series in xenial artful bionic;
  do
 echo '=== BEGIN ' $series ' ==='
 ref=$series-proposed;
 lxc delete test-$series --force;
 lxc-proposed-snapshot -p -P $series $ref | egrep 'Creating|cloud-init';
 lxc init $ref test-$series;
 lxc start test-$series;
 packaged_version=`lxc exec test-$series -- dpkg-query --show -f 
'${version}' cloud-init`;
 lxc exec test-$series -- grep $packaged_version /var/log/cloud-init.log;
 lxc exec test-$series -- cloud-init --version;
  done

  # Also, cl

[Yahoo-eng-team] [Bug 1784074] Re: Instances end up with no cell assigned in instance_mappings

2018-08-01 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/queens
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784074

Title:
  Instances end up with no cell assigned in instance_mappings

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  There have been situations where, due to an unrelated issue such as an
  RPC or DB problem, the nova_api instance_mappings table can end up
  with instances that have cell_id set to NULL, which can cause annoying
  and weird behaviour such as undeletable instances, etc.

  This seems to be an issue only during times when these external
  infrastructure components had issues. I have come up with the
  following script, which loops over all cells, checks where each instance
  is, and prints out a MySQL query to run to fix it.

  This would be nice to have as a nova-manage cell_v2 command to help if
  any other users run into this; unfortunately I'm a bit short on time,
  so I don't have time to nova-ify it, but here it is:

  
  #!/usr/bin/env python

  import urlparse

  import pymysql

  # Connect to the API database
  api_conn = pymysql.connect(host='', port=3306, user='nova_api', passwd='xxx', db='nova_api')
  api_cur = api_conn.cursor()

  def _get_conn(db):
      parsed_url = urlparse.urlparse(db)
      conn = pymysql.connect(host=parsed_url.hostname, user=parsed_url.username, passwd=parsed_url.password, db=parsed_url.path[1:])
      return conn.cursor()

  # Get list of all cells
  api_cur.execute("SELECT uuid, name, database_connection FROM cell_mappings")
  CELLS = [{'uuid': uuid, 'name': name, 'db': _get_conn(db)} for uuid, name, db in api_cur.fetchall()]

  # Get list of all unmapped instances
  api_cur.execute("SELECT instance_uuid FROM instance_mappings WHERE cell_id IS NULL")
  print "Number of unmapped instances: %s" % api_cur.rowcount

  # Go over all unmapped instances
  for (instance_uuid,) in api_cur.fetchall():
      instance_cell = None

      # Check which cell contains this instance
      for cell in CELLS:
          cell['db'].execute("SELECT id FROM instances WHERE uuid = %s", (instance_uuid,))

          if cell['db'].rowcount != 0:
              instance_cell = cell
              break

      # Update to the correct cell
      if instance_cell:
          print "UPDATE instance_mappings SET cell_id = '%s' WHERE instance_uuid = '%s'" % (instance_cell['uuid'], instance_uuid)
          continue

      # If we reach this point, it's not in any cell?!
      print "%s: not found in any cell" % (instance_uuid)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784074/+subscriptions



[Yahoo-eng-team] [Bug 1737854] Re: Wrong content in "paginated collections" API guide page

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/528180
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=22ab1b62b311f238e91c48253b2c8fa9ad9fa328
Submitter: Zuul
Branch:master

commit 22ab1b62b311f238e91c48253b2c8fa9ad9fa328
Author: chenxing 
Date:   Fri Dec 15 15:23:09 2017 +0800

Fix the incorrect description and sample

This patch fixes the following:
 * The description says, "Here, a subset of metadata items
   are presented within the image." This is a server sample,
   not an image.

 * The sample itself has the wrong id in the "self" link and
   there are no metadata_links with a server object (probably
   a carry over from an image docs sample).

Change-Id: Idb1e5243d6d072e020e1532bec603e5cd219702e
Closes-Bug: #1737854


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737854

Title:
  Wrong content in "paginated collections" API guide page

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are a few issues in this doc:

  https://developer.openstack.org/api-
  guide/compute/paginated_collections.html

  1. The bug link doesn't work; it redirects back to the same page (not
  sure why).

  2. The description of the example says:

  "The following examples illustrate three pages in a collection of
  images."

  But it's clearly showing examples of paging through servers, not
  images (likely this doc was copied from some glance docs and not fully
  updated).

  3. The sample at the bottom has a few issues:

  a) The description says, "Here, a subset of metadata items are
  presented within the image." - again, this is a server sample, not an
  image.

  b) The sample itself has the (1) wrong id in the "self" link and (2)
  there are no metadata_links with a server object (again, probably a
  carry over from an image docs sample).

  {
      "server": {
          "id": "52415800-8b69-11e0-9b19-734f6f006e54",
          "name": "Elastic",
          "metadata": {
              "Version": "1.3",
              "ServiceType": "Bronze"
          },
          "metadata_links": [
              {
                  "rel": "next",
                  "href": "https://servers.api.openstack.org/v2.1/servers/fc55acf4-3398-447b-8ef9-72a42086d775/meta?marker=ServiceType"
              }
          ],
          "links": [
              {
                  "rel": "self",
                  "href": "https://servers.api.openstack.org/v2.1/servers/fc55acf4-3398-447b-8ef9-72a42086d775"
              }
          ]
      }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737854/+subscriptions



[Yahoo-eng-team] [Bug 1784577] Re: Some allocation candidate tests for sharing providers fail in python 3.6 (and work in python 3.5)

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587700
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0b7946f5a3dd62003bdfead068dd58bd58ed6d18
Submitter: Zuul
Branch:master

commit 0b7946f5a3dd62003bdfead068dd58bd58ed6d18
Author: Tetsuro Nakamura 
Date:   Tue Jul 31 19:10:56 2018 +0900

Ensure the order of AllocationRequestResources

Getting allocation candidates with sharing providers, placement
creates a list of AllocationRequestResources to get all the
possible combinations of resource providers in the same aggregate.

However, the order of the list was arbitrary, which could cause
a bug later in duplicate check of the combination.

This patch ensures that the list is ordered by the resource
class id.

Note:
  This bug is only exposed when it is tested with python3.6,
  where order-preserving aspect is added to the dict object.

Change-Id: I2e236fbbc3a4cfd3bd66d50198de643e06d62331
Closes-Bug: #1784577
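
A toy illustration of the ordering problem the fix addresses (plain tuples,
not placement's actual objects): two logically identical combinations compare
as different values when their members come out in different dict-iteration
orders, so a duplicate check misses them unless a deterministic order is
imposed first.

  combo_a = (('DISK_GB', 'ss1'), ('SRIOV_NET_VF', 'ss2'))
  combo_b = (('SRIOV_NET_VF', 'ss2'), ('DISK_GB', 'ss1'))

  print(combo_a == combo_b)                                # False: not caught
  print(tuple(sorted(combo_a)) == tuple(sorted(combo_b)))  # True once ordered

  seen = set()
  unique = []
  for combo in (combo_a, combo_b):
      key = tuple(sorted(combo))
      if key not in seen:
          seen.add(key)
          unique.append(combo)
  print(unique)  # only one of the two logical duplicates survives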


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784577

Title:
  Some allocation candidate tests for sharing providers fail in python
  3.6 (and work in python 3.5)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When running the nova functional tests under python 3.6 the
  
nova.tests.functional.api.openstack.placement.db.test_allocation_candidates.AllocationCandidatesTestCase.test_all_sharing_providers.*
  tests (there are 3) all fail because incorrect results are produced on
  the call to rp_obj.AllocationCandidates.get_by_requests:

  b"reference = [[('ss1', 'DISK_GB', 1500),"
  b"  ('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss1', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'DISK_GB', 1500),"
  b"  ('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss1', 'SRIOV_NET_VF', 1),"
  b"  ('ss2', 'DISK_GB', 1500)],"
  b" [('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'DISK_GB', 1500),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)]]"
  b"actual= [[('ss1', 'DISK_GB', 1500),"
  b"  ('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss1', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'DISK_GB', 1500),"
  b"  ('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'DISK_GB', 1500),"
  b"  ('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss1', 'SRIOV_NET_VF', 1),"
  b"  ('ss2', 'DISK_GB', 1500)],"
  b" [('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'DISK_GB', 1500),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)],"
  b" [('ss1', 'IPV4_ADDRESS', 2),"
  b"  ('ss2', 'DISK_GB', 1500),"
  b"  ('ss2', 'SRIOV_NET_VF', 1)]]"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784577/+subscriptions



[Yahoo-eng-team] [Bug 1784879] [NEW] Neutron doesn't update Designate with some use cases

2018-08-01 Thread Kobi Samoray
Public bug reported:

The Neutron and Designate integration covers use cases for ports which are
exposed via floating IPs, or which reside on provider networks.
However, the following use cases aren't covered:
1. Ports that reside on a no-NAT network, which is routable from outside the
OpenStack deployment.
2. Ports on any network which need exposure via DNS: e.g. an app that uses
FQDNs to communicate between its components.

As the no-NAT attribute belongs to the router, and not to the network, it
might be tricky to detect port exposure via this attribute: a user could
attach a network with some ports on it to a no-NAT router, and so those ports
become exposed even though they weren't at creation time.
Or a router might be changed from NAT to no-NAT and vice versa.
To simplify, I would suggest adding an attribute to the network via an
extension which would indicate that this network's ports should be published
in DNS.
So for networks which need exposure via DNS, we could flag those networks and
force DNS publishing.
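
A rough sketch of what such an extension attribute could look like (the
attribute name is made up here, not an existing neutron API): a boolean flag
on the network that tells the DNS integration to publish the network's ports
regardless of floating IP / NAT status.

  RESOURCE_ATTRIBUTE_MAP = {
      'networks': {
          'dns_publish_ports': {
              'allow_post': True,
              'allow_put': True,
              'default': False,
              'convert_to': bool,  # stand-in for neutron's boolean converter
              'is_visible': True,
              'enforce_policy': True,
          },
      },
  }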

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784879

Title:
  Neutron doesn't update Designate with some use cases

Status in neutron:
  New

Bug description:
  The Neutron and Designate integration covers use cases for ports which are
  exposed via floating IPs, or which reside on provider networks.
  However, the following use cases aren't covered:
  1. Ports that reside on a no-NAT network, which is routable from outside
  the OpenStack deployment.
  2. Ports on any network which need exposure via DNS: e.g. an app that uses
  FQDNs to communicate between its components.

  As the no-NAT attribute belongs to the router, and not to the network, it
  might be tricky to detect port exposure via this attribute: a user could
  attach a network with some ports on it to a no-NAT router, and so those
  ports become exposed even though they weren't at creation time.
  Or a router might be changed from NAT to no-NAT and vice versa.
  To simplify, I would suggest adding an attribute to the network via an
  extension which would indicate that this network's ports should be
  published in DNS.
  So for networks which need exposure via DNS, we could flag those networks
  and force DNS publishing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1784879/+subscriptions



[Yahoo-eng-team] [Bug 1784874] [NEW] ResourceTracker doesn't clean up compute_nodes or stats entries

2018-08-01 Thread Matt Riedemann
Public bug reported:

This was noted in review:

https://review.openstack.org/#/c/587636/4/nova/compute/resource_tracker.py@141

The ResourceTracker.compute_nodes and ResourceTracker.stats (and
old_resources) entries only grow and are never cleaned up as we
rebalance nodes or as nodes are deleted, which means they just take up
memory over time.

When we cleanup compute nodes here:

https://github.com/openstack/nova/blob/47ef500f4492c731ebfa33a12822ef6b5db4e7e2/nova/compute/manager.py#L7759

We should probably call a cleanup hook into the ResourceTracker to
cleanup those entries as well.
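
A hedged sketch of the kind of cleanup hook meant here (hypothetical helper,
not an existing nova method): when the compute node cleanup path notices a
node is gone, the tracker should forget everything it cached for that
nodename.

  def remove_node(rt, nodename):
      """Drop per-node state for a deleted or rebalanced compute node."""
      rt.compute_nodes.pop(nodename, None)
      rt.stats.pop(nodename, None)          # assumes stats keyed by nodename
      rt.old_resources.pop(nodename, None)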

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: ironic performance resource-tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784874

Title:
  ResourceTracker doesn't clean up compute_nodes or stats entries

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This was noted in review:

  https://review.openstack.org/#/c/587636/4/nova/compute/resource_tracker.py@141

  The ResourceTracker.compute_nodes and ResourceTracker.stats (and
  old_resources) entries only grow and are never cleaned up as we
  rebalance nodes or as nodes are deleted, which means they just take up
  memory over time.

  When we cleanup compute nodes here:

  
https://github.com/openstack/nova/blob/47ef500f4492c731ebfa33a12822ef6b5db4e7e2/nova/compute/manager.py#L7759

  We should probably call a cleanup hook into the ResourceTracker to
  cleanup those entries as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784837] [NEW] Test tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic fails in neutron-tempest-dvr-ha-multinode-full job

2018-08-01 Thread Slawek Kaplonski
Public bug reported:

I found that it failed at least 3 times during the last week:

http://logs.openstack.org/63/577463/4/check/neutron-tempest-dvr-ha-multinode-full/612982d/logs/testr_results.html.gz
http://logs.openstack.org/88/555088/21/check/neutron-tempest-dvr-ha-multinode-full/48acc8b/logs/testr_results.html.gz
http://logs.openstack.org/14/529814/25/check/neutron-tempest-dvr-ha-multinode-full/499a621/logs/testr_results.html.gz

It looks like there are problems connecting to the FIP, but in each of those
examples the failure error is different.

In one of the cases there is no console log from the instances
(http://logs.openstack.org/63/577463/4/check/neutron-tempest-dvr-ha-
multinode-full/612982d/logs/testr_results.html.gz); it would be good to
improve that in tempest as well.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: gate-failure l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784837

Title:
  Test
  
tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_in_tenant_traffic
  fails in neutron-tempest-dvr-ha-multinode-full job

Status in neutron:
  Confirmed

Bug description:
  I found that it failed at least 3 times during the last week:

  
http://logs.openstack.org/63/577463/4/check/neutron-tempest-dvr-ha-multinode-full/612982d/logs/testr_results.html.gz
  
http://logs.openstack.org/88/555088/21/check/neutron-tempest-dvr-ha-multinode-full/48acc8b/logs/testr_results.html.gz
  
http://logs.openstack.org/14/529814/25/check/neutron-tempest-dvr-ha-multinode-full/499a621/logs/testr_results.html.gz

  It looks like there are problems connecting to the FIP, but in each of
  those examples the failure error is different.

  In one of the cases there is no console log from the instances
  (http://logs.openstack.org/63/577463/4/check/neutron-tempest-dvr-ha-
  multinode-full/612982d/logs/testr_results.html.gz); it would be good
  to improve that in tempest as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1784837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784836] [NEW] Functional tests from neutron.tests.functional.db.migrations fails randomly

2018-08-01 Thread Slawek Kaplonski
Public bug reported:

Functional tests from the neutron.tests.functional.db.migrations module are
sometimes failing in the Neutron check queue.

Example of such failure: http://logs.openstack.org/50/533850/34/check
/neutron-functional/30ee01c/logs/testr_results.html.gz

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: db functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1784836

Title:
  Functional tests from neutron.tests.functional.db.migrations fails
  randomly

Status in neutron:
  Confirmed

Bug description:
  Functional tests from the neutron.tests.functional.db.migrations module
  are sometimes failing in the Neutron check queue.

  Example of such failure: http://logs.openstack.org/50/533850/34/check
  /neutron-functional/30ee01c/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1784836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700863] Re: Bug in rebuild function

2018-08-01 Thread Sylvain Bauza
There is already a Launchpad blueprint describing this feature request:
https://blueprints.launchpad.net/nova/+spec/volume-backed-server-rebuild
and a spec has been proposed: https://review.openstack.org/#/c/532407/

Closing the bug as Wishlist/Invalid.

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

** Changed in: nova
   Importance: High => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700863

Title:
  Bug in rebuild function

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova does not recreate the root volume when rebuilding an instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1700863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784826] [NEW] Guest remain in origin host after evacuate and unset force-down nova-compute

2018-08-01 Thread huanhongda
Public bug reported:

Description
===
If an instance is evacuated from a host whose nova-compute service was forced
down, the instance's guest will remain on the origin host after the force-down
flag is unset.
That's because unsetting force-down on the nova-compute service does not make
nova-compute destroy evacuated instances; it only destroys them when the
nova-compute service is restarted.

Steps to reproduce
==
1) Boot an instance on node-1.
2) Force down the nova-compute service on node-1.
   nova service-force-down node-1 nova-compute
3) Evacuate the instance to other host.
4) Unset force-down for node-1 nova-compute.
   nova service-force-down --unset node-1 nova-compute
5) Check the guest on node-1.
   virsh list

Expected result
===
The guest should be deleted on node-1.

Actual result
=
The guest still remains on node-1.

Environment
===
This bug was found in Newton. I think it also exists in master, as the code
for unsetting force-down on a service is the same as in Newton.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784826

Title:
  Guest remain in origin host after evacuate and unset force-down nova-
  compute

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  If an instance is evacuated from a host whose nova-compute service was
forced down, the instance's guest will remain on the origin host after the
force-down flag is unset.
  That's because unsetting force-down on the nova-compute service does not
make nova-compute destroy evacuated instances; it only destroys them when the
nova-compute service is restarted.

  Steps to reproduce
  ==
  1) Boot an instance on node-1.
  2) Force down the nova-compute service on node-1.
 nova service-force-down node-1 nova-compute
  3) Evacuate the instance to other host.
  4) Unset force-down for node-1 nova-compute.
 nova service-force-down --unset node-1 nova-compute
  5) Check the guest on node-1.
 virsh list

  Expected result
  ===
  The guest should be deleted on node-1.

  Actual result
  =
  The guest still remains on node-1.

  Environment
  ===
  This bug was found in Newton. I think it also exists in master, as the code
for unsetting force-down on a service is the same as in Newton.
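
  A simplified sketch of the behaviour described above; the method names
  mirror nova/compute/manager.py, but the bodies here are illustrative
  only, not the actual nova code:

    class ComputeManager(object):
        def init_host(self):
            # Runs only when the nova-compute service is (re)started,
            # which is why a restart removes the stale guest on node-1.
            self._destroy_evacuated_instances()

        def _destroy_evacuated_instances(self):
            # Find instances whose migration records say they were
            # evacuated away from this host and destroy the local guests.
            pass

  Unsetting force-down only updates the service record via the API, so this
  cleanup path never runs on the origin host until nova-compute restarts.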

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1782386] Re: compute node local_gb_used does not include swap disks

2018-08-01 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/585928
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e3df049cad2b7a264acbad4d6baecbc813546e5d
Submitter: Zuul
Branch:master

commit e3df049cad2b7a264acbad4d6baecbc813546e5d
Author: XiaohanZhang <15809181...@qq.com>
Date:   Thu Jul 26 10:07:50 2018 +0800

compute node local_gb_used include swap disks

The ComputeNode.local_gb_used value is set in the 
ResourceTracker._update_usage() method, based on

1. root_gb in the flavor
2. any disk overhead from the virt driver
3. ephemeral_gb in the flavor

The consideration of swap disk in the flavor was ignored.

This patch adds swap disk to the consideration.

Closes-bug: #1782386

Change-Id: I880e9daa6b97d73a0e33ac9a5bdae9bacfa89aaa


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782386

Title:
  compute node local_gb_used does not include swap disks

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The ComputeNode.local_gb_used value is set in the
  ResourceTracker._update_usage() method:

  
https://github.com/openstack/nova/blob/eb4f65a7951e921b1cd8d05713e144e72f2f254f/nova/compute/resource_tracker.py#L934

  Based on:

  1. root_gb in the flavor
  2. any disk overhead from the virt driver
  3. ephemeral_gb in the flavor

  It was added in this change in Grizzly:
  https://review.openstack.org/#/c/13182/

  However, the RT does not take swap disk into account. Swap accounting
  was fixed in the DiskFilter in Icehouse:

  https://review.openstack.org/#/c/51323/

  But the swap (MB) calculation is still missing from local_gb_used in
  the ResourceTracker (after all these years).

  Given how latent this bug is, it is low priority to fix. In the long
  term we should probably change the ResourceTracker to report usage from
  the placement API, since that correctly accounts for
  root_gb/ephemeral_gb/swap (MB) when posting DISK_GB allocations to
  placement.
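
  A rough sketch of the accounting change being described (the helper name
  disk_gb_for() is hypothetical and simplified, not the actual
  _update_usage() code; flavor swap is expressed in MB while the other disk
  fields are in GB):

    def disk_gb_for(flavor, driver_overhead_gb=0):
        # swap is stored in MB on the flavor, so convert it to GB here
        swap_gb = flavor['swap'] // 1024
        return (flavor['root_gb'] + flavor['ephemeral_gb'] +
                driver_overhead_gb + swap_gb)

    # Example: a flavor with 20G root, 0G ephemeral and 2048MB swap should
    # contribute 22G to local_gb_used rather than 20G.
    print(disk_gb_for({'root_gb': 20, 'ephemeral_gb': 0, 'swap': 2048}))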

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784798] [NEW] openstack mitaka evacuate fails and no clean up the evacuated instance xml in libvirt

2018-08-01 Thread Rock_Zhao
Public bug reported:

Description
===
For example:
There are compute nodes A, B and C.
Instance "vmtest" currently runs on A.

Steps to reproduce
==
1. nova host-evacuate A
2. Assume instance "vmtest" fails to evacuate on B with a libvirt error; the
instance XML for vmtest is still present in /etc/libvirt/qemu.
3. Run "nova host-evacuate A" again. If vmtest is scheduled on B again, an
exception is thrown: Instance instance-vmtest already exists.

Expected result
===
When the first evacuation of vmtest fails on B, the instance XML for vmtest
should be removed on B.

Actual result
=
If vmtest is scheduled on B again, an exception is thrown: Instance
instance-vmtest already exists.

Shouldn't the instance XML for vmtest be cleaned up on node B after the
failed evacuation?

Environment
===
openstack mitaka

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: evacuate mitaka

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784798

Title:
  openstack mitaka evacuate fails and no clean up the evacuated instance
  xml in libvirt

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  For example:
  There are compute nodes A, B and C.
  Instance "vmtest" currently runs on A.

  Steps to reproduce
  ==
  1. nova host-evacuate A
  2. Assume instance "vmtest" fails to evacuate on B with a libvirt error;
the instance XML for vmtest is still present in /etc/libvirt/qemu.
  3. Run "nova host-evacuate A" again. If vmtest is scheduled on B again, an
exception is thrown: Instance instance-vmtest already exists.

  Expected result
  ===
  When the first evacuation of vmtest fails on B, the instance XML for
vmtest should be removed on B.

  Actual result
  =
  If vmtest is scheduled on B again, an exception is thrown: Instance
instance-vmtest already exists.

  Shouldn't the instance XML for vmtest be cleaned up on node B after the
  failed evacuation?

  Environment
  ===
  openstack mitaka
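
  A minimal sketch (using python-libvirt directly) of the kind of cleanup
  being asked for on the failed destination host; the domain name is taken
  from the example above and error handling is kept to the bare minimum:

    import libvirt

    def undefine_leftover_guest(domain_name):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.lookupByName(domain_name)
            if not dom.isActive():
                dom.undefine()   # removes /etc/libvirt/qemu/<name>.xml
        except libvirt.libvirtError:
            pass                 # no such domain, nothing to clean up
        finally:
            conn.close()

    undefine_leftover_guest('instance-vmtest')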

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1784798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp