[Yahoo-eng-team] [Bug 1648714] [NEW] hook speedup for trunk details

2016-12-08 Thread Armando Migliaccio
Public bug reported:

http://lists.openstack.org/pipermail/openstack-dev/2016-December/108460.html

** Affects: neutron
 Importance: Low
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: In Progress


** Tags: newton-backport-potential trunk

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Tags added: trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648714

Title:
  hook speedup for trunk details

Status in neutron:
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2016-December/108460.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641642] Re: users that are blacklisted for PCI support should not have failed login attempts counted

2016-12-08 Thread Steve Martinelli
*** This bug is a duplicate of bug 1642348 ***
https://bugs.launchpad.net/bugs/1642348

** This bug has been marked a duplicate of bug 1642348
   Attack could lockout a service account

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1641642

Title:
  users that are blacklisted for PCI support should not have failed
  login attempts counted

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  The main idea behind the user ID blacklist for PCI was to allow
  service accounts to not have to change their password. As noted in
  [1], a by-product of any PCI implementation is a vulnerability to a
  DoS (a malicious user attempting to login X times and locking out a
  user). This case is worsened by the fact that openstack uses a few
  very common usernames: "nova", "admin", "service", etc.

  Since blacklisted users are already exempt from changing their
  password every Y days, then they should be equally exempt from the
  consequences of too many logins.

  [1] http://www.mattfischer.com/blog/?p=769
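  The requested exemption can be sketched as follows; the blacklist
  contents, threshold, and function names below are illustrative, not
  keystone's actual configuration or API:

```python
# Sketch: users on the PCI user ID blacklist are exempt from
# failed-login counting, so a DoS attack can never lock them out.
# The IDs and threshold here are hypothetical placeholder values.
PCI_BLACKLISTED_USER_IDS = {"nova-svc-id", "admin-svc-id"}
LOCKOUT_THRESHOLD = 5

failed_attempts = {}

def record_failed_auth(user_id):
    """Record a failed login; return True if the account is now locked."""
    if user_id in PCI_BLACKLISTED_USER_IDS:
        return False  # exempt: never contributes to a lockout
    failed_attempts[user_id] = failed_attempts.get(user_id, 0) + 1
    return failed_attempts[user_id] >= LOCKOUT_THRESHOLD
```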

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1641642/+subscriptions



[Yahoo-eng-team] [Bug 1647800] Re: keystone-manage bootstrap isn't completely idempotent

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/408719
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=90f2f96e69b8bfd5058628b50c9f0083e3f293e9
Submitter: Jenkins
Branch: master

commit 90f2f96e69b8bfd5058628b50c9f0083e3f293e9
Author: Lance Bragstad 
Date:   Thu Dec 8 17:01:22 2016 +

Make bootstrap idempotent when it needs to be

This commit makes `keystone-manage bootstrap` completely idempotent
when configuration values or environment variables haven't changed
between runs. If they have changed, then `bootstrap` shouldn't be
as idempotent because it's changing the state of the deployment.

This commit addresses these issues and adds tests to ensure the
proper behavior is tested.

Change-Id: I053b27e881f5bb67db1ace01e6d06aead10b1e47
Closes-Bug: 1647800


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1647800

Title:
  keystone-manage bootstrap isn't completely idempotent

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  In Progress
Status in OpenStack Identity (keystone) newton series:
  In Progress

Bug description:
  The keystone-manage bootstrap command was designed to be idempotent.
  Most everything in the bootstrap command is wrapped with a try/except
  to handle cases where specific entities already exist (i.e. there is
  already an admin project or an admin user from a previous bootstrap
  run). This is important because bootstrap handles the creation of
  administrator-like things in order to "bootstrap" a deployment. If
  bootstrap wasn't idempotent, the side-effect of running it multiple
  times would be catastrophic.

  During an upgrade scenario, using OpenStack Ansible's rolling upgrade
  support [0], from stable/newton to master, I noticed a very specific
  case where bootstrap was not idempotent. Even if the admin user passed
  to bootstrap already exists, the command will still attempt to update
  its password [1], even if the admin password hasn't changed. It does
  the same thing with the user's enabled property. This causes a
  revocation event to be stored for that specific user [2]. As a result,
  all tokens for the user specified in the bootstrap command will be
  invalid once the upgrade happens, since OpenStack Ansible relies on
  `keystone-manage bootstrap` during the upgrade.

  This only affects the bootstrap user, but it can be considered a
  service interruption since it is being done during an upgrade. We
  could look into only updating the user's password, or enabled field,
  if and only if they have changed. In that case, a revocation event
  *should* be persisted since the bootstrap command is changing
  something about the account. In the case where there is no change in
  password or enabled status, tokens should still be able to be
  validated across releases.

  I have documented the upgrade procedure and process in a separate
  repository [3]

  [0] https://review.openstack.org/#/c/384269/
  [1] https://github.com/openstack/keystone/blob/1c60b1539cf63bba79711e237df496dfa094b2c5/keystone/cmd/cli.py#L226-L232
  [2] http://cdn.pasteraw.com/9gz9964mwufyw3f98rv1mv1hqxezpis
  [3] https://github.com/lbragstad/keystone-performance-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1647800/+subscriptions



[Yahoo-eng-team] [Bug 1581635] Re: combobox no value default or none

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319514
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=16b58d2fe6d1e57fa803c51a7551b90022eed597
Submitter: Jenkins
Branch: master

commit 16b58d2fe6d1e57fa803c51a7551b90022eed597
Author: Adriano Fialho 
Date:   Fri May 20 23:39:00 2016 -0300

Modified the condition on variable project_choices

The error was in the condition "if > 1", which allowed the default
value only when the list had two or more values.

Change-Id: If2ce5a5a3cb5f6411df0c023a818f20fa2130768
Closes-Bug: #1581635


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1581635

Title:
  combobox no value default or none

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  how to reproduce the error:

  > go url horizon ---> http://localhost/dashboard/identity/users/
  > view -> any user
  > Verify Project ID  = None 
  > Edit any Project ID = None 
  > Go combobox "Primary Project"
  > Click combobox

  In combobox not have an option "None" or "default" Value, causing
  confusion for the user.
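  The fix that landed corrects an off-by-one in the choice-building
  condition. Roughly, as a standalone sketch (function and label names
  are illustrative, not Horizon's actual code):

```python
def build_project_choices(projects):
    """Build combobox choices, prepending a default entry.

    The bug was the condition `if len(choices) > 1`, which only added
    the default when two or more projects existed; `>= 1` also covers
    the single-project case that confused users.
    """
    choices = [(p["id"], p["name"]) for p in projects]
    if len(choices) >= 1:
        choices.insert(0, ("", "Select a project"))
    return choices
```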

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1581635/+subscriptions



[Yahoo-eng-team] [Bug 1648663] [NEW] Nova instance or other resource tags are allowed as empty string

2016-12-08 Thread Ghanshyam Mann
Public bug reported:

Nova supports tagging of instances and other resources such as devices.
But those tags are allowed to be empty strings, which makes little
sense and serves no use case.

[root@faydevnt ~]# curl -g -i -X PUT \
  http://9.60.18.229:8774/v2.1/78649a9e795f4c42b1975a1f1d923c64/servers/57c948ae-753a-4549-b62e-b763e775e50f/tags \
  -H "Accept: application/json" -H "User-Agent: python-novaclient" \
  -H "OpenStack-API-Version: compute 2.37" -H "X-OpenStack-Nova-API-Version: 2.37" \
  -H "X-Auth-Token: 4451f4afd5e54914984b621a2ffe2e68" \
  -H "Content-Type: application/json" -d '{"tags": ["", ""]}'
HTTP/1.1 200 OK
Content-Length: 14
Content-Type: application/json
Openstack-Api-Version: compute 2.37
X-Openstack-Nova-Api-Version: 2.37
Vary: OpenStack-API-Version
Vary: X-OpenStack-Nova-API-Version
X-Compute-Request-Id: req-ae45d4fc-ba91-4863-ab35-590b287de631
Date: Thu, 08 Dec 2016 11:12:51 GMT

{"tags": [""]}
[root@faydevnt ~]# nova server-tag-list 57c948ae-753a-4549-b62e-b763e775e50f
+-----+
| Tag |
+-----+
|     |
+-----+

Nova should return 400 if any tag is requested as an empty string. That
can be done easily via JSON schema with minLength of 1.

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648663

Title:
  Nova instance or other resource tags are allowed as empty string

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova supports tagging of instances and other resources such as
  devices. But those tags are allowed to be empty strings, which makes
  little sense and serves no use case.

  [root@faydevnt ~]# curl -g -i -X PUT \
    http://9.60.18.229:8774/v2.1/78649a9e795f4c42b1975a1f1d923c64/servers/57c948ae-753a-4549-b62e-b763e775e50f/tags \
    -H "Accept: application/json" -H "User-Agent: python-novaclient" \
    -H "OpenStack-API-Version: compute 2.37" -H "X-OpenStack-Nova-API-Version: 2.37" \
    -H "X-Auth-Token: 4451f4afd5e54914984b621a2ffe2e68" \
    -H "Content-Type: application/json" -d '{"tags": ["", ""]}'
  HTTP/1.1 200 OK
  Content-Length: 14
  Content-Type: application/json
  Openstack-Api-Version: compute 2.37
  X-Openstack-Nova-Api-Version: 2.37
  Vary: OpenStack-API-Version
  Vary: X-OpenStack-Nova-API-Version
  X-Compute-Request-Id: req-ae45d4fc-ba91-4863-ab35-590b287de631
  Date: Thu, 08 Dec 2016 11:12:51 GMT

  {"tags": [""]}
  [root@faydevnt ~]# nova server-tag-list 57c948ae-753a-4549-b62e-b763e775e50f
  +-----+
  | Tag |
  +-----+
  |     |
  +-----+

  Nova should return 400 if any tag is requested as an empty string.
  That can be done easily via JSON schema with minLength of 1.
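  The suggested validation can be sketched like this. The schema
  fragment expresses the JSON-schema minLength constraint the report
  proposes; the checker is a stdlib stand-in so the behavior is
  demonstrable without the jsonschema library, and the names are
  illustrative rather than nova's actual validation code:

```python
# Sketch of rejecting empty tags via a minLength constraint.
TAGS_SCHEMA = {
    "type": "object",
    "properties": {
        "tags": {"type": "array",
                 "items": {"type": "string", "minLength": 1}},
    },
    "required": ["tags"],
}

def validate_tags(body, schema=TAGS_SCHEMA):
    """Return True if body passes the schema; False would mean HTTP 400."""
    tags = body.get("tags")
    if not isinstance(tags, list):
        return False
    min_len = schema["properties"]["tags"]["items"]["minLength"]
    return all(isinstance(t, str) and len(t) >= min_len for t in tags)
```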

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648663/+subscriptions



[Yahoo-eng-team] [Bug 1648656] [NEW] Angular template cache preloading makes developers cry

2016-12-08 Thread Richard Jones
Public bug reported:

It is difficult to convince angular to reload a changed HTML file with
preloaded template caching turned on. It should be turned off when DEBUG
is on.

** Affects: horizon
 Importance: Medium
 Assignee: Richard Jones (r1chardj0n3s)
 Status: In Progress

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648656

Title:
  Angular template cache preloading makes developers cry

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  It is difficult to convince angular to reload a changed HTML file with
  preloaded template caching turned on. It should be turned off when
  DEBUG is on.
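  The proposed behavior amounts to one line in settings.py; the
  HORIZON_CONFIG key name below is hypothetical, shown only to
  illustrate tying preloading to DEBUG:

```python
DEBUG = True  # development

HORIZON_CONFIG = {
    # Preloading the Angular template cache speeds up production, but it
    # makes edited HTML files invisible during development, so turn it
    # off whenever DEBUG is on. The key name here is illustrative.
    "template_cache_preload": not DEBUG,
}
```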

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648656/+subscriptions



[Yahoo-eng-team] [Bug 1643944] Re: l3_agent.ini is missing [agent] extensions option

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/400852
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=fc251bf72cb962972d6237d5c2b7e50e3dac377a
Submitter: Jenkins
Branch: master

commit fc251bf72cb962972d6237d5c2b7e50e3dac377a
Author: Ihar Hrachyshka 
Date:   Sat Nov 12 07:58:06 2016 +

Expose [agent] extensions option into l3_agent.ini

This option was added in Newton as part of blueprint l3-agent-extensions
but was not exposed into the sample config file.

Change-Id: Ifdb60032223d37858f41e6c3efc18f0de72912db
Closes-Bug: #1643944


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643944

Title:
  l3_agent.ini is missing [agent] extensions option

Status in neutron:
  Fix Released

Bug description:
  The option was added in Newton as part of
  https://blueprints.launchpad.net/neutron/+spec/l3-agent-extensions but
  we have not exposed it in the sample conf file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1643944/+subscriptions



[Yahoo-eng-team] [Bug 1644097] Re: qos tempest tests should not try to use unsupported rule types

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406610
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=9f19b3e7e16d946b0770710a428606462cf217f9
Submitter: Jenkins
Branch: master

commit 9f19b3e7e16d946b0770710a428606462cf217f9
Author: Sławek Kapłoński 
Date:   Sun Dec 4 15:00:54 2016 +

Tempest tests use only supported QoS rule types

If the rule type required for a QoS tempest test is not supported by the
current configuration of Neutron, then such a test will be skipped.
For example, if neutron-server is running with the ML2 plugin with only
the openvswitch mechanism driver loaded, then tests related to the
MINIMUM_BANDWIDTH rule type will be skipped because the openvswitch
mechanism driver doesn't support this kind of rule.

Change-Id: I88e59cdbd79afb5337052ba3e5aecb96c7c8ea1c
Closes-Bug: 1644097


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644097

Title:
  qos tempest tests should not try to use unsupported rule types

Status in neutron:
  Fix Released

Bug description:
  qos tempest tests should not try to use unsupported rule types on the 
deployment.
  currently they seem to assume all rule types are available, regardless of 
whatever
  list_qos_rule_types returns.
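  The fix amounts to consulting the API before exercising a rule type.
  A sketch: the client method name mirrors neutron's
  list_qos_rule_types call, while the skip helper and its wiring are
  illustrative, not the actual tempest scaffolding:

```python
import unittest

def supported_rule_types(client):
    # list_qos_rule_types() returns e.g.
    # {"rule_types": [{"type": "bandwidth_limit"}, ...]}
    return {r["type"] for r in client.list_qos_rule_types()["rule_types"]}

def require_rule_type(client, rule_type):
    """Skip the calling test when the deployment lacks the rule type."""
    if rule_type not in supported_rule_types(client):
        raise unittest.SkipTest(
            "%s rule type is not supported by this deployment" % rule_type)
```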

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644097/+subscriptions



[Yahoo-eng-team] [Bug 1613900] Re: Unable to use 'Any' availability zone when spawning instance

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/362230
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=be6bcef45289ce1a934db478e5c9c7705890af18
Submitter: Jenkins
Branch: master

commit be6bcef45289ce1a934db478e5c9c7705890af18
Author: Matt Borland 
Date:   Mon Aug 29 09:54:56 2016 -0600

Let Nova pick the availability zone if more than 1

In the Angular Launch Instance, if there is more than one availability zone,
default to the option for the Nova scheduler to pick. This is a regression
from the legacy Launch Instance feature.

If you want to simulate testing the logic with multiple availability zones,
go to: .../launch-instance/launch-instance-model.service.js line 785 and add:

model.availabilityZones.push({label: 'another one', value: 'another one'});

Change-Id: Ib81447382bc9d43e33ce97f78c085d2a94ff2018
Closes-Bug: 1613900


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1613900

Title:
  Unable to use 'Any' availability zone when spawning instance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While using Mitaka, we found that with the JS backend it is not
  possible, by default, to choose the 'Any' availability zone. The
  issue is not fixed in the master branch.

  For the Python implementation the logic is:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L390

  The JS implementation misses this logic when the number of AZs is > 1:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L321

  Also, the JS implementation looks ugly if you have a lot of subnets
  per network...
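  The Python-side logic the report points to boils down to prepending a
  scheduler-chosen option whenever several zones exist. A standalone
  sketch (function and label names are illustrative, not Horizon's
  code):

```python
def availability_zone_choices(zones):
    """Mirror the legacy workflow: with several AZs, offer 'Any' first
    so the Nova scheduler picks; with exactly one AZ, just use it."""
    choices = [(z, z) for z in zones]
    if len(choices) > 1:
        choices.insert(0, ("", "Any Availability Zone"))
    return choices
```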

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1613900/+subscriptions



[Yahoo-eng-team] [Bug 1648430] Re: lbaasv2 loadbalancer stuck in pending_update

2016-12-08 Thread Brian Haley
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648430

Title:
  lbaasv2 loadbalancer stuck in pending_update

Status in octavia:
  New

Bug description:
  I have a loadbalancer configured which is stuck in PENDING_UPDATE. I
  would like to delete it, but the request returns a message that it
  cannot be deleted because it is in PENDING_UPDATE.

  (neutron) lbaas-loadbalancer-list
  +--------------------------------------+---------+-------------+---------------------+----------+
  | id                                   | name    | vip_address | provisioning_status | provider |
  +--------------------------------------+---------+-------------+---------------------+----------+
  | a5416558-23b5-4c15-943a-e0cc0ed1743b | yw-lb02 | 172.16.10.6 | ACTIVE              | haproxy  |
  | c3b48167-9924-493e-a6c5-c4df5a8fd643 | yw-lb01 | 172.16.10.4 | PENDING_UPDATE      | haproxy  |
  +--------------------------------------+---------+-------------+---------------------+----------+
  (neutron) lbaas-loadbalancer-delete c3b48167-9924-493e-a6c5-c4df5a8fd643
  listener 5b9dd62f-e815-4d55-8554-c2f1a617c477 is using this loadbalancer
  Neutron server returns request_ids: ['req-4a48664c-5358-4c51-b861-3d8e9072c919']
  (neutron) lbaas-listener-list
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  | id                                   | default_pool_id                      | name     | protocol | protocol_port | admin_state_up |
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  | f7a49f34-d375-430f-92b7-0377cd906178 | 8632a63c-9ffa-431c-820a-9705a99bee58 | web-list | HTTP     | 80            | True           |
  | 5b9dd62f-e815-4d55-8554-c2f1a617c477 | 8691087b-e694-4daa-8475-da0a55fadb40 | HTTP     | HTTP     | 80            | True           |
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  (neutron) lbaas-listener-delete 5b9dd62f-e815-4d55-8554-c2f1a617c477
  Invalid state PENDING_UPDATE of loadbalancer resource c3b48167-9924-493e-a6c5-c4df5a8fd643
  Neutron server returns request_ids: ['req-83ca7c98-4745-40b2-8810-d22d1ab6ba61']

  
  I have tried restarting all services, but it remains stuck in
  PENDING_UPDATE and I'm not able to remove the loadbalancer or any
  options configured on it.

  Is it possible to fix this?

  Thanks advance for helping

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1648430/+subscriptions



[Yahoo-eng-team] [Bug 1648645] [NEW] Cannot specify version of salt-minion

2016-12-08 Thread hemebond
Public bug reported:

When installing salt-minion via cloud-init it always installs the latest
version, providing no way to specify a specific version of the package.

The file https://git.launchpad.net/cloud-init/tree/cloudinit/config/cc_salt_minion.py
installs the package, and this can't even be overridden using the
arbitrary package install (packages property).

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1648645

Title:
  Cannot specify version of salt-minion

Status in cloud-init:
  New

Bug description:
  When installing salt-minion via cloud-init it always installs the
  latest version, providing no way to specify a specific version of the
  package.

  The file https://git.launchpad.net/cloud-init/tree/cloudinit/config/cc_salt_minion.py
  installs the package, and this can't even be overridden using the
  arbitrary package install (packages property).
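  One possible shape for a fix: let the module accept an optional
  version key and build a pinned package spec for the installer. The
  'version' key is hypothetical (cc_salt_minion has no such option
  today), and the (name, version) tuple form is an assumption about
  what an installer could consume, not cloud-init's documented API:

```python
def salt_minion_package_spec(cfg):
    """Turn a hypothetical salt_minion 'version' key into a package
    spec: a bare name, or a (name, version) tuple for a pinned
    install."""
    s_cfg = cfg.get("salt_minion", {})
    version = s_cfg.get("version")  # hypothetical key
    return [("salt-minion", version)] if version else ["salt-minion"]
```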

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1648645/+subscriptions



[Yahoo-eng-team] [Bug 1648574] Re: ImageNotFound should not trace exception in delete_image_on_error decorator

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/408771
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=2bb70e7b15e6cfef4652e2e49c4e02d151d2dbdf
Submitter: Jenkins
Branch: master

commit 2bb70e7b15e6cfef4652e2e49c4e02d151d2dbdf
Author: Matt Riedemann 
Date:   Thu Dec 8 13:44:54 2016 -0500

Don't trace on ImageNotFound in delete_image_on_error

The point of the delete_image_on_error decorator is to
cleanup an image used during snapshot operations, so it
makes little sense to log an exception trace if the image
delete fails because the image no longer exists, which it
might not since _snapshot_instance method will proactively
delete non-active images in certain situations.

So let's just handle the ImageNotFound and ignore it.

Change-Id: I14e061a28678ad28e38bd185e3d0a35cae41a9cf
Closes-Bug: #1648574


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648574

Title:
  ImageNotFound should not trace exception in delete_image_on_error
  decorator

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  As seen here:

  http://logs.openstack.org/69/405969/4/check/gate-tempest-dsvm-neutron-src-neutron-lib-ubuntu-xenial/04c26f3/logs/screen-n-cpu.txt?level=TRACE#_2016-12-07_06_42_10_616

  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [req-fdc419c6-edf0-4b3c-89ac-1e376a5b64e5 tempest-ImagesTestJSON-2099199377 tempest-ImagesTestJSON-2099199377] [instance: a2e32d10-4374-45c5-a732-af3459f2950d] Error while trying to clean up image dd2d2646-16a5-4135-8ff7-a3b255e01cd9
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d] Traceback (most recent call last):
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]   File "/opt/stack/new/nova/nova/compute/manager.py", line 238, in decorated_function
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]     self.image_api.delete(context, image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]   File "/opt/stack/new/nova/nova/image/api.py", line 141, in delete
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]     return session.delete(context, image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]   File "/opt/stack/new/nova/nova/image/glance.py", line 765, in delete
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d]     raise exception.ImageNotFound(image_id=image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: a2e32d10-4374-45c5-a732-af3459f2950d] ImageNotFound: Image dd2d2646-16a5-4135-8ff7-a3b255e01cd9 could not be found.

  The snapshot_instance method in the nova compute manager is decorated
  with the delete_image_on_error method which is meant to delete an
  image snapshot in glance if something fails during the snapshot/image
  upload process. The thing is it's a cleanup decorator, and if glance
  raises ImageNotFound, then we don't care, we shouldn't emit a
  stacktrace in that case.
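  The committed fix's shape, reduced to a standalone sketch. The
  exception class and attribute names are simplified stand-ins for
  nova's, but the control flow is the point: swallow ImageNotFound
  during cleanup (the image is already gone, which is the desired end
  state) while re-raising the original snapshot failure:

```python
import functools

class ImageNotFound(Exception):
    pass

def delete_image_on_error(func):
    @functools.wraps(func)
    def wrapped(self, context, image_id, instance, *args, **kwargs):
        try:
            return func(self, context, image_id, instance, *args, **kwargs)
        except Exception:
            try:
                self.image_api.delete(context, image_id)
            except ImageNotFound:
                pass  # already deleted: nothing to clean up, no traceback
            raise  # re-raise the original snapshot failure
    return wrapped
```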

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648574/+subscriptions



[Yahoo-eng-team] [Bug 1648643] [NEW] nova-api-metadata ignores firewall driver

2016-12-08 Thread Sam Morrison
Public bug reported:

In my nova.conf I have

firewall_driver = nova.virt.firewall.NoopFirewallDriver

When I start nova-api-metadata it installs some iptables rules (and
blows away what is already there)

I want to make it not manage any iptables rules by using the noop
driver, however it has no effect on nova-api-metadata.

I'm using stable/mitaka although a look at the code in master would
indicate this affects master too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648643

Title:
  nova-api-metadata ignores firewall driver

Status in OpenStack Compute (nova):
  New

Bug description:
  In my nova.conf I have

  firewall_driver = nova.virt.firewall.NoopFirewallDriver

  When I start nova-api-metadata it installs some iptables rules (and
  blows away what is already there)

  I want to make it not manage any iptables rules by using the noop
  driver, however it has no effect on nova-api-metadata.

  I'm using stable/mitaka although a look at the code in master would
  indicate this affects master too.
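  For reference, honoring the option simply means importing whatever
  dotted path firewall_driver names rather than hardcoding a driver. A
  minimal sketch using stdlib importlib (nova itself goes through
  oslo.utils importutils for this):

```python
import importlib

def load_firewall_driver(dotted_path):
    """Instantiate the class named by the firewall_driver option, e.g.
    'nova.virt.firewall.NoopFirewallDriver'."""
    module_name, _, class_name = dotted_path.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls()
```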

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648643/+subscriptions



[Yahoo-eng-team] [Bug 1648629] Re: ScaleIO Volumes Created before Newton, do not connect or disconnect after Newton upgrade

2016-12-08 Thread Scott DAngelo
Added Brick and deleted Nova.

** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648629

Title:
  ScaleIO Volumes Created before Newton, do not connect or disconnect
  after Newton upgrade

Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  Description
  ===
  ScaleIO volumes that were created pre stable/newton (stable/mitaka in
  my case) do not connect or disconnect to/from the hypervisor when
  nova-compute attempts the connection after the upgrade to
  stable/newton.

  This appears to be because the os_brick 1.2 to 1.6 change includes a
  new attribute, 'scaleIO_volume_id'. This field is not created if the
  volumes were created at stable/mitaka or before. In this case, the
  field is not found in the nova.block_device_mappings 'connection_info'
  column. The field is created for volumes created after the
  stable/newton upgrade.

  
  Steps to reproduce
  ==
  1. Install ScaleIO
  2. Install openstack via openstack-ansible stable/mitaka
  3. Create and connect ScaleIO volumes to instances
  4. Upgrade to stable/newton
  5. Shut down an instance
  6. Attempt to start the instance

  Expected result
  ===
  The instance should start

  Actual result
  =
  The instance would not start.
  http://pastebin.com/Pvd6HrcL

  After manually adding the 'scaleIO_volume_id' parameter to
  nova.block_device_mapping.connection_info, the connect / disconnect
  works as expected.
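  The manual workaround above can be expressed as a small helper. The
  nesting of connection_info under a 'data' key follows the usual
  os-brick layout, but treat that, and the helper itself, as an
  assumption to verify against the actual rows before running anything
  against a live database:

```python
def backfill_scaleio_volume_id(connection_info, volume_id):
    """Add the 'scaleIO_volume_id' key that pre-Newton rows lack,
    leaving rows that already have it untouched."""
    data = connection_info.setdefault("data", {})
    data.setdefault("scaleIO_volume_id", volume_id)
    return connection_info
```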

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
stable/newton deployed via openstack-ansible 14.0.3

  2. Which hypervisor did you use?
 libvirt+kvm

  3. Which storage type did you use?
 scaleIO 2.0.1

  4. Which networking type did you use?
 neutron with linux bridge

  Logs & Configs
  ==
  http://pastebin.com/Pvd6HrcL

  os_brick version with OSA stable/mitaka = 1.2
  os_brick version with OSA stable/newton = 1.6

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648629/+subscriptions



[Yahoo-eng-team] [Bug 1648629] [NEW] ScaleIO Volumes Created before Newton, do not connect or disconnect after Newton upgrade

2016-12-08 Thread Wade Holler
Public bug reported:

Description
===
ScaleIO volumes that were created pre stable/newton (stable/mitaka in
my case) do not connect or disconnect to/from the hypervisor when
nova-compute attempts the connection after the upgrade to
stable/newton.

This appears to be because the os_brick 1.2 to 1.6 change includes a
new attribute, 'scaleIO_volume_id'. This field is not created if the
volumes were created at stable/mitaka or before. In this case, the
field is not found in the nova.block_device_mappings 'connection_info'
column. The field is created for volumes created after the
stable/newton upgrade.


Steps to reproduce
==
1. Install ScaleIO
2. Install openstack via openstack-ansible stable/mitaka
3. Create and connect ScaleIO volumes to instances
4. Upgrade to stable/newton
5. Shut down an instance
6. Attempt to start the instance

Expected result
===
The instance should start

Actual result
=
The instance would not start.
http://pastebin.com/Pvd6HrcL

After manually adding the 'scaleIO_volume_id' parameter to
nova.block_device_mapping.connection_info, the connect / disconnect
works as expected.

Environment
===
1. Exact version of OpenStack you are running. See the following
  stable/newton deployed via openstack-ansible 14.0.3

2. Which hypervisor did you use?
   libvirt+kvm

3. Which storage type did you use?
   scaleIO 2.0.1

4. Which networking type did you use?
   neutron with linux bridge

Logs & Configs
==
http://pastebin.com/Pvd6HrcL

os_brick version with OSA stable/mitaka = 1.2
os_brick version with OSA stable/newton = 1.6

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648629

Title:
  ScaleIO Volumes Created before Newton, do not connect or disconnect
  after Newton upgrade

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  ScaleIO volumes that were created before stable/newton (stable/mitaka in my
  case) do not connect to or disconnect from the hypervisor when nova-compute
  attempts the connection after the upgrade to stable/newton.

  This appears to be because the os_brick 1.2 to 1.6 change introduced a new
  attribute, 'scaleIO_volume_id'. This field is not created for volumes that
  were created at stable/mitaka or earlier, so it is missing from the
  'connection_info' column of nova.block_device_mapping for those volumes.
  The field is present for volumes created after the stable/newton upgrade.

  
  Steps to reproduce
  ==
  1. Install ScaleIO
  2. Install openstack via openstack-ansible stable/mitaka
  3. Create and connect ScaleIO volumes to instances
  4. Upgrade to stable/newton
  5. Shut down an instance
  6. Attempt to start the instance

  Expected result
  ===
  The instance should start

  Actual result
  =
  The instance would not start.
  http://pastebin.com/Pvd6HrcL

  After manually adding the 'scaleIO_volume_id' parameter to
  nova.block_device_mapping.connection_info, connect/disconnect works as
  expected.

  Environment
  ===
  1. Exact version of OpenStack you are running:
stable/newton deployed via openstack-ansible 14.0.3

  2. Which hypervisor did you use?
 libvirt+kvm

  3. Which storage type did you use?
 scaleIO 2.0.1

  4. Which networking type did you use?
 neutron with linux bridge

  Logs & Configs
  ==
  http://pastebin.com/Pvd6HrcL

  os_brick version with OSA stable/mitaka = 1.2
  os_brick version with OSA stable/newton = 1.6

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648625] [NEW] DataTable column level policy

2016-12-08 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/164010
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit d04fcc41aac131263f4d12ebde436a848e853115
Author: Romain Hardouin 
Date:   Thu Mar 12 23:44:52 2015 +0100

DataTable column level policy

Add the ability to filter a DataTable's columns depending on policy.
Allow admin-only columns to be displayed easily.

Useful when some data are not available to regular users
in API responses. When the policy checks fail for a user,
the DataTable instance ignores the columns as if they were not defined.

Change-Id: I4bf41fb63550725fe2bbb03c51909d22ee202725
Implements: blueprint datatable-column-level-policy-checks
DocImpact Document how to filter DataTable columns thanks to policy_rules 
argument
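The mechanism the commit describes (columns carrying policy rules, and failing columns being dropped as if never defined) can be sketched in isolation. This is not Horizon's internal implementation: columns are plain dicts here, and `policy_check` stands in for openstack_dashboard's policy check.

```python
# Sketch of the column-filtering mechanism described in the commit; the
# helper name and dict-based column records are illustrative stand-ins.
def filter_columns_by_policy(columns, request, policy_check):
    """Drop columns whose policy_rules fail for the current request."""
    kept = []
    for col in columns:
        rules = col.get("policy_rules")
        if rules and not policy_check(rules, request):
            # Behave as if the column were never defined.
            continue
        kept.append(col)
    return kept
```

In a real table the same idea is expressed declaratively, e.g. passing a `policy_rules` tuple to a `tables.Column` definition.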

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: doc horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648625

Title:
  DataTable column level policy

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/164010
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit d04fcc41aac131263f4d12ebde436a848e853115
  Author: Romain Hardouin 
  Date:   Thu Mar 12 23:44:52 2015 +0100

  DataTable column level policy
  
  Add the ability to filter a DataTable's columns depending on policy.
  Allow admin-only columns to be displayed easily.
  
  Useful when some data are not available to regular users
  in API responses. When the policy checks fail for a user,
  the DataTable instance ignores the columns as if they were not defined.
  
  Change-Id: I4bf41fb63550725fe2bbb03c51909d22ee202725
  Implements: blueprint datatable-column-level-policy-checks
  DocImpact Document how to filter DataTable columns thanks to policy_rules 
argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593571] Re: [Mitaka] 'AttributeError: name' when using multiple domain with LDAP driver

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358048
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=cb664ecf18bc68f03bc0a2ccff66d285872cde59
Submitter: Jenkins
Branch:master

commit cb664ecf18bc68f03bc0a2ccff66d285872cde59
Author: Edmund Rhudy 
Date:   Fri Aug 19 14:50:12 2016 -0400

Fixes traceback if group name attribute is missing

This works around an issue where certain backends (e.g. LDAP)
did not provide a group name, only ID, which would cause most
identity management tasks in Horizon to fail. If no name is
provided, the ID is duplicated as the group name.

Change-Id: Iea87abf38d26cb2baff43521c7dd2ae0a00e9997
Closes-Bug: #1593571
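The fallback the commit describes (duplicating the ID as the group name when the backend omits it) is small enough to sketch directly; `ensure_group_name` is a hypothetical helper name, not Horizon's actual code.

```python
def ensure_group_name(group):
    """Duplicate the ID as the group name when the backend omitted it.

    Mirrors the workaround in the commit message: some backends (e.g.
    LDAP) return groups with an ID but no name, which made Horizon's
    (group.id, group.name) listings raise AttributeError.
    """
    if not hasattr(group, "name"):
        group.name = group.id
    return group
```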


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1593571

Title:
  [Mitaka] 'AttributeError: name' when using multiple domain with LDAP
  driver

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  I get 'AttributeError: name' when using multiple domain with LDAP driver.
  Specifically, when I click 'Create Project', 'Manage Members', 'Modify 
Groups', 'Edit Project' and 'Modify Quotas' button on the 'Projects' page of 
'Identify' menu, horizon makes those error messages below.

  It doesn't seem to be related to keystone since there is no error
  message from keystone node and I can do those operations using CLI
  without any problem.

  ==> /var/log/apache2/error.log <==


[97/7522]
  [Fri Jun 17 14:44:57.440997 2016] [wsgi:error] [pid 93971:tid 
140574614624000] Problem instantiating action class.
  [Fri Jun 17 14:44:57.441049 2016] [wsgi:error] [pid 93971:tid 
140574614624000] Traceback (most recent call last):
  [Fri Jun 17 14:44:57.441060 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../horizon/workflows/base.py",
 line 370, in action
  [Fri Jun 17 14:44:57.441083 2016] [wsgi:error] [pid 93971:tid 
140574614624000] context)
  [Fri Jun 17 14:44:57.441123 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/identity/projects/workflows.py",
 line 323, in __init__
  [Fri Jun 17 14:44:57.441141 2016] [wsgi:error] [pid 93971:tid 
140574614624000] groups_list = [(group.id, group.name) for group in 
all_groups]
  [Fri Jun 17 14:44:57.441157 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 491, in 
__getattr__
  [Fri Jun 17 14:44:57.441182 2016] [wsgi:error] [pid 93971:tid 
140574614624000] raise AttributeError(k)
  [Fri Jun 17 14:44:57.441199 2016] [wsgi:error] [pid 93971:tid 
140574614624000] AttributeError: name
  [Fri Jun 17 14:44:57.442367 2016] [wsgi:error] [pid 93971:tid 
140574614624000] Internal Server Error: 
/horizon/identity/04e7fe0cfb19418a9ec2eacfe1d334d5/update/
  [Fri Jun 17 14:44:57.442389 2016] [wsgi:error] [pid 93971:tid 
140574614624000] Traceback (most recent call last):
  [Fri Jun 17 14:44:57.442420 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 164, in 
get_response
  [Fri Jun 17 14:44:57.442439 2016] [wsgi:error] [pid 93971:tid 
140574614624000] response = response.render()
  [Fri Jun 17 14:44:57.442455 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/django/template/response.py", line 158, in 
render
  [Fri Jun 17 14:44:57.442480 2016] [wsgi:error] [pid 93971:tid 
140574614624000] self.content = self.rendered_content
  [Fri Jun 17 14:44:57.442497 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/django/template/response.py", line 135, in 
rendered_content
  [Fri Jun 17 14:44:57.442514 2016] [wsgi:error] [pid 93971:tid 
140574614624000] content = template.render(context, self._request)
  [Fri Jun 17 14:44:57.442530 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/django/template/backends/django.py", line 74, 
in render
  [Fri Jun 17 14:44:57.442555 2016] [wsgi:error] [pid 93971:tid 
140574614624000] return self.template.render(context)
  [Fri Jun 17 14:44:57.442579 2016] [wsgi:error] [pid 93971:tid 
140574614624000]   File 
"/usr/lib/python2.7/dist-packages/django/template/base.py", line 210, in render
  [Fri Jun 17 14:44:57.442596 2016] [wsgi:error] [pid 93971:tid 
140574614624000] return self._render(context)
  [Fri Jun 17 

[Yahoo-eng-team] [Bug 1384462] Re: angular humanizeNumbers utility is not internationalized

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/17
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=89d0304d19e5789bc5b42b30e39e1e71a7f2bdfe
Submitter: Jenkins
Branch:master

commit 89d0304d19e5789bc5b42b30e39e1e71a7f2bdfe
Author: Albert Tu 
Date:   Thu Oct 20 15:31:23 2016 +0800

Add i18n support to Quota.humanizeNumbers

Show localized number format via ECMAScript i18n API.

Closes-Bug: #1384462

Change-Id: Ia8ca3c93c157bfb791f711529d60ce610b3686fe


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384462

Title:
  angular humanizeNumbers utility is not internationalized

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While browsing code I ran across 
  horizon/static/horizon/js/angular/services/horizon.utils.js#L25

   humanizeNumbers: function (number) {
     return number.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
   },

  which is not a proper way to group numbers in all locales.

  http://en.wikipedia.org/wiki/Decimal_mark#Other_numeral_systems
  ("Examples of use") shows various internationalized examples.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625230] Re: Role Assignment Incorrectly Reports Inheritance when --name is Used

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/380973
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=b9890f81205f461da82c19fd39cf3f50a3aa945d
Submitter: Jenkins
Branch:master

commit b9890f81205f461da82c19fd39cf3f50a3aa945d
Author: Kanika Singh 
Date:   Mon Oct 3 13:49:37 2016 +0530

Get assignments with names honors inheritance flag

When listing role assignments with the ?include_names option,
the inheritance flag was not honored.

This change fixes that behavior and enables the test that was
submitted in the parent patch.

Co-Authored-By: Samuel de Medeiros Queiroz 

Closes-Bug: #1625230

Change-Id: Ic0d32f3e47ee82015d86cec8b7502a440b66c021


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1625230

Title:
  Role Assignment Incorrectly Reports Inheritance when --name is Used

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When retrieving role assignments via the openstack client, passing the
  --name flag will cause Keystone to not return the value of inherited,
  so openstack client always reports false.

  My test environment is an OSA AIO using OSA 13.1.3, which is using Keystone 
commit 87d67946e75db2ec2a6af72447211ca1ee291940.
   
  Steps to reproduce:
  * assign a role to a user on a domain and pass --inherited, so the role will 
be inherited to the domain's projects
  * run "openstack role assignment list --user  --name"

  Example output with debug request response without --name:

  :~# openstack --debug role assignment list --user 
14bc7c6869374b33bd5125f6c61d44b9
  ...
  REQ: curl -g -i -X GET 
http://172.29.236.100:35357/v3/role_assignments?user.id=14bc7c6869374b33bd5125f6c61d44b9
 -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}65c4fb6823ecccbf9441b041c2764e9eb2424fca"
  "GET /v3/role_assignments?user.id=14bc7c6869374b33bd5125f6c61d44b9 HTTP/1.1" 
200 586
  RESP: [200] Content-Length: 586 Vary: X-Auth-Token Server: Apache Date: Mon, 
19 Sep 2016 15:07:23 GMT Content-Type: application/json x-openstack-request-id: 
req-0ace9479-bb24-423c-8269-83da8a57ff6f
  RESP BODY: {"role_assignments": [{"scope": {"domain": {"id": 
"c000bbc3b52f41fe99e9f560666b36f1"}, "OS-INHERIT:inherited_to": "projects"}, 
"role": {"id": "9fe2ff9ee4384b1894a90878d3e92bab"}, "user": {"id": 
"14bc7c6869374b33bd5125f6c61d44b9"}, "links": {"assignment": 
"http://172.29.236.100:35357/v3/OS-INHERIT/domains/c000bbc3b52f41fe99e9f560666b36f1/users/14bc7c6869374b33bd5125f6c61d44b9/roles/9fe2ff9ee4384b1894a90878d3e92bab/inherited_to_projects"}}],
 "links": {"self": 
"http://172.29.236.100:35357/v3/role_assignments?user.id=14bc7c6869374b33bd5125f6c61d44b9",
 "previous": null, "next": null}}

  
+----------------------------------+----------------------------------+-------+---------+----------------------------------+-----------+
  | Role                             | User                             | Group | Project | Domain                           | Inherited |
  +----------------------------------+----------------------------------+-------+---------+----------------------------------+-----------+
  | 9fe2ff9ee4384b1894a90878d3e92bab | 14bc7c6869374b33bd5125f6c61d44b9 |       |         | c000bbc3b52f41fe99e9f560666b36f1 | True      |
  +----------------------------------+----------------------------------+-------+---------+----------------------------------+-----------+

  Example output with debug request response with --name:

  :~# openstack --debug role assignment list --user 
14bc7c6869374b33bd5125f6c61d44b9 --name
  ...
  REQ: curl -g -i -X GET 
http://172.29.236.100:35357/v3/role_assignments?user.id=14bc7c6869374b33bd5125f6c61d44b9&include_names=True
 -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}1ee295769134d215d26474bfc59704338ddbfb52"
  "GET 
/v3/role_assignments?user.id=14bc7c6869374b33bd5125f6c61d44b9&include_names=True
 HTTP/1.1" 200 681
  RESP: [200] Content-Length: 681 Vary: X-Auth-Token Server: Apache Date: Mon, 
19 Sep 2016 15:08:52 GMT Content-Type: application/json x-openstack-request-id: 
req-70f3eb92-0cdd-4a02-866c-e8d1b2cbd113
  RESP BODY: {"role_assignments": [{"scope": {"domain": {"id": 
"c000bbc3b52f41fe99e9f560666b36f1", "name": "mytestdomain"}}, "role": {"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, "user": {"domain": 
{"id": "c000bbc3b52f41fe99e9f560666b36f1", "name": "mytestdomain"}, "id": 
"14bc7c6869374b33bd5125f6c61d44b9", "name": "testdomainuser"}, "links": 
{"assignment": 
"http://172.29.236.100:35357/v3/domains/c000bbc3b52f41fe99e9f560666b36f1/users/14bc7c6869374b33bd5125f6c61d44b9/roles/9fe2ff9ee4384b1894a90878d3e92bab"}}],
 "links": {"self": 

[Yahoo-eng-team] [Bug 1645571] Re: keystone-manage mapping_populate fails when wrong domain name is given and gives unhandled exception

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/404197
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=6b16e2cf88f775919eaf8f115986bcdc2aba1e68
Submitter: Jenkins
Branch:master

commit 6b16e2cf88f775919eaf8f115986bcdc2aba1e68
Author: Boris Bobrov 
Date:   Tue Nov 29 15:33:17 2016 +0300

Print domain name in mapping_populate error message

domain_id was not defined at the moment when it was referenced and
it caused UnboundLocalError.

Use domain name instead and test it.

Change-Id: Ib351df0942025451994873e272861afec1b60dea
Closes-Bug: 1645571


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645571

Title:
  keystone-manage mapping_populate fails when wrong domain name is given
  and gives unhandled exception

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Running keystone-manage mapping_populate --domain-name  where no 
such domain name exists, fails silently with no error message displayed on 
screen.
  The logs from keystone.log is displayed below.

  2016-11-18 07:22:26.470 1384 CRITICAL keystone 
[req-a2de669b-a645-42b0-8af6-a36e1cf53c54 - - - - -] UnboundLocalError: local 
variable 'domain_id' referenced before assignment
  2016-11-18 07:22:26.470 1384 ERROR keystone Traceback (most recent call last):
  2016-11-18 07:22:26.470 1384 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2016-11-18 07:22:26.470 1384 ERROR keystone sys.exit(main())
  2016-11-18 07:22:26.470 1384 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
  2016-11-18 07:22:26.470 1384 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-18 07:22:26.470 1384 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1256, in main
  2016-11-18 07:22:26.470 1384 ERROR keystone CONF.command.cmd_class.main()
  2016-11-18 07:22:26.470 1384 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1202, in main
  2016-11-18 07:22:26.470 1384 ERROR keystone 'domain': domain_id})
  2016-11-18 07:22:26.470 1384 ERROR keystone UnboundLocalError: local variable 
'domain_id' referenced before assignment
  2016-11-18 07:22:26.470 1384 ERROR keystone

  My guess is that instead of printing domain_id, we should print
  domain_name to avoid the error.
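The failure pattern (and the suggested fix of reporting the name the caller supplied) can be reproduced in miniature. Both functions below are illustrative, not keystone's actual cli.py code.

```python
def find_domain_buggy(domains, domain_name):
    """domain_id is only bound when the lookup succeeds, so referencing
    it in the error path raises UnboundLocalError instead of a useful
    message -- the same pattern as the traceback above."""
    for name, id_ in domains.items():
        if name == domain_name:
            domain_id = id_
            return domain_id
    return "Invalid domain: %s" % domain_id  # UnboundLocalError here

def find_domain_fixed(domains, domain_name):
    """Report the value the caller actually supplied (the domain name)."""
    domain_id = domains.get(domain_name)
    if domain_id is None:
        return "Invalid domain name: %s" % domain_name
    return domain_id
```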

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648574] [NEW] ImageNotFound should not trace exception in delete_image_on_error decorator

2016-12-08 Thread Matt Riedemann
Public bug reported:

As seen here:

http://logs.openstack.org/69/405969/4/check/gate-tempest-dsvm-neutron-
src-neutron-lib-ubuntu-
xenial/04c26f3/logs/screen-n-cpu.txt?level=TRACE#_2016-12-07_06_42_10_616

2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager 
[req-fdc419c6-edf0-4b3c-89ac-1e376a5b64e5 tempest-ImagesTestJSON-2099199377 
tempest-ImagesTestJSON-2099199377] [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] Error while trying to clean up image 
dd2d2646-16a5-4135-8ff7-a3b255e01cd9
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] Traceback (most recent call last):
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 238, in decorated_function
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] self.image_api.delete(context, 
image_id)
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/image/api.py", line 141, in delete
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] return session.delete(context, 
image_id)
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/image/glance.py", line 765, in delete
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] raise 
exception.ImageNotFound(image_id=image_id)
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] ImageNotFound: Image 
dd2d2646-16a5-4135-8ff7-a3b255e01cd9 could not be found.
2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] 

The snapshot_instance method in the nova compute manager is decorated
with delete_image_on_error, which is meant to delete an image snapshot
in glance if something fails during the snapshot/image upload process.
But it is a cleanup decorator: if glance raises ImageNotFound, the image
is already gone and there is nothing to clean up, so we should not emit
a stacktrace in that case.
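The proposed behavior can be sketched as a simplified version of the decorator. The signature and the ImageNotFound stand-in class are assumptions modeled on the traceback above, not nova's exact code.

```python
import functools

class ImageNotFound(Exception):
    """Stand-in for nova.exception.ImageNotFound."""

def delete_image_on_error(function):
    """Cleanup decorator sketch: swallow ImageNotFound quietly."""
    @functools.wraps(function)
    def decorated_function(self, context, image_id, instance,
                           *args, **kwargs):
        try:
            return function(self, context, image_id, instance,
                            *args, **kwargs)
        except Exception:
            try:
                self.image_api.delete(context, image_id)
            except ImageNotFound:
                # The image is already gone; no stacktrace warranted.
                pass
            except Exception:
                # Unexpected cleanup failures are still worth logging
                # (LOG.exception in the real code).
                pass
            raise  # re-raise the original snapshot failure
    return decorated_function
```

The original error still propagates; only the noisy cleanup trace for a missing image is suppressed.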

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/newton
 Importance: Undecided
 Status: New


** Tags: compute logs snapshot

** Summary changed:

- ImageNotFound exception traced in delete_image_on_error decorator
+ ImageNotFound should not trace exception in delete_image_on_error decorator

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648574

Title:
  ImageNotFound should not trace exception in delete_image_on_error
  decorator

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  As seen here:

  http://logs.openstack.org/69/405969/4/check/gate-tempest-dsvm-neutron-
  src-neutron-lib-ubuntu-
  xenial/04c26f3/logs/screen-n-cpu.txt?level=TRACE#_2016-12-07_06_42_10_616

  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager 
[req-fdc419c6-edf0-4b3c-89ac-1e376a5b64e5 tempest-ImagesTestJSON-2099199377 
tempest-ImagesTestJSON-2099199377] [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] Error while trying to clean up image 
dd2d2646-16a5-4135-8ff7-a3b255e01cd9
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] Traceback (most recent call last):
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 238, in decorated_function
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] self.image_api.delete(context, 
image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/image/api.py", line 141, in delete
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] return session.delete(context, 
image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d]   File 
"/opt/stack/new/nova/nova/image/glance.py", line 765, in delete
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] raise 
exception.ImageNotFound(image_id=image_id)
  2016-12-07 06:42:10.616 32194 ERROR nova.compute.manager [instance: 
a2e32d10-4374-45c5-a732-af3459f2950d] ImageNotFound: Image 
dd2d2646-16a5-4135-8ff7-a3b255e01cd9 could 

[Yahoo-eng-team] [Bug 1648542] Re: keystone does not retry on deadlock Transactions [500 Error]

2016-12-08 Thread Boris Bobrov
This is not a duplicate; the retrying code should be added to the
identity driver.
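A sketch of the kind of retry wrapper being asked for. In keystone the natural tool would be oslo_db's wrap_db_retry with retry_on_deadlock enabled; the DBDeadlock class and retry_on_deadlock decorator below are simplified stand-ins, not keystone or oslo.db code.

```python
import functools
import time

class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def retry_on_deadlock(max_retries=5, delay=0.0):
    """Retry a DB operation when the server asks to restart the txn."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # give up: surface the deadlock
                    # MySQL's advice is literally "try restarting
                    # transaction"; optionally back off first.
                    time.sleep(delay)
        return wrapper
    return decorator
```

Retrying is safe here only because the whole delete runs in one transaction that rolled back on deadlock, so re-executing it starts from a clean state.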

** Changed in: keystone
   Status: Invalid => Confirmed

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1648542

Title:
  keystone does not retry on deadlock Transactions [500 Error]

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  Description of problem:
  DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying 
to get lock; try restarting transaction')

The above error is a retryable error, but there is no evidence that
keystone actually retried before throwing a 500.

  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
[req-c7dba8f5-9269-4c6f-b3e2-9a6c43b6cf0b a8377c6c4d05430d92ac661a2319cc95 
c24adc59b2d0490c930d0270a1faecb5 - default default] (pymysql.err.InternalError) 
(1213, u'Deadlock found when trying to get lock; try restarting transaction')
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_log/versionutils.py", line 174, in 
wrapped
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 164, 
in delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.identity_api.delete_user(user_id, initiator)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 416, in 
wrapper
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 426, in 
wrapper
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 990, in 
delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
driver.delete_user(entity_id)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 277, 
in delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
session.delete(ref)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
875, in _transaction_scope
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi yield resource
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
522, in _session
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self._end_session_transaction(self.session)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
543, in _end_session_transaction
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi session.commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 813, in 
commit
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.transaction.commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 396, in 
commit
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi t[1].commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1574, 

[Yahoo-eng-team] [Bug 1648564] [NEW] Inconsistent labels for Stacks filters

2016-12-08 Thread Revon Mathews
Public bug reported:

Steps:

Go to Project>Orchestration>Stacks
Click on filter dropdown
Not all labels have the '=' following the filter name as is seen on the other 
pages.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2016-12-08 at 11.40.48 AM.png"
   
https://bugs.launchpad.net/bugs/1648564/+attachment/4789155/+files/Screen%20Shot%202016-12-08%20at%2011.40.48%20AM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648564

Title:
  Inconsistent labels for Stacks filters

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:

  Go to Project>Orchestration>Stacks
  Click on filter dropdown
  Not all labels have the '=' following the filter name as is seen on the other 
pages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648542] Re: keystone does not retry on deadlock Transactions [500 Error]

2016-12-08 Thread Adam Young
Closing as a duplicate.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1648542

Title:
  keystone does not retry on deadlock Transactions [500 Error]

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Description of problem:
  DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying 
to get lock; try restarting transaction')

  The above error is a retryable error, but there is no evidence that
  keystone actually retried before throwing a 500.

  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
[req-c7dba8f5-9269-4c6f-b3e2-9a6c43b6cf0b a8377c6c4d05430d92ac661a2319cc95 
c24adc59b2d0490c930d0270a1faecb5 - default default] (pymysql.err.InternalError) 
(1213, u'Deadlock found when trying to get lock; try restarting transaction')
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_log/versionutils.py", line 174, in 
wrapped
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 164, 
in delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.identity_api.delete_user(user_id, initiator)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 416, in 
wrapper
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 426, in 
wrapper
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 990, in 
delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
driver.delete_user(entity_id)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 277, 
in delete_user
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
session.delete(ref)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
875, in _transaction_scope
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi yield resource
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
522, in _session
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self._end_session_transaction(self.session)
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
543, in _end_session_transaction
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi session.commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 813, in 
commit
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.transaction.commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 396, in 
commit
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi t[1].commit()
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1574, in 
commit
  2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self._do_commit()
  2016-11-12 08:55:10.995 13952 

[Yahoo-eng-team] [Bug 1648565] [NEW] Inconsistent labels for Stacks filters

2016-12-08 Thread Revon Mathews
Public bug reported:

Steps:

Go to Project>Orchestration>Stacks
Click on filter dropdown
Not all labels have the '=' following the filter name as is seen on the other 
pages.

** Affects: horizon
 Importance: Undecided
 Assignee: Revon Mathews (revon-mathews)
 Status: New

** Attachment added: "Screen Shot 2016-12-08 at 11.40.48 AM.png"
   
https://bugs.launchpad.net/bugs/1648565/+attachment/4789156/+files/Screen%20Shot%202016-12-08%20at%2011.40.48%20AM.png

** Changed in: horizon
 Assignee: (unassigned) => Revon Mathews (revon-mathews)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648565

Title:
  Inconsistent labels for Stacks filters

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:

  Go to Project>Orchestration>Stacks
  Click on filter dropdown
  Not all labels have the '=' following the filter name as is seen on the other 
pages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648565/+subscriptions



[Yahoo-eng-team] [Bug 1648560] Re: List and Dict types not accepted in Oslo field String()

2016-12-08 Thread Nakul Dahiwade
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648560

Title:
  List and Dict types not accepted in Oslo field String()

Status in oslo.versionedobjects:
  New

Bug description:
  This bug has been reported because the String() field in
  oslo_versionedobjects/fields.py does not accept list and dict as valid
  value types for a DictOfStringsField type field.

  If an object like ] is passed, I am getting the following two errors:

  http://paste.openstack.org/show/591827/
  http://paste.openstack.org/show/591831/

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.versionedobjects/+bug/1648560/+subscriptions



[Yahoo-eng-team] [Bug 1602057] Re: (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2016-12-08 Thread Corey Bryant
** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: cloud-archive/mitaka
   Importance: Undecided => High

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: cloud-archive/mitaka
   Importance: High => Medium

** Changed in: cloud-archive/newton
   Importance: Undecided => Medium

** Changed in: cloud-archive/newton
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602057

Title:
  (libvirt) KeyError updating resources for some node, guest.uuid is not
  in BDM list

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress
Status in OpenStack Compute (nova) newton series:
  Fix Committed

Bug description:
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
[req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources 
for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in 
update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, 
in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in 
get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in 
_get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: 
'0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions



[Yahoo-eng-team] [Bug 1648542] [NEW] keystone does not retry on deadlock Transactions [500 Error]

2016-12-08 Thread Adam Young
Public bug reported:

Description of problem:
DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying to 
get lock; try restarting transaction')

The above error is a retryable error, but there is no evidence that keystone
actually retried before throwing a 500.
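
For reference, oslo.db ships a wrap_db_retry decorator intended for exactly this
case. The retry pattern it provides can be sketched in isolation as below;
DBDeadlock, delete_user, and retry_on_deadlock here are illustrative stand-ins,
not keystone's actual code:

```python
import functools
import time

class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def retry_on_deadlock(max_retries=3, delay=0.01):
    """Minimal sketch of the retry pattern oslo.db's wrap_db_retry offers."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # out of retries: surface the error (the 500)
                    time.sleep(delay * (2 ** attempt))  # simple backoff
        return wrapper
    return decorator

attempts = []

@retry_on_deadlock(max_retries=3)
def delete_user(user_id):
    """Fails twice with a deadlock, then succeeds, as a contended txn might."""
    attempts.append(user_id)
    if len(attempts) < 3:
        raise DBDeadlock("Deadlock found when trying to get lock")
    return "deleted"

result = delete_user("a8377c6c")
```

With the decorator in place the first two deadlocks are swallowed and retried,
and only a persistent deadlock would propagate as an error.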

2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
[req-c7dba8f5-9269-4c6f-b3e2-9a6c43b6cf0b a8377c6c4d05430d92ac661a2319cc95 
c24adc59b2d0490c930d0270a1faecb5 - default default] (pymysql.err.InternalError) 
(1213, u'Deadlock found when trying to get lock; try restarting transaction')
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi result = 
method(req, **params)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_log/versionutils.py", line 174, in 
wrapped
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 164, 
in delete_user
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.identity_api.delete_user(user_id, initiator)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 416, in 
wrapper
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 426, in 
wrapper
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 990, in 
delete_user
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
driver.delete_user(entity_id)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 277, 
in delete_user
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi session.delete(ref)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
875, in _transaction_scope
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi yield resource
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self.gen.next()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
522, in _session
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self._end_session_transaction(self.session)
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 
543, in _end_session_transaction
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi session.commit()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 813, in 
commit
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.transaction.commit()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 396, in 
commit
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi t[1].commit()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1574, in 
commit
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi self._do_commit()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1605, in 
_do_commit
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 
self.connection._commit_impl()
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 690, in 
_commit_impl
2016-11-12 08:55:10.995 13952 ERROR keystone.common.wsgi 

[Yahoo-eng-team] [Bug 1633734] Re: ValueError: Field `instance_uuid' cannot be None

2016-12-08 Thread Matt Riedemann
I'd be OK with adding a newton-only online database migration, which
gets run when you upgrade to newton anyway, to sniff out build requests
without an instance_uuid set and delete them.

** Changed in: nova
   Status: New => Confirmed

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633734

Title:
  ValueError: Field `instance_uuid' cannot be None

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  I "accidentally" upgraded from Mitaka to Newton a few days ago and I'm
  still cleaning up "the mess" that introduced (I'm too used to Debian
  GNU/Linux packages taking care of all that for me).

  Anyway, I'm now getting

  ValueError: Field `instance_uuid' cannot be None

  in the nova-api log.

  I've been looking at
  http://docs.openstack.org/releasenotes/nova/newton.html#upgrade-notes
  but I'm not sure what to do.

  I've run

  nova-manage db online_data_migrations
  => ERROR nova.db.sqlalchemy.api [req-c08dbccb-d841-4e38-a895-26768f24222b 
- - - - -] Data migrations for PciDevice are not safe, likely because not all 
services that access the DB directly are updated to the latest version

  nova-manage db sync
  => ERROR: could not access cell mapping database - has api db been 
created?

  nova-manage api_db sync
  => Seems to run ok

  nova-manage cell_v2 discover_hosts
  => error: 'module' object has no attribute 'session'

  nova-manage cell_v2 map_cell0
  => Seemed like it ran ok

  nova-manage cell_v2 simple_cell_setup --transport-url rabbit://blabla/
  => Seemed like it ran ok

  nova-manage db null_instance_uuid_scan
  => There were no records found where instance_uuid was NULL.

  Other than that, I'm not sure what the problem is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633734/+subscriptions



[Yahoo-eng-team] [Bug 1645616] Re: dnsmasq doesn't like providing DHCP for subnets with prefixes shorter than 64

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/398016
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=028a349bc531cbcd91fd15f072be4b84952376b8
Submitter: Jenkins
Branch: master

commit 028a349bc531cbcd91fd15f072be4b84952376b8
Author: Kevin Benton 
Date:   Tue Nov 15 17:34:16 2016 -0800

Skip larger than /64 subnets in DHCP agent

Dnsmasq can't handle these in IPv6 so we need to skip them to avoid
a whole bunch of log noise caused by continual retrying of issues.

Closes-Bug: #1645616
Change-Id: I36d167506cc45731e3f500a0c59b70b1bc27590f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645616

Title:
  dnsmasq doesn't like providing DHCP for subnets with prefixes shorter
  than 64

Status in neutron:
  Fix Released

Bug description:
  Trace when you enable DHCP on an IPv6 network with a prefix less than
  64.

  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr:
  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent dnsmasq: bad 
command line options: prefix length must be at least 64

  At a minimum we need to skip these on the DHCP agent to prevent a
  bunch of log noise and retries. We probably should consider rejecting
  enable_dhcp=True in the API when the prefix is like this for IPv6 if
  it's a fundamental limitation of DHCPv6.
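
  The agent-side skip can be sketched with the standard library's ipaddress
  module; dhcp_servable below is a hypothetical helper illustrating the check,
  not the actual neutron code:

```python
import ipaddress

def dhcp_servable(cidr):
    """Return False for IPv6 subnets dnsmasq cannot serve (prefix length
    below 64), True otherwise. Illustrative sketch of the agent-side skip."""
    net = ipaddress.ip_network(cidr, strict=False)
    return not (net.version == 6 and net.prefixlen < 64)

checks = {
    "2001:db8::/48": dhcp_servable("2001:db8::/48"),  # too wide for dnsmasq
    "2001:db8::/64": dhcp_servable("2001:db8::/64"),  # minimum dnsmasq accepts
    "10.0.0.0/8": dhcp_servable("10.0.0.0/8"),        # IPv4 is unaffected
}
```

  Subnets failing this check would simply be left out of the dnsmasq
  configuration instead of triggering the "prefix length must be at least 64"
  error on every resync.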

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645616/+subscriptions



[Yahoo-eng-team] [Bug 1647715] Re: get_subnetpool() raise error when called with unset filters

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407058
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3ee76d4980b81354ff19da24de1a0c6e4b67903c
Submitter: Jenkins
Branch: master

commit 3ee76d4980b81354ff19da24de1a0c6e4b67903c
Author: Artur Korzeniewski 
Date:   Tue Dec 6 14:28:38 2016 +0100

Convert filters to empty dict if None in DB's get_subnetpools().

When calling the ML2 plugin's get_subnetpools(), a 'filters' parameter may be
passed. By default it is set to None. The 'filters' dict is passed to
SubnetPool.get_objects() as kwargs (using **). If someone calls the plugin's
get_subnetpools() without passing filters, the method raises an exception.

This patch adds support for 'filters' being None when calling the plugin's
get_subnetpools(), by converting it to an empty dict '{}'.

Change-Id: Ic7432cb167583e82b3c0e237ef25a5c9f21986e6
Closes-Bug: 1647715
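
The shape of the fix can be sketched as follows; get_objects here is a
hypothetical stand-in for SubnetPool.get_objects(), only to show why unpacking
None with ** fails and how the normalization avoids it:

```python
def get_subnetpools(filters=None, fields=None):
    """Sketch of the fix: normalize filters before unpacking with **."""
    filters = filters or {}  # the one-line fix; **None raises TypeError
    return get_objects(**filters)

def get_objects(**kwargs):
    """Stand-in for SubnetPool.get_objects(); just echoes the filters."""
    return kwargs

ok = get_subnetpools()                           # previously raised TypeError
named = get_subnetpools(filters={"name": "pool1"})
```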


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647715

Title:
  get_subnetpool() raise error when called with unset filters

Status in neutron:
  Fix Released

Bug description:
  get_subnetpools() from db_base_plugin_v2 raises an error when called with an
unset filters argument.
  This is because filters are by default set to None and when calling OVO, the 
filters are passed as kwargs using '**':
  
https://github.com/openstack/neutron/blob/10.0.0.0b1/neutron/db/db_base_plugin_v2.py#L1087

  It is only an issue when calling the plugin directly.
  Unit tests were not covering it.

  API is tested and okay.

  This is also affecting Newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1647715/+subscriptions



[Yahoo-eng-team] [Bug 1595623] Re: Improperly rendered Modal Form after one invalid submission

2016-12-08 Thread Rob Cresswell
Fixed by https://review.openstack.org/#/c/337703/

** Changed in: horizon
   Status: In Progress => Won't Fix

** Changed in: horizon
   Status: Won't Fix => Invalid

** Changed in: horizon
Milestone: ocata-2 => None

** Changed in: horizon
 Assignee: Tatiana Ovchinnikova (tmazur) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1595623

Title:
  Improperly rendered Modal Form after one invalid submission

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Steps to reproduce:

  1. Go to Project -> Volumes
  2. Pick an existing volume, click "Extend Volume" and get a modal form
  3. Enter or select an invalid value for "New Size" field and click "Extend 
Volume"
  4. Get another modal form with a warning message "New size must be greater 
than current size."
  5. Enter an arbitrary value for "New Size" field, correct or not, and click 
"Extend Volume"

  Expected:
  You should get another modal form with a warning message or successfully 
extended volume depending on new size value.

  Current behaviour:
  The form closes with a message: "Danger: There was an error submitting the 
form. Please try again."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1595623/+subscriptions



[Yahoo-eng-team] [Bug 1646181] Re: NFS: Fail to boot VM out of large snapshots (30GB+)

2016-12-08 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646181

Title:
  NFS: Fail to boot VM out of large snapshots (30GB+)

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed

Bug description:
  Description
  ===
  Using NFS shared storage, when I try to boot a VM out of a smaller snapshot
(1GB) it works fine.
  However, when I try to do the same out of a larger snapshot (30GB+) it fails,
regardless of the OpenStack release (Newton or Mitaka).

  Steps to reproduce
  ==
  A chronological list of steps which will bring up the
  issue you noticed:
  * I have OpenStack RDO Newton (or Mitaka) installed and functional
  * I boot a VM out of a QCOW2 image of about 1GB
  * Then I log into that VM and create a large file (33GB) to inflate the VM image
  * Then I shut off the VM and take a snapshot of it that I call
"largeVMsnapshotImage"

  Alternatively to the steps above,
  * I have a snapshot from a large VM (30GB+) that I upload in glance and call 
"largeVMsnapshotImage"

  Then I do:
  * Then I try to boot a new VM out of that snapshot using the same network
  * Although the image seems to be copied to the compute node, the VM creation
fails on the "qemu-img info" command

  If I run the same command manually, it works:
  /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=2 -- env 
LC_ALL=C LANG=C qemu-img info 
/var/lib/nova/instances/_base/2b54e1ca13134ceb7fc489d58d7aa6fd321b1885
  image: /var/lib/nova/instances/_base/2b54e1ca13134ceb7fc489d58d7aa6fd321b1885
  file format: raw
  virtual size: 80G (85899345920 bytes)
  disk size: 37G

  However, in the logs it fails and the VM creation is interrupted; see the log
from nova-compute.log on the compute node:
  ...
  2016-11-29 17:52:23.581 10284 ERROR nova.compute.manager [instance: 
d6889ea2-f277-40e5-afdc-b3b0698537ed] BuildAbortException: Build of instance 
d6889ea2-f277-40e5-afdc-b3b0698537ed aborted: Disk info file is invalid: 
qemu-img failed to execute on 
/var/lib/nova/instances/_base/2b54e1ca13134ceb7fc489d58d7aa6fd321b1885 : 
Unexpected error while running command.
  2016-11-29 17:52:23.581 10284 ERROR nova.compute.manager [instance: 
d6889ea2-f277-40e5-afdc-b3b0698537ed] Command: /usr/bin/python2 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=2 -- env LC_ALL=C LANG=C 
qemu-img info 
/var/lib/nova/instances/_base/2b54e1ca13134ceb7fc489d58d7aa6fd321b1885
  2016-11-29 17:52:23.581 10284 ERROR nova.compute.manager [instance: 
d6889ea2-f277-40e5-afdc-b3b0698537ed] Exit code: -9
  2016-11-29 17:52:23.581 10284 ERROR nova.compute.manager [instance: 
d6889ea2-f277-40e5-afdc-b3b0698537ed] Stdout: u''
  2016-11-29 17:52:23.581 10284 ERROR nova.compute.manager [instance: 
d6889ea2-f277-40e5-afdc-b3b0698537ed] Stderr: u''
  ...

  
  Expected result
  ===
  The VM should have been created/booted out of the large snapshot image.

  Actual result
  =
  The command fails with exit code -9 when Nova runs it.
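
  By subprocess convention, a negative return code means the child was killed
  by that signal number; -9 is SIGKILL, consistent with qemu-img being killed
  under the prlimit --as address-space cap while scanning a large image. A
  minimal sketch of interpreting the code (describe_exit is illustrative, not
  Nova's code):

```python
import signal

def describe_exit(returncode):
    """Interpret a subprocess-style return code: a negative value means the
    child was killed by that signal number."""
    if returncode < 0:
        return "killed by signal %s" % signal.Signals(-returncode).name
    return "exited with status %d" % returncode

msg = describe_exit(-9)  # the code seen in the nova-compute log
```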

  Environment
  ===
  1. Running RDO Newton on Centos 7.2 (or Oracle Linux 7.2) and reproduced on 
RDO Mitaka as well

 If this is from a distro please provide
 $ [root@controller ~]# rpm -qa|grep nova
  openstack-nova-console-14.0.0-1.el7.noarch
  puppet-nova-9.4.0-1.el7.noarch
  python-nova-14.0.0-1.el7.noarch
  openstack-nova-novncproxy-14.0.0-1.el7.noarch
  openstack-nova-conductor-14.0.0-1.el7.noarch
  openstack-nova-api-14.0.0-1.el7.noarch
  openstack-nova-common-14.0.0-1.el7.noarch
  openstack-nova-scheduler-14.0.0-1.el7.noarch
  openstack-nova-serialproxy-14.0.0-1.el7.noarch
  python2-novaclient-6.0.0-1.el7.noarch
  openstack-nova-cert-14.0.0-1.el7.noarch

  
  2. Which hypervisor did you use?
 KVM
 
 details:
 [root@compute4 nova]# rpm -qa|grep -Ei "kvm|qemu|libvirt"
  libvirt-gobject-0.1.9-1.el7.x86_64
  libvirt-gconfig-0.1.9-1.el7.x86_64
  libvirt-daemon-1.2.17-13.0.1.el7.x86_64
  qemu-kvm-common-1.5.3-105.el7.x86_64
  qemu-img-1.5.3-105.el7.x86_64
  ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
  libvirt-client-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-driver-nodedev-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-driver-lxc-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-kvm-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-driver-secret-1.2.17-13.0.1.el7.x86_64
  libvirt-python-1.2.17-2.el7.x86_64
  libvirt-daemon-config-network-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-config-nwfilter-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-driver-storage-1.2.17-13.0.1.el7.x86_64
  qemu-kvm-1.5.3-105.el7.x86_64
  libvirt-1.2.17-13.0.1.el7.x86_64
  libvirt-daemon-driver-interface-1.2.17-13.0.1.el7.x86_64
  

[Yahoo-eng-team] [Bug 1648529] [NEW] additional project_id in sql query

2016-12-08 Thread Yuli
Public bug reported:

Hello

When fetching the record of the default security group, the following issue was
discovered: the SQL query contains the project_id condition twice (same value).

In MySQL log I see the following query:

2016-12-08T11:31:38.910379Z  1301 Query SELECT 
default_security_group.project_id AS default_security_group_project_id, 
default_security_group.security_group_id AS 
default_security_group_security_group_id, standardattributes_1.id AS 
standardattributes_1_id, standardattributes_1.resource_type AS 
standardattributes_1_resource_type, standardattributes_1.description AS 
standardattributes_1_description, standardattributes_1.revision_number AS 
standardattributes_1_revision_number, standardattributes_1.created_at AS 
standardattributes_1_created_at, standardattributes_1.updated_at AS 
standardattributes_1_updated_at, tags_1.standard_attr_id AS 
tags_1_standard_attr_id, tags_1.tag AS tags_1_tag, securitygroups_1.project_id 
AS securitygroups_1_project_id, securitygroups_1.id AS securitygroups_1_id, 
securitygroups_1.name AS securitygroups_1_name, 
securitygroups_1.standard_attr_id AS securitygroups_1_standard_attr_id, 
standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, 
standardattributes_2.description AS standardattributes_2_description, 
standardattributes_2.revision_number AS standardattributes_2_revision_number, 
standardattributes_2.created_at AS standardattributes_2_created_at, 
standardattributes_2.updated_at AS standardattributes_2_updated_at, 
tags_2.standard_attr_id AS tags_2_standard_attr_id, tags_2.tag AS tags_2_tag, 
securitygrouprules_1.project_id AS securitygrouprules_1_project_id, 
securitygrouprules_1.id AS securitygrouprules_1_id, 
securitygrouprules_1.security_group_id AS 
securitygrouprules_1_security_group_id, securitygrouprules_1.remote_group_id AS 
securitygrouprules_1_remote_group_id, securitygrouprules_1.direction AS 
securitygrouprules_1_direction, securitygrouprules_1.ethertype AS 
securitygrouprules_1_ethertype, securitygrouprules_1.protocol AS 
securitygrouprules_1_protocol, securitygrouprules_1.port_range_min AS 
securitygrouprules_1_port_range_min, securitygrouprules_1.port_range_max AS securitygrouprules_1_port_range_max, 
securitygrouprules_1.remote_ip_prefix AS securitygrouprules_1_remote_ip_prefix, 
securitygrouprules_1.standard_attr_id AS securitygrouprules_1_standard_attr_id 
FROM default_security_group LEFT OUTER JOIN securitygroups AS securitygroups_1 
ON securitygroups_1.id = default_security_group.security_group_id LEFT OUTER 
JOIN standardattributes AS standardattributes_1 ON standardattributes_1.id = 
securitygroups_1.standard_attr_id LEFT OUTER JOIN tags AS tags_1 ON 
standardattributes_1.id = tags_1.standard_attr_id LEFT OUTER JOIN 
securitygrouprules AS securitygrouprules_1 ON securitygroups_1.id = 
securitygrouprules_1.security_group_id LEFT OUTER JOIN standardattributes AS 
standardattributes_2 ON standardattributes_2.id = 
securitygrouprules_1.standard_attr_id LEFT OUTER JOIN tags AS tags_2 ON 
standardattributes_2.id = tags_2.standard_attr_id 

WHERE default_security_group.project_id =
'cf94b55927aa4ed78baa6b88cda55a47' AND default_security_group.project_id
= 'cf94b55927aa4ed78baa6b88cda55a47'

How to reproduce:
1. Run in MySQL shell:
SET GLOBAL general_log='ON';
2. Create a network ( I am running rally tests for it)

This code is called from the following file:
neutron/db/securitygroups_db.py
_ensure_default_security_group()

query = self._model_query(context, sg_models.DefaultSecurityGroup)
default_group = query.filter_by(tenant_id=tenant_id).one()
return default_group['security_group_id']
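
The duplication can be reproduced in miniature with a toy query builder
(ToyQuery below is illustrative only, not SQLAlchemy): a base query already
scoped to the project, plus a filter_by(tenant_id=...) that maps to the same
underlying column, emits the predicate twice.

```python
class ToyQuery:
    """Toy stand-in for an ORM query, only to show how filtering on the
    same column at two layers duplicates the WHERE predicate."""
    def __init__(self):
        self.predicates = []

    def filter_by(self, **kwargs):
        for col, val in kwargs.items():
            self.predicates.append("%s = '%s'" % (col, val))
        return self

    def where_clause(self):
        return " AND ".join(self.predicates)

tenant = "cf94b55927aa4ed78baa6b88cda55a47"
# A _model_query-style helper already scopes the query by project_id...
query = ToyQuery().filter_by(project_id=tenant)
# ...and filter_by(tenant_id=...) resolves to the same column, adding it again.
query = query.filter_by(project_id=tenant)
clause = query.where_clause()
```

The resulting clause contains the same condition twice, matching the
"project_id = '...' AND project_id = '...'" seen in the MySQL general log.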

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648529

Title:
  additional project_id in sql query

Status in neutron:
  New

Bug description:
  Hello

  When fetching the record of the default security group, the following issue
  was discovered: the SQL query contains the same project_id predicate twice
  (with the same value).

  In MySQL log I see the following query:

  2016-12-08T11:31:38.910379Z  1301 Query SELECT 
default_security_group.project_id AS default_security_group_project_id, 
default_security_group.security_group_id AS 
default_security_group_security_group_id, standardattributes_1.id AS 
standardattributes_1_id, standardattributes_1.resource_type AS 
standardattributes_1_resource_type, standardattributes_1.description AS 
standardattributes_1_description, standardattributes_1.revision_number AS 
standardattributes_1_revision_number, standardattributes_1.created_at AS 
standardattributes_1_created_at, standardattributes_1.updated_at AS 
standardattributes_1_updated_at, tags_1.standard_attr_id AS 
tags_1_standard_attr_id, tags_1.tag AS tags_1_tag, securitygroups_1.project_id 
AS 

[Yahoo-eng-team] [Bug 1645391] Re: mapped auth method not included by default

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403816
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=74942e448a46736899da854b19bd6e997925eccb
Submitter: Jenkins
Branch: master

commit 74942e448a46736899da854b19bd6e997925eccb
Author: Rodrigo Duarte Sousa 
Date:   Mon Nov 28 15:14:41 2016 -0200

Include mapped in the default auth methods

Change-Id: Ib74ec1efc0a8ce10744f112a3870c7ce8dcf9154
Closes-Bug: 1645391


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645391

Title:
  mapped auth method not included by default

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The mapped auth method is not listed by default in keystone.conf [1].
  It should be there since it is required for proper federated
  authentication.

  [1]
  
https://github.com/openstack/keystone/blob/4fc95c8f9201eb09b8d1ff54b8e8d260264aabbc/keystone/conf/constants.py#L20
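  For operators on releases without the fix, the method can be enabled
  explicitly; an illustrative keystone.conf fragment (the other method names
  mirror keystone's documented defaults and are an assumption here):

  ```ini
  [auth]
  # "mapped" added so federated authentication works out of the box;
  # the preceding methods mirror keystone's previous defaults.
  methods = external,password,token,oauth1,mapped
  ```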

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648516] [NEW] Mitaka error install

2016-12-08 Thread Lucas Alves Martins
Public bug reported:

Obtaining file:///opt/stack/glance
Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, 
in main
status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
335, in run
wb.build(autobuilding=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, 
in prepare_files
ignore_dependencies=self.ignore_dependencies))
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, 
in _prepare_file
req_to_install.check_if_exists()
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
1036, in check_if_exists
self.req.name
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
dist = get_provider(dist)
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 432, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 968, in require
needed = self.resolve(parse_requirements(requirements))
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
ContextualVersionConflict: (oslo.concurrency 3.7.1 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('oslo.concurrency>=3.8.0'), set(['glance-store']))
+inc/python:pip_install:1  exit_trap
+./stack.sh:exit_trap:474  local r=2
++./stack.sh:exit_trap:475  jobs -p
+./stack.sh:exit_trap:475  jobs=
+./stack.sh:exit_trap:478  [[ -n '' ]]
+./stack.sh:exit_trap:484  kill_spinner
+./stack.sh:kill_spinner:370   '[' '!' -z '' ']'
+./stack.sh:exit_trap:486  [[ 2 -ne 0 ]]
+./stack.sh:exit_trap:487  echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:488  generate-subunit 1481208793 169 fail
+./stack.sh:exit_trap:489  [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:492  
/home/openstack/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2016-12-08-145603.txt for details
+./stack.sh:exit_trap:498  exit 2


Local.conf
[[local|localrc]]

ADMIN_PASSWORD=010465
DATABASE_PASSWORD=010465
RABBIT_PASSWORD=010465
SERVICE_PASSWORD=010465
MYSQL_PASSWORD=010465


#Enable heat services
enable_service h-eng h-api h-api-cfn h-api-cw

#Enable heat plugin
enable_plugin heat https://git.openstack.org/openstack/heat stable/mitaka

#Image for Heat
IMAGE_URL_SITE="http://fedora.c3sl.ufpr.br"
IMAGE_URL_PATH="/linux//releases/22/Cloud/x86_64/Images/"
IMAGE_URL_FILE="Fedora-Cloud-Base-22-20150521.x86_64.qcow2"
IMAGE_URLS+=","$IMAGE_URL_SITE$IMAGE_URL_PATH$IMAGE_URL_FILE

#Enable Ceilometer plugin
CEILOMETER_BACKEND=mongodb
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer 
stable/mitaka
enable_plugin aodh https://git.openstack.org/openstack/aodh stable/mitaka

#Enable Tacker plugin
enable_plugin tacker https://git.openstack.org/openstack/tacker stable/mitaka

help
thanks

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: mitaka-bugs

** Project changed: tacker => glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1648516

Title:
  Mitaka error install

Status in Glance:
  New

Bug description:
  Obtaining file:///opt/stack/glance
  Exception:
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, 
in main
  status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
335, in run
  wb.build(autobuilding=True)
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in 
build
  self.requirement_set.prepare_files(self.finder)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, 
in prepare_files
  ignore_dependencies=self.ignore_dependencies))
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, 
in _prepare_file
  req_to_install.check_if_exists()
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
1036, in check_if_exists
  self.req.name
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
  dist = get_provider(dist)
File 

[Yahoo-eng-team] [Bug 1648516] [NEW] Mitaka error install

2016-12-08 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Obtaining file:///opt/stack/glance
Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, 
in main
status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
335, in run
wb.build(autobuilding=True)
  File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, 
in prepare_files
ignore_dependencies=self.ignore_dependencies))
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, 
in _prepare_file
req_to_install.check_if_exists()
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
1036, in check_if_exists
self.req.name
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
dist = get_provider(dist)
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 432, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 968, in require
needed = self.resolve(parse_requirements(requirements))
  File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 859, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
ContextualVersionConflict: (oslo.concurrency 3.7.1 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('oslo.concurrency>=3.8.0'), set(['glance-store']))
+inc/python:pip_install:1  exit_trap
+./stack.sh:exit_trap:474  local r=2
++./stack.sh:exit_trap:475  jobs -p
+./stack.sh:exit_trap:475  jobs=
+./stack.sh:exit_trap:478  [[ -n '' ]]
+./stack.sh:exit_trap:484  kill_spinner
+./stack.sh:kill_spinner:370   '[' '!' -z '' ']'
+./stack.sh:exit_trap:486  [[ 2 -ne 0 ]]
+./stack.sh:exit_trap:487  echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:488  generate-subunit 1481208793 169 fail
+./stack.sh:exit_trap:489  [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:492  
/home/openstack/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2016-12-08-145603.txt for details
+./stack.sh:exit_trap:498  exit 2



Local.conf
[[local|localrc]]

ADMIN_PASSWORD=010465
DATABASE_PASSWORD=010465
RABBIT_PASSWORD=010465
SERVICE_PASSWORD=010465
MYSQL_PASSWORD=010465


#Enable heat services
enable_service h-eng h-api h-api-cfn h-api-cw

#Enable heat plugin
enable_plugin heat https://git.openstack.org/openstack/heat stable/mitaka

#Image for Heat
IMAGE_URL_SITE="http://fedora.c3sl.ufpr.br"
IMAGE_URL_PATH="/linux//releases/22/Cloud/x86_64/Images/"
IMAGE_URL_FILE="Fedora-Cloud-Base-22-20150521.x86_64.qcow2"
IMAGE_URLS+=","$IMAGE_URL_SITE$IMAGE_URL_PATH$IMAGE_URL_FILE

#Enable Ceilometer plugin
CEILOMETER_BACKEND=mongodb
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer 
stable/mitaka
enable_plugin aodh https://git.openstack.org/openstack/aodh stable/mitaka

#Enable Tacker plugin
enable_plugin tacker https://git.openstack.org/openstack/tacker stable/mitaka

help
thanks

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: mitaka-bugs
-- 
Mitaka error install
https://bugs.launchpad.net/bugs/1648516
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648501] [NEW] providing different imageRef when using block_device_mapping (image -> volume)

2016-12-08 Thread Mohamed Osman
Public bug reported:

When booting an instance with block_device_mapping_v2 whose source_type is
set to "image", while also providing an imageRef with a different uuid than
the one given in block_device_mapping_v2.uuid, the instance will boot but
will carry image metadata from the imageRef, while the boot volume will
have image metadata from the block_device_mapping_v2.uuid field.

Steps to reproduce
==

Step 1:
===
call /servers api curl REQ:

curl -g -i -X POST http://nova-
endpoint:8774/v2/5ee09552880049c6b56f96c99081ddf1/servers -H "Content-
Type: application/json; charset=UTF-8" -H "X-Auth-Token: authtoken" -H
"Content-Type: application/json" -d '{"server": {"name": "new-server-
test","imageRef": "cbb2746c-6342-4946-84ab-09ca0c8be82b","flavorRef":
"2e7f5353-cff7-4723-bbec-b35b12999d83","availability_zone":
"zone-1","block_device_mapping_v2": [{"boot_index": "0","uuid":
"1616a9c4-a6f4-4bed-99dc-a742695d","source_type":
"image","volume_size": "30","destination_type":
"volume","delete_on_termination": false}]}}'


Step 2:
===
check server information: openstack server show 
5e903f37-318e-4b03-a9ad-1cea598fa313 -c name -c image -c 
os-extended-volumes:volumes_attach

+--------------------------------------+----------------------------------------------------+
| Field                                | Value                                              |
+--------------------------------------+----------------------------------------------------+
| image                                | Image1 (cbb2746c-6342-4946-84ab-09ca0c8be82b)      |
| name                                 | new-server-test                                    |
| os-extended-volumes:volumes_attached | [{u'id': u'e03eb79c-67d8-4c4c-997e-fe3eeb400968'}] |
+--------------------------------------+----------------------------------------------------+


Step 3:
===
check volume information: openstack volume show 
e03eb79c-67d8-4c4c-997e-fe3eeb400968 -c volume_image_metadata

+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| volume_image_metadata | {u'description': u'', u'checksum': u'e7f6e7d7d38423a705394ad72fdb823c',  |
|                       | u'min_ram': u'0', u'disk_format': u'qcow2', u'image_name': u'Image2',    |
|                       | u'image_id': u'1616a9c4-a6f4-4bed-99dc-a742695d',                        |
|                       | u'container_format': u'bare', u'min_disk': u'30',                        |
|                       | u'size': u'8823242752'}                                                  |
+-----------------------+--------------------------------------------------------------------------+


Expected Result
===
The server should not be tagged with image Image1, since the actual image
source is Image2, as specified in block_device_mapping_v2.


Actual Result
==
The server is tagged with image Image1.
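A hypothetical client-side guard (field names mirror the request body in step 1; Nova itself performs no such check today, which is the point of this report) that rejects inconsistent boot requests before submission:

```python
def check_boot_request(server):
    """Reject a boot request whose imageRef disagrees with the image uuid
    of the boot-index-0 block device mapping (hypothetical guard)."""
    image_ref = server.get("imageRef")
    for bdm in server.get("block_device_mapping_v2", []):
        if (bdm.get("boot_index") == "0"
                and bdm.get("source_type") == "image"
                and image_ref and image_ref != bdm.get("uuid")):
            raise ValueError(
                "imageRef %s does not match boot volume image %s"
                % (image_ref, bdm.get("uuid")))

# The request from step 1 fails the check:
request = {"imageRef": "cbb2746c-6342-4946-84ab-09ca0c8be82b",
           "block_device_mapping_v2": [
               {"boot_index": "0", "source_type": "image",
                "uuid": "1616a9c4-a6f4-4bed-99dc-a742695d"}]}
try:
    check_boot_request(request)
except ValueError as exc:
    print("rejected:", exc)
```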

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648501

Title:
  providing different imageRef when using block_device_mapping (image ->
  volume)

Status in OpenStack Compute (nova):
  New

Bug description:
  When booting an instance with block_device_mapping_v2 whose source_type
  is set to "image", while also providing an imageRef with a different uuid
  than the one given in block_device_mapping_v2.uuid, the instance will
  boot but will carry image metadata from the imageRef, while the boot
  volume will have image metadata from the block_device_mapping_v2.uuid
  field.

  Steps to reproduce
  ==

  Step 1:
  ===
  call /servers api curl REQ:

  curl -g -i -X POST http://nova-
  endpoint:8774/v2/5ee09552880049c6b56f96c99081ddf1/servers -H "Content-
  Type: application/json; charset=UTF-8" -H "X-Auth-Token: authtoken" -H
  "Content-Type: application/json" -d '{"server": {"name": "new-server-
  test","imageRef": "cbb2746c-6342-4946-84ab-09ca0c8be82b","flavorRef":
  "2e7f5353-cff7-4723-bbec-b35b12999d83","availability_zone":
  "zone-1","block_device_mapping_v2": [{"boot_index": "0","uuid":
  "1616a9c4-a6f4-4bed-99dc-a742695d","source_type":
  "image","volume_size": "30","destination_type":
  "volume","delete_on_termination": false}]}}'

  
  

[Yahoo-eng-team] [Bug 1648206] Re: sriov agent report_state is slow

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/408281
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1a2a71baf3904209679fc5448814a0e7940fe44d
Submitter: Jenkins
Branch: master

commit 1a2a71baf3904209679fc5448814a0e7940fe44d
Author: Kevin Benton 
Date:   Wed Dec 7 11:33:46 2016 -0800

SRIOV: don't block report_state with device count

The device count process can be quite slow on a system with
lots of interfaces. Doing this during report_state can block
it long enough that the agent will be reported as dead and
bindings will fail.

This adjusts the logic to only update the configuration during
the normal device retrieval for the scan loop. This will leave
the report_state loop unblocked by the operation so the agent
doesn't get reported as dead (which blocks port binding).

Closes-Bug: #1648206
Change-Id: Iff45fb6617974b1eceeed238a8d9e958f874f12b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648206

Title:
  sriov agent report_state is slow

Status in neutron:
  Fix Released

Bug description:
  On a system with lots of VFs and PFs we get these logs:

  WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state'
 run outlasted interval by 29.67 sec
  WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state'
 run outlasted interval by 45.43 sec
  WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state'
 run outlasted interval by 47.64 sec
  WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state'
 run outlasted interval by 23.89 sec
  WARNING oslo.service.loopingcall [-] Function 
'neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent.SriovNicSwitchAgent._report_state'
 run outlasted interval by 30.20 sec

  
  Depending on the agent_down_time configuration, this can cause the Neutron 
server to think the agent has died.

  
  This appears to be caused by blocking on the eswitch manager every time to 
get a device count to include in the state report.
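The fix pattern described in the commit message can be sketched with stdlib threading (names are illustrative, not the actual agent code): the scan loop pays for the expensive device probe and caches the count, and report_state only reads the cache, so it can never be blocked long enough for the agent to look dead:

```python
import threading

class AgentState:
    """Illustrative cache: expensive work in the scan loop,
    cheap reads in the report loop."""
    def __init__(self):
        self._lock = threading.Lock()
        self._device_count = 0

    def scan_devices(self, count_devices):
        # Scan loop: runs the slow probe once per cycle, caches the result.
        count = count_devices()
        with self._lock:
            self._device_count = count

    def report_state(self):
        # Report loop: only reads the cached value; never blocks on the probe.
        with self._lock:
            return {"devices": self._device_count}

state = AgentState()
state.scan_devices(lambda: 42)   # stand-in for the slow eswitch probe
print(state.report_state())      # {'devices': 42}
```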

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515707] Re: Heat failure caused by switching project and revocation of the Horizon session token

2016-12-08 Thread Rob Cresswell
Token revocation has been removed from django_openstack_auth, so I believe
this workaround is no longer required.

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1515707

Title:
  Heat failure caused by switching project and revocation of the Horizon
  session token

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Steps to reproduce:

  0. All OpenStack services are configured using Keystone V3, and 
cfg.CONF.deferred_auth_method == 'trusts' is enabled for Heat to use trust.
  1. From Horizon, in project A, launch a Heat stack from Horizon with token 
only (password disabled per https://review.openstack.org/#/c/148997/)
  2. While the stack is being created, quickly switch Horizon to project B and 
then switch back to project A again.
  3. The stack being launched for Project B could be failing because the token 
used to launch the stack has been revoked when switching project.

  The revoking of the Horizon session token is a fix to Bug1227754,
  where the session token list grows endlessly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1515707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648447] Re: Errors in logs while loading volume snapshots

2016-12-08 Thread Rob Cresswell
Seems to be sporadic; may be caused by a bad rebase. Can't recreate on
clean master.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
Milestone: ocata-2 => None

** Changed in: horizon
 Assignee: Rob Cresswell (robcresswell) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648447

Title:
  Errors in logs while loading volume snapshots

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Getting a consistent error while loading Volumes page; looks like an
  object variable has been renamed at some point.

  
  Error while checking action permissions.
  Traceback (most recent call last):
File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
  return action._allowed(request, datum) and row_matched
File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
  self.allowed(request, datum))
File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
  if (snapshot._volume and
File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 
103, in __getattribute__
  return object.__getattribute__(self, attr)
  AttributeError: 'VolumeSnapshot' object has no attribute '_volume'
  Error while checking action permissions.
  Traceback (most recent call last):
File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
  return action._allowed(request, datum) and row_matched
File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
  self.allowed(request, datum))
File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
  if (snapshot._volume and
File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 
103, in __getattribute__
  return object.__getattribute__(self, attr)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648447] [NEW] Errors in logs while loading volume snapshots

2016-12-08 Thread Rob Cresswell
Public bug reported:

Getting a consistent error while loading Volumes page; looks like an
object variable has been renamed at some point.


Error while checking action permissions.
Traceback (most recent call last):
  File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
self.allowed(request, datum))
  File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
if (snapshot._volume and
  File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 103, 
in __getattribute__
return object.__getattribute__(self, attr)
AttributeError: 'VolumeSnapshot' object has no attribute '_volume'
Error while checking action permissions.
Traceback (most recent call last):
  File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
self.allowed(request, datum))
  File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
if (snapshot._volume and
  File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 103, 
in __getattribute__
return object.__getattribute__(self, attr)
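A defensive rewrite of the failing check (a sketch only; the attribute and class names are taken from the traceback, and the eventual fix may differ) would tolerate snapshot objects that were constructed without the private _volume attribute:

```python
class VolumeSnapshot:
    """Stand-in for the API wrapper in the traceback; note no _volume set."""
    def __init__(self, status="available"):
        self.status = status

def allowed(snapshot):
    # getattr with a default avoids the AttributeError when the
    # private attribute was never set on this object.
    volume = getattr(snapshot, "_volume", None)
    return bool(volume) and snapshot.status == "available"

print(allowed(VolumeSnapshot()))  # False, instead of raising
```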

** Affects: horizon
 Importance: Medium
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
Milestone: None => ocata-2

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648447

Title:
  Errors in logs while loading volume snapshots

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Getting a consistent error while loading Volumes page; looks like an
  object variable has been renamed at some point.

  
  Error while checking action permissions.
  Traceback (most recent call last):
File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
  return action._allowed(request, datum) and row_matched
File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
  self.allowed(request, datum))
File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
  if (snapshot._volume and
File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 
103, in __getattribute__
  return object.__getattribute__(self, attr)
  AttributeError: 'VolumeSnapshot' object has no attribute '_volume'
  Error while checking action permissions.
  Traceback (most recent call last):
File "/Users/robcresswell/horizon/horizon/tables/base.py", line 1321, in 
_filter_action
  return action._allowed(request, datum) and row_matched
File "/Users/robcresswell/horizon/horizon/tables/actions.py", line 137, in 
_allowed
  self.allowed(request, datum))
File 
"/Users/robcresswell/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py",
 line 46, in allowed
  if (snapshot._volume and
File "/Users/robcresswell/horizon/openstack_dashboard/api/base.py", line 
103, in __getattribute__
  return object.__getattribute__(self, attr)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648442] [NEW] [api-ref] The maximum number of characters for a tag not mentioned anywhere

2016-12-08 Thread Sharat Sharma
Public bug reported:

The maximum number of characters allowed in a tag is 60. The user learns of
the limit only after exceeding it and hitting an error like: "Invalid input
for operation: 'tooo lng tag 1234567890' exceeds maximum length of 60."

The limit must be documented in the api-ref here:
http://developer.openstack.org/api-ref/networking/v2/?expanded=add-a
-tag-detail,replace-all-tags-detail,remove-all-tags-detail,confirm-a
-tag-detail
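Until the api-ref documents it, callers can guard against the limit themselves; a minimal sketch (the 60-character cap is taken from the error message above, not from any published constant):

```python
MAX_TAG_LENGTH = 60  # cap reported by the API error message

def validate_tag(tag):
    """Raise before the API round-trip if the tag is too long (sketch)."""
    if len(tag) > MAX_TAG_LENGTH:
        raise ValueError(
            "'%s' exceeds maximum length of %d" % (tag, MAX_TAG_LENGTH))
    return tag

print(validate_tag("web-tier"))   # short tags pass through unchanged
# validate_tag("x" * 61)          # would raise ValueError
```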

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api-ref low-hanging-fruit

** Tags added: api-ref low-hanging-fruit

** Description changed:

  The maximum number of characters allowed in a tag are 60. The user comes
  to know the limit only after he exceeds the limit and faces error like
  this: "Invalid input for operation: 'tooo
  lng tag 1234567890' exceeds maximum length of 60."
  
- This must be documented in the api-ref here:
+ The limit must be documented in the api-ref here:
  http://developer.openstack.org/api-ref/networking/v2/?expanded=add-a
  -tag-detail,replace-all-tags-detail,remove-all-tags-detail,confirm-a
  -tag-detail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648442

Title:
  [api-ref] The maximum number of characters for a tag not mentioned
  anywhere

Status in neutron:
  New

Bug description:
  The maximum number of characters allowed in a tag is 60. The user learns
  of the limit only after exceeding it and hitting an error like: "Invalid
  input for operation: 'tooo lng tag 1234567890' exceeds maximum length of
  60."

  The limit must be documented in the api-ref here:
  http://developer.openstack.org/api-ref/networking/v2/?expanded=add-a
  -tag-detail,replace-all-tags-detail,remove-all-tags-detail,confirm-a
  -tag-detail

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648430] [NEW] lbaasv2 loadbalancer stuck in pending_update

2016-12-08 Thread Maarten
Public bug reported:

I have a load balancer that is stuck in PENDING_UPDATE. I would like to
delete it, but the delete fails with a message that it cannot be deleted
while it is in PENDING_UPDATE.

(neutron) lbaas-loadbalancer-list 
+--------------------------------------+---------+-------------+---------------------+----------+
| id                                   | name    | vip_address | provisioning_status | provider |
+--------------------------------------+---------+-------------+---------------------+----------+
| a5416558-23b5-4c15-943a-e0cc0ed1743b | yw-lb02 | 172.16.10.6 | ACTIVE              | haproxy  |
| c3b48167-9924-493e-a6c5-c4df5a8fd643 | yw-lb01 | 172.16.10.4 | PENDING_UPDATE      | haproxy  |
+--------------------------------------+---------+-------------+---------------------+----------+
(neutron) lbaas-loadbalancer-delete c3b48167-9924-493e-a6c5-c4df5a8fd643
listener 5b9dd62f-e815-4d55-8554-c2f1a617c477 is using this loadbalancer
Neutron server returns request_ids: ['req-4a48664c-5358-4c51-b861-3d8e9072c919']
(neutron) lbaas-listener-list
+--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
| id                                   | default_pool_id                      | name     | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
| f7a49f34-d375-430f-92b7-0377cd906178 | 8632a63c-9ffa-431c-820a-9705a99bee58 | web-list | HTTP     |            80 | True           |
| 5b9dd62f-e815-4d55-8554-c2f1a617c477 | 8691087b-e694-4daa-8475-da0a55fadb40 | HTTP     | HTTP     |            80 | True           |
+--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
(neutron) lbaas-listener-delete 5b9dd62f-e815-4d55-8554-c2f1a617c477
Invalid state PENDING_UPDATE of loadbalancer resource 
c3b48167-9924-493e-a6c5-c4df5a8fd643
Neutron server returns request_ids: ['req-83ca7c98-4745-40b2-8810-d22d1ab6ba61']


I have tried restarting all services, but it stays stuck in PENDING_UPDATE
and I'm not able to remove the load balancer or any options configured on it.

Is it possible to fix this?

Thanks in advance for helping

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648430

Title:
  lbaasv2 loadbalancer stuck in pending_update

Status in neutron:
  New

Bug description:
  I have a load balancer that is stuck in PENDING_UPDATE. I would like to
  delete it, but the delete fails with a message that it cannot be deleted
  while it is in PENDING_UPDATE.

  (neutron) lbaas-loadbalancer-list 
  
  +--------------------------------------+---------+-------------+---------------------+----------+
  | id                                   | name    | vip_address | provisioning_status | provider |
  +--------------------------------------+---------+-------------+---------------------+----------+
  | a5416558-23b5-4c15-943a-e0cc0ed1743b | yw-lb02 | 172.16.10.6 | ACTIVE              | haproxy  |
  | c3b48167-9924-493e-a6c5-c4df5a8fd643 | yw-lb01 | 172.16.10.4 | PENDING_UPDATE      | haproxy  |
  +--------------------------------------+---------+-------------+---------------------+----------+
  (neutron) lbaas-loadbalancer-delete c3b48167-9924-493e-a6c5-c4df5a8fd643
  listener 5b9dd62f-e815-4d55-8554-c2f1a617c477 is using this loadbalancer
  Neutron server returns request_ids: 
['req-4a48664c-5358-4c51-b861-3d8e9072c919']
  (neutron) lbaas-listener-list
  
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  | id                                   | default_pool_id                      | name     | protocol | protocol_port | admin_state_up |
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  | f7a49f34-d375-430f-92b7-0377cd906178 | 8632a63c-9ffa-431c-820a-9705a99bee58 | web-list | HTTP     |            80 | True           |
  | 5b9dd62f-e815-4d55-8554-c2f1a617c477 | 8691087b-e694-4daa-8475-da0a55fadb40 | HTTP     | HTTP     |            80 | True           |
  +--------------------------------------+--------------------------------------+----------+----------+---------------+----------------+
  (neutron) lbaas-listener-delete 5b9dd62f-e815-4d55-8554-c2f1a617c477
  Invalid state PENDING_UPDATE of loadbalancer resource 
c3b48167-9924-493e-a6c5-c4df5a8fd643
  Neutron server returns request_ids: 
['req-83ca7c98-4745-40b2-8810-d22d1ab6ba61']

  
  I have tried restarting all services, but it stays stuck in PENDING_UPDATE
  and I'm not able to remove the load balancer or any options configured on
  it.

[Yahoo-eng-team] [Bug 1648424] [NEW] check hw:mem_page_size before set flavor_extra

2016-12-08 Thread jichenjc
Public bug reported:

Newton code: this -1 is invalid, yet it is accepted.

[root@faydevnt ~] # nova flavor-create jitest 1000 1024 0 1
+------+--------+-----------+------+-----------+------+-------+-------------+-----------+
| ID   | Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------+--------+-----------+------+-----------+------+-------+-------------+-----------+
| 1000 | jitest | 1024      | 0    | 0         |      | 1     | 1.0         | True      |
+------+--------+-----------+------+-----------+------+-------+-------------+-----------+


[root@faydevnt ~] # nova flavor-key 1000 set hw:mem_page_size=-1
[root@faydevnt ~] 


[root@faydevnt ~] # nova flavor-show 1000
+----------------------------+----------------------------+
| Property                   | Value                      |
+----------------------------+----------------------------+
| OS-FLV-DISABLED:disabled   | False                      |
| OS-FLV-EXT-DATA:ephemeral  | 0                          |
| disk                       | 0                          |
| extra_specs                | {"hw:mem_page_size": "-1"} |
| id                         | 1000                       |
| name                       | jitest                     |
| os-flavor-access:is_public | True                       |
| ram                        | 1024                       |
| rxtx_factor                | 1.0                        |
| swap                       |                            |
| vcpus                      | 1                          |
+----------------------------+----------------------------+
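
Since the flavor API happily stores the bogus value, the fix the reporter asks for is a validity check at flavor-key set time. Below is a minimal sketch of such a check, assuming the commonly documented legal values ('small', 'large', 'any', or a positive page size with an optional unit suffix); the names are illustrative, not nova's actual code.

```python
import re

# Legal symbolic values for hw:mem_page_size (assumption based on the
# documented extra spec); anything else must be a positive size.
VALID_TOKENS = {"small", "large", "any"}
# A positive integer, optionally followed by a unit such as KB, MB, GB.
_SIZE_RE = re.compile(r"^\d+([KMGT]i?B?)?$", re.IGNORECASE)


def is_valid_mem_page_size(value):
    """Return True if value looks like a legal hw:mem_page_size."""
    v = value.strip()
    if v.lower() in VALID_TOKENS:
        return True
    return bool(_SIZE_RE.match(v))
```

With such a check in place, ``nova flavor-key 1000 set hw:mem_page_size=-1`` would be rejected instead of silently stored.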

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648424

Title:
  check hw:mem_page_size before set flavor_extra

Status in OpenStack Compute (nova):
  New

Bug description:
  Newton code: this -1 is invalid, yet it is accepted.

  [root@faydevnt ~] # nova flavor-create jitest 1000 1024 0 1
  +------+--------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID   | Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +------+--------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1000 | jitest | 1024      | 0    | 0         |      | 1     | 1.0         | True      |
  +------+--------+-----------+------+-----------+------+-------+-------------+-----------+

  
  [root@faydevnt ~] # nova flavor-key 1000 set hw:mem_page_size=-1
  [root@faydevnt ~] 


  [root@faydevnt ~] # nova flavor-show 1000
  +----------------------------+----------------------------+
  | Property                   | Value                      |
  +----------------------------+----------------------------+
  | OS-FLV-DISABLED:disabled   | False                      |
  | OS-FLV-EXT-DATA:ephemeral  | 0                          |
  | disk                       | 0                          |
  | extra_specs                | {"hw:mem_page_size": "-1"} |
  | id                         | 1000                       |
  | name                       | jitest                     |
  | os-flavor-access:is_public | True                       |
  | ram                        | 1024                       |
  | rxtx_factor                | 1.0                        |
  | swap                       |                            |
  | vcpus                      | 1                          |
  +----------------------------+----------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648417] [NEW] Failed to set admin pass

2016-12-08 Thread Eric Xie
Public bug reported:

Description
===
When I set the 'admin pass' of a server, I got an 'AttributeError'.

Steps to reproduce
==
* Upload windows image with qemu-guest-agent
* Then add metadata 'hw_qemu_guest_agent=yes' to the image
* Then boot one server A with this image
* Then use ``nova set-password A`` to change admin pass

Expected result
===
Set admin password successfully.

Actual result
=
ERROR (Conflict): Failed to set admin password on ba631d0f-bdad-4928-be5e-e52fee05f1e1 because error setting admin password (HTTP 409) (Request-ID: req-52de5986-409e-40d1-ac74-59bed6d3b797)

Environment
===
1. nova version
# rpm -qa | grep nova
openstack-nova-api-13.0.0-3.el7.noarch
openstack-nova-console-13.0.0-3.el7.noarch
python-nova-13.0.0-3.el7.noarch
openstack-nova-conductor-13.0.0-3.el7.noarch
openstack-nova-scheduler-13.0.0-3.el7.noarch
openstack-nova-novncproxy-13.0.0-3.el7.noarch
openstack-nova-common-13.0.0-3.el7.noarch
python-novaclient-3.3.1-2.el7.noarch

2. Which hypervisor did you use?
Libvirt + KVM
# rpm -qa | grep libvirt
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
libvirt-client-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
libvirt-devel-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
libvirt-docs-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
libvirt-python-1.2.18-1.el7.x86_64

# rpm -qa | grep qemu
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64
qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64
ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch
qemu-img-ev-2.3.0-31.el7.16.1.x86_64
centos-release-qemu-ev-1.0-1.el7.noarch


Logs & Configs
==
nova-compute.log
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [req-52de5986-409e-40d1-ac74-59bed6d3b797 455e4c768a414f12927dfed27657c707 bc7b1de930bf428295b69d5627513d9e - - -] [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] set_admin_password failed
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] Traceback (most recent call last):
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3301, in set_admin_password
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]     self.driver.set_admin_password(instance, new_pass)
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1815, in set_admin_password
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]     guest.set_user_password(user, new_pass)
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 387, in set_user_password
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]     self._domain.setUserPassword(user, new_pass, 0)
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 176, in __getattr__
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1]     f = getattr(self._obj, attr_name)
2016-12-08 18:42:13.146 2500 ERROR nova.compute.manager [instance: ba631d0f-bdad-4928-be5e-e52fee05f1e1] AttributeError: 'virDomain' object has no attribute 'setUserPassword'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648417

Title:
  Failed to set admin pass

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When I set the 'admin pass' of a server, I got an 'AttributeError'.

  Steps to reproduce
  ==
  * Upload windows image with qemu-guest-agent
  * Then add metadata 'hw_qemu_guest_agent=yes' to the image
  * Then boot one server A with this image
  * Then use ``nova set-password A`` to change admin pass

  Expected result
  ===
  Set admin password successfully.

  Actual result
  =
  ERROR (Conflict): Failed to set 

[Yahoo-eng-team] [Bug 1648385] [NEW] Router is limited by the number of tenant networks

2016-12-08 Thread Xiaojun Han
Public bug reported:

Creating the 36th router fails (1 tenant, 1 tenant network, and 1 router
each); the error in /var/log/neutron/server.log is:

2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource [req-e8eaa389-4fb5-486c-8fe0-d67e2a837a89 52661276f37048e5b5d219e0a57132a0 b5a8b21521c74a4097f522c35878b741 - - -] create failed
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource Traceback (most recent call last):
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in resource
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 410, in create
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     return self._create(request, body, **kwargs)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 521, in _create
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     obj = do_create(body)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 503, in do_create
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     request.context, reservation.reservation_id)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 496, in do_create
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     return obj_creator(request.context, **kwargs)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_hamode_db.py", line 417, in create_router
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     self.delete_router(context, router_dict['id'])
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_hamode_db.py", line 410, in create_router
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     router_db.tenant_id)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/l3_hamode_db.py", line 274, in _create_ha_network
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     context, creation, deletion, content)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/common_db_mixin.py", line 54, in safe_creation
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     obj = create_fn()
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/common/utils.py", line 127, in create_network
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     return core_plugin.create_network(context, {'network': net_data})
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 654, in create_network
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     result, mech_context = self._create_network_db(context, network)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 640, in _create_network_db
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource     tenant_id)
2016-12-08 07:52:10.096 33561 ERROR neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 208, in create_network_segments
2016-12-08 
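
The traceback is cut off inside ml2's `create_network_segments`. Because each HA router also creates one hidden HA network per tenant, one plausible cause (an assumption, not confirmed by the truncated log) is that the tenant-network segmentation-ID range is exhausted, so the allocation for the 36th router's HA network fails. Checking the ranges in ml2_conf.ini and the per-tenant network quota is a reasonable first step; the values below are purely illustrative:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative values, adjust
# to your deployment; widen whichever tenant-network type you use.
[ml2_type_vxlan]
vni_ranges = 1:2000

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:999
```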

[Yahoo-eng-team] [Bug 1595095] Re: Horizon tabbed view returns rendered read-only response

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/45
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=019e5490fbd8475483e9f33a74e127522426bec6
Submitter: Jenkins
Branch:master

commit 019e5490fbd8475483e9f33a74e127522426bec6
Author: Timur Sufiev 
Date:   Mon Dec 5 21:05:06 2016 +0300

Remove additional response.render() for tabs

Right now TabbedView makes an additional .render() call. Thus
get_context_data() for TabbedView returns an already rendered response
with the rendered=True flag, which makes it read-only. This blocks the
Profiler middleware from modifying the response. For example, we cannot
add any messages to a response from a tabbed view.

Simple views have no additional rendering. The extra call was added long
ago because of issues with raising redirect exceptions from tabbed
views. It looks like there are no such issues now, and raising redirects
from tabbed views is covered by unit tests. To ensure that nothing
broke, a test with a tab raising Http302 was added - it still redirects
to the url provided, as before.

Change-Id: I37076abc15008a523f37da0d09b5b041ef77844e
Closes-Bug: #1595095
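
The mechanism can be illustrated with a toy model (hypothetical names; this is not Django's or horizon's actual API, only the behaviour the commit message describes): once render() has run, late modification hooks can no longer be attached, which is exactly what blocked the Profiler middleware.

```python
class TemplateResponseModel(object):
    """Toy stand-in for a lazily-rendered response object.

    Hypothetical names; this only models the behaviour described in
    the commit message, not the real TemplateResponse API.
    """

    def __init__(self, body):
        self._body = body
        self.is_rendered = False
        self._callbacks = []  # middleware hooks run at render time

    def add_post_render_callback(self, cb):
        # Once a response is rendered, late hooks can no longer run.
        if self.is_rendered:
            raise RuntimeError("response already rendered; cannot modify")
        self._callbacks.append(cb)

    def render(self):
        for cb in self._callbacks:
            self._body = cb(self._body)
        self.is_rendered = True
        return self._body
```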


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1595095

Title:
  Horizon tabbed view returns rendered read-only response

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon tabbed view returns a rendered, read-only response. It happens because of this line: https://github.com/openstack/horizon/blob/master/horizon/tabs/views.py#L81
  For a simple view we do not make the additional render call: https://github.com/openstack/horizon/blob/master/horizon/views.py#L65

  The described tabbed-view behaviour makes it impossible to modify the
  response via middleware. For example, we cannot add messages from
  middleware, even though we can do so when handling a simple view
  request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1595095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648339] [NEW] cloud_admin in non-default domain cannot see other domains

2016-12-08 Thread Marcus Furlong
Public bug reported:

When the cloud admin is in a domain that is not domain "Default", then
the cloud admin user loses the ability to see other domains in the
Domains tab. The tab appears, yet only one domain is shown, the users
domain.

When a domain is created in horizon, it gets created successfully, but
does not show up in the list of domains, still only one domain is shown.

This only happens with horizon. On the command line, all created domains
appear when doing "openstack domain list", including those created in
horizon.

As a result, the cloud admin cannot set the domain context on other
domains in horizon, and all admin tasks for the non-admin domains must
be completed via the command line.

When the cloud admin is in the "Default" domain, everything works
correctly.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1648339

Title:
  cloud_admin in non-default domain cannot see other domains

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the cloud admin is in a domain that is not domain "Default", then
  the cloud admin user loses the ability to see other domains in the
  Domains tab. The tab appears, yet only one domain is shown: the
  user's domain.

  When a domain is created in horizon, it gets created successfully, but
  does not show up in the list of domains, still only one domain is
  shown.

  This only happens with horizon. On the command line, all created
  domains appear when doing "openstack domain list", including those
  created in horizon.

  As a result, the cloud admin cannot set the domain context on other
  domains in horizon, and all admin tasks for the non-admin domains must
  be completed via the command line.

  When the cloud admin is in the "Default" domain, everything works
  correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1648339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615919] Re: BGP: DVR fip has next_hop to snat gateway after associate first time

2016-12-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/372310
Committed: 
https://git.openstack.org/cgit/openstack/neutron-dynamic-routing/commit/?id=0980985b2f3590f60b4726a5419680a1d70f9ead
Submitter: Jenkins
Branch:master

commit 0980985b2f3590f60b4726a5419680a1d70f9ead
Author: LIU Yulong 
Date:   Mon Sep 19 11:29:04 2016 +0800

Let the bgp_plugin to query floating IP bgp next_hop

When a dvr floating IP is associated to a port, the bgp plugin
`floatingip_update_callback` will immediately send a
`start_route_advertisements` notification to the DR agent that there
is a new floating IP bgp route.
But this bgp route is not correct: its next_hop is set to the dvr
router's snat gateway IP address. Then, after `periodic_interval`
seconds, the DR agent will resync that DVR fip route with the next_hop
corrected to the FIP namespace fg-device IP address.

This patch will let the bgp_plugin to handle the floating IP bgp route
next_hop query.

Change-Id: Ic6bb7f4263c6e2da315178be2ed041eb7020c905
Closes-bug: #1615919


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615919

Title:
  BGP: DVR fip has next_hop to snat gateway after associate first time

Status in neutron:
  Fix Released

Bug description:
  ENV:
  stable/mitaka

  When a DVR floating IP is associated to a port, `floatingip_update_callback` will immediately trigger `start_route_advertisements` to notify the DR agent of that FIP bgp route.
  But this bgp route is not right: its next_hop is set to the snat gateway IP address.
  Then, after `periodic_interval` seconds, the DR agent will resync that DVR fip route with the next_hop corrected to the FIP namespace fg-device IP address.

  Reproduce:
  1. create a DVR router 1, set gateway
  2. create a network/subnet, and connected to the DVR router 1
  3. create a VM 1
  4. bind a floating IP to that VM 1
  5. in DR agent LOG, you may see the following infos:

  2016-08-23 13:08:26.301 13559 INFO bgpspeaker.api.base [req-829d21e2-98c3-49f3-9ba5-bd626aaf782e - - - - -] API method network.add called with args: {'prefix': u'172.16.10.68/32', 'next_hop': u'172.16.6.154'}
  2016-08-23 13:08:26.302 13559 INFO neutron.services.bgp.driver.ryu.driver [req-829d21e2-98c3-49f3-9ba5-bd626aaf782e - - - - -] Route cidr=172.16.10.68/32, nexthop=172.16.6.154 is advertised for BGP Speaker running for local_as=2345.
  2016-08-23 13:08:37.131 13559 INFO bgpspeaker.api.base [req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] API method network.del called with args: {'prefix': u'172.16.10.68/32'}
  2016-08-23 13:08:37.131 13559 INFO neutron.services.bgp.driver.ryu.driver [req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] Route cidr=172.16.10.68/32 is withdrawn from BGP Speaker running for local_as=2345.
  2016-08-23 13:08:37.132 13559 INFO bgpspeaker.api.base [req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] API method network.add called with args: {'prefix': u'172.16.10.68/32', 'next_hop': u'172.16.10.66'}
  2016-08-23 13:08:37.132 13559 INFO neutron.services.bgp.driver.ryu.driver [req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] Route cidr=172.16.10.68/32, nexthop=172.16.10.66 is advertised for BGP Speaker running for local_as=2345.
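
  The two network.add calls in the log show the wrong-then-right next_hop choice. The decision the fix moves into the bgp_plugin boils down to the following sketch (illustrative names, not the plugin's real code):

```python
def fip_route_next_hop(router_is_distributed, fg_device_ip, snat_gateway_ip):
    """Pick the BGP next_hop for a /32 floating-IP host route.

    Illustrative sketch: for a DVR router, FIP traffic egresses via the
    fg- device in the compute node's FIP namespace, so the route must
    point there rather than at the centralized snat gateway.
    """
    return fg_device_ip if router_is_distributed else snat_gateway_ip
```

  In the log above, 172.16.6.154 is the snat gateway address and 172.16.10.66 the fg-device address; advertising the former first is the bug.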

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647538] Re: ODL deletes QOS policy bound to a port in spite of Neutron rejecting the policy delete since it is associated to a port.

2016-12-08 Thread Poovizhi
** Project changed: networking-odl => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1647538

Title:
  ODL deletes QOS policy bound to a port in spite of Neutron rejecting
  the policy delete since it is associated to a port.

Status in neutron:
  New

Bug description:
  1. Qos Policy is bound to the port

  vishal@ubuntu:~/devstack$ neutron port-update 9f2c5d5e-7dca-4482-9dee-b2118be21d4c --qos-policy bw-limiter
  Updated port: 9f2c5d5e-7dca-4482-9dee-b2118be21d4c

  vishal@ubuntu:~/devstack$ neutron port-show 9f2c5d5e-7dca-4482-9dee-b2118be21d4c
  +-----------------------+--------------------------------------------------------------------------------+
  | Field                 | Value                                                                          |
  +-----------------------+--------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                           |
  | allowed_address_pairs |                                                                                |
  | binding:host_id       | ubuntu                                                                         |
  | binding:profile       | {}                                                                             |
  | binding:vif_details   | {"port_filter": true}                                                          |
  | binding:vif_type      | ovs                                                                            |
  | binding:vnic_type     | normal                                                                         |
  | created_at            | 2016-12-06T05:28:30Z                                                           |
  | description           |                                                                                |
  | device_id             | 0c42d9fb-462b-4ce5-9402-3273c3a4f221                                           |
  | device_owner          | compute:nova                                                                   |
  | extra_dhcp_opts       |                                                                                |
  | fixed_ips             | {"subnet_id": "83b01e0a-860f-4131-9337-d5784b58b604", "ip_address": "5.1.1.6"} |
  | id                    | 9f2c5d5e-7dca-4482-9dee-b2118be21d4c                                           |
  | mac_address           | fa:16:3e:6f:23:f1                                                              |
  | name                  |                                                                                |
  | network_id            | cedf5f96-8524-460e-85bf-112d6a6e3e92                                           |
  | port_security_enabled | True                                                                           |
  | project_id            | 8769516ed27c49df9e742a78e6b9c8d7                                               |
  | qos_policy_id         | 83068877-b9cc-4dc9-8d60-2fed4ce7be0f                                           |
  | revision_number       | 8                                                                              |
  | security_groups       | 56c31cf8-63f6-4e01-a9f4-6d90f0aa21f4                                           |
  | status                | ACTIVE                                                                         |
  | tenant_id             | 8769516ed27c49df9e742a78e6b9c8d7                                               |
  | updated_at            | 2016-12-06T05:28:33Z                                                           |
  +-----------------------+--------------------------------------------------------------------------------+

  
  2. When policy delete is tried

  vishal@ubuntu:~/devstack$ neutron qos-policy-delete 83068877-b9cc-4dc9-8d60-2fed4ce7be0f
  QoS Policy 83068877-b9cc-4dc9-8d60-2fed4ce7be0f is used by port 9f2c5d5e-7dca-4482-9dee-b2118be21d4c.
  Neutron server returns request_ids: ['req-6629b80e-601d-4f51-aa47-01a16bc3291b']
  vishal@ubuntu:~/devstack$

  
  3. Neutron qos policy and port from the database after trying to delete the qos policy

  vishal@ubuntu:~/devstack$ neutron qos-policy-show 83068877-b9cc-4dc9-8d60-2fed4ce7be0f
  +-------------+--------------------------------------+
  | Field       | Value                                |
  +-------------+--------------------------------------+
  | created_at  | 2016-12-06T05:23:50Z                 |
  | description |                                      |
  | id          | 83068877-b9cc-4dc9-8d60-2fed4ce7be0f |
  | name        | bw-limiter                           |
  |