[Yahoo-eng-team] [Bug 1594371] Re: Docs for keystone recommend deprecated memcache backend

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339310
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=b278f03a7852fbc2ac2f1ae7972038234fc7ac2f
Submitter: Jenkins
Branch: master

commit b278f03a7852fbc2ac2f1ae7972038234fc7ac2f
Author: jolie 
Date:   Fri Jul 8 10:01:13 2016 +0800

keystone recommend deprecated memcache backend

The docs recommend using
backend = keystone.cache.memcache_pool
however, this backend is deprecated in the code.

Change-Id: Ic029a8c6fd8a88cd0e73fb7b61ba8ad8625f5ee5
Closes-Bug: #1594371


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1594371

Title:
  Docs for keystone recommend deprecated memcache backend

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  At http://docs.openstack.org/developer/keystone/configuration.html#cache-configuration-section
  there is a recommendation to use

  backend = keystone.cache.memcache_pool

  however, this backend is deprecated in the code:

  WARNING oslo_log.versionutils [-] Deprecated:
  keystone.cache.memcache_pool backend is deprecated as of Mitaka in
  favor of oslo_cache.memcache_pool backend and may be removed in N.
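
  For reference, a minimal [cache] section using the non-deprecated
  backend named in the warning might look like this (a sketch only; the
  memcache_servers value is illustrative):

    [cache]
    enabled = true
    backend = oslo_cache.memcache_pool
    memcache_servers = localhost:11211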

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1594371/+subscriptions



[Yahoo-eng-team] [Bug 1563454] Re: potential user_id conflict when REMOTE_USER is set

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339165
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=82c7b8bedcc18d47722efa852ebaf5024bb9f846
Submitter: Jenkins
Branch: master

commit 82c7b8bedcc18d47722efa852ebaf5024bb9f846
Author: Richard 
Date:   Thu Jul 7 17:54:49 2016 +

Doc update on enabled external auth and federation

By default, external auth is enabled and can cause a user_id conflict
when REMOTE_USER is set, because federation uses REMOTE_USER as well.
Therefore, the docs were updated to advise users against using both
external auth and federation in the same authentication sequence.

Closes-Bug: #1563454

Change-Id: I193f78ae0ad0232471b725d5700870c349703310


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1563454

Title:
  potential user_id conflict when REMOTE_USER is set

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  For Federation, the identity is validated outside of Keystone and its
  attributes are conveyed in the request environment. One of them is
  REMOTE_USER. If this attribute is present, Keystone will
  indiscriminately invoke the "external" plugin to "authenticate"

  
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L496

  Since the default "external" plugin is DefaultDomain, it automatically
  validates the user in the "Default" domain. That is not the end of it:
  the authentication sequence continues with other plugins.

  
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L516

  For Federation, the Mapped plugin is also being invoked to validate
  the attributes in the request environment.  Now consider this
  scenario.

   1. There are two distinct users with the same username "foo", one in the
      "Default" domain while the other is in the "BAR" domain.
   2. The external Federation modules (e.g. mod_shib) set the REMOTE_USER
      attribute.
   3. A mapping maps the incoming identity to "foo" in the "BAR" domain.

  This will result in user_id conflict because the first "external"
  plugin sets the user_id for user "foo" in the "Default" domain, while
  the Mapped plugin is trying to set the user_id for "foo" in the "BAR"
  domain.

  
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L121

  One may argue that this is a "corner case" that can be avoided by

   1. configuring the external Federation modules not to set the REMOTE_USER
      attribute, or
   2. disabling the "external" auth plugin

  However, "external" auth with DefaultDomain is enabled by default. We
  really need to clearly distinguish "external" auth from mapped auth, and
  not invoke both in the same sequence. At the very least, I think this
  scenario needs to be clearly documented. Moreover, I think we should
  deprecate "external" auth plugins altogether and enhance the Mapped
  plugin to support direct scoped token requests; "external" auth is just
  another form of Mapped auth, IMO.
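
  As a hedged illustration of option 2, "external" can simply be left
  out of the enabled auth methods in keystone.conf (the method names
  shown are common defaults, not checked against any particular
  release):

    [auth]
    methods = password,token,mapped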

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1563454/+subscriptions



[Yahoo-eng-team] [Bug 1597317] Re: Launch instance by using default lvm backend volume failed

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/337081
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=f9d52850e7cd092dc736a77de366248c6dee6881
Submitter: Jenkins
Branch: master

commit f9d52850e7cd092dc736a77de366248c6dee6881
Author: Cao Shufeng 
Date:   Mon Jul 4 06:41:10 2016 -0400

Do not conjecture volume-id from iscsi_name

Currently, the iSCSI target driver assumes that the volume name always
starts with "volume-". In fact, the name can be configured [1]. This
change gets the volume id from the volume object directly.

Closes-Bug: #1597317

[1]: https://github.com/openstack/cinder/blob/9da9ebb34581818c053e1e92bf6f552e32605479/cinder/objects/volume.py#L142

Change-Id: Iaa366fbc4ddc0265255e5a4d2bb9d166a665856c
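
A minimal sketch of the failure mode and of the fix direction (variable
names are illustrative; the real code lives in
cinder/volume/targets/iscsi.py):

    # Guessing the id from the target name breaks as soon as a deployment
    # overrides the volume name template:
    iscsi_name = 'iqn.2010-10.org.openstack:myvol-1234'
    vol_id = iscsi_name.split(':volume-')[1]  # IndexError: list index out of range

    # Reading the id from the volume object avoids the guess entirely:
    vol_id = volume.id  # 'volume' is the object already in scope in the driver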


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1597317

Title:
  Launch instance by using default lvm backend volume failed

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  
  Installed OpenStack Mitaka release (openstack 2.2.0)

  Default hypervisor libvirt+KVM is used
  Default volume backend "LVM" is used

  
  Steps followed

  1. Create bootable volume by using image "cirros" - OK
  2. Launch VM instance using the volume - NOK

  /var/log/cinder/volume.log
  

  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Exception during message handling: list 
index out of range
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1441, in 
initialize_connection
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher volume, 
connector)
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 760, in 
create_export
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
volume_path)
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 195, in 
create_export
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher 
chap_auth = self._get_target_chap_auth(context, iscsi_name)
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/cinder/volume/targets/iscsi.py", line 328, in 
_get_target_chap_auth
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher vol_id 
= iscsi_name.split(':volume-')[1]
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher IndexError: 
list index out of range
  2016-06-29 07:45:22.278 24205 ERROR oslo_messaging.rpc.dispatcher
  2016-06-29 07:45:22.280 24205 ERROR oslo_messaging._drivers.common 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] Returning exception list index out of 
range to caller
  2016-06-29 07:45:22.281 24205 ERROR oslo_messaging._drivers.common 
[req-a3c3015e-353c-49d6-af28-8e97010a588b 46d26580334f4cb5b0cbfba58b926031 
14bd05fdfb86490f90f8e72848833ba7 - - -] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 
127, in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
"/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1441, 

[Yahoo-eng-team] [Bug 1600418] [NEW] Flavor name edit should trim for white spaces and compare with other flavor names

2016-07-08 Thread Vivek Agrawal
Public bug reported:

When editing the flavor name from Horizon:
1. Enter a new flavor name that is the same as one of the existing flavor names.
2. Make sure that you have entered leading and trailing spaces.
Try saving this flavor; it fails with an error.
We should ideally strip the flavor name before making any comparison.

** Affects: horizon
 Importance: Undecided
 Assignee: Vivek Agrawal (vivek-agrawal)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Vivek Agrawal (vivek-agrawal)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600418

Title:
  Flavor name edit should trim for white spaces and compare with other
  flavor names

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When editing the flavor name from Horizon:
  1. Enter a new flavor name that is the same as one of the existing flavor names.
  2. Make sure that you have entered leading and trailing spaces.
  Try saving this flavor; it fails with an error.
  We should ideally strip the flavor name before making any comparison.
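
  A hedged sketch of the intended normalization (the function and
  variable names are illustrative, not Horizon's actual handler):

    def is_duplicate_name(new_name, existing_names):
        # Strip leading/trailing whitespace before comparing, so that
        # "  m1.small " and "m1.small" count as the same name.
        new_name = new_name.strip()
        return any(new_name == name.strip() for name in existing_names)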

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600418/+subscriptions



[Yahoo-eng-team] [Bug 1532280] Fix merged to keystone (master)

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339176
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=d53db1889e17d493202743246243936af90234b9
Submitter: Jenkins
Branch: master

commit d53db1889e17d493202743246243936af90234b9
Author: Lance Bragstad 
Date:   Thu Jul 7 18:32:11 2016 +

Fix fernet token validate for disabled domains/trusts

This commit adds a check when rebuilding the authorization context of a
trust-scoped token to make sure that both the trustor and the trustee are
in enabled domains. With this patch the uuid token provider and the fernet
token provider give the same response when caching is disabled. If caching
is enabled, the fernet provider will still consider a trust-scoped token
valid even though the trustor/trustee is in a disabled domain. A subsequent
patch will fix the revocation event to make sure the token is removed from
the cache when a domain is disabled.

Change-Id: If3e941018d5c2c9bd22397e69f83b7bf92643340
Partial-Bug: 1532280


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1532280

Title:
  Fernet trust token is still valid when trustee's domain is disabled.

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When you have a Fernet trust-scoped token, and the user's domain is
  disabled, the token is still valid. This is inconsistent with the
  behavior of the UUID token provider.

  Part of the fix has already been incorporated into a patch up for
  review [0]; it was discovered by jorge_munoz in some of his testing.
  But since this is an inconsistency between token providers, there was
  a case for breaking it out into its own bug and its own fix.

  Steps to reproduce:
  - Enable the Fernet token provider in the keystone.conf file
  - Create domain A
  - Create a user in domain A
  - Create a project in domain A
  - Grant the user in domain A a role on the project in domain A
  - Create domain B
  - Create a user in domain B
  - As the user in domain A, create a trust with the user in domain B on the 
project in domain A
  - As the user in domain B, get a project-scoped token using the trust
  - As the admin, disable domain B (which is the trustee's domain)
  - As the user in domain B, validate the trust-scoped token

  This validation should return 404 Not Found, but instead it returns
  200 OK. We have a patch in review that exposes the behavior for the
  Fernet provider [1].

  [0] https://review.openstack.org/#/c/253273/27
  [1] https://review.openstack.org/#/c/265455/4
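
  A hedged sketch of the expected behavior, using hypothetical helper
  names rather than keystone's real test API:

    token = get_trust_scoped_token(user_in_domain_b, trust)  # hypothetical helper
    disable_domain(domain_b)                                 # hypothetical helper
    status = validate_token(token)                           # hypothetical helper
    assert status == 404  # Fernet currently returns 200; UUID correctly returns 404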

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1532280/+subscriptions



[Yahoo-eng-team] [Bug 1600398] Re: Conflict: An object with that identifier already exists on gate-tempest-dsvm-neutron-linuxbridge

2016-07-08 Thread Kevin Benton
** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600398

Title:
  Conflict: An object with that identifier already exists on gate-
  tempest-dsvm-neutron-linuxbridge

Status in tempest:
  Confirmed

Bug description:
  
http://logs.openstack.org/07/337807/1/gate/gate-tempest-dsvm-neutron-linuxbridge/6f92335/logs/
  
http://logs.openstack.org/50/332750/4/gate/gate-tempest-dsvm-neutron-linuxbridge/3d93d72/logs/

  
File "tempest/api/network/test_floating_ips.py", line 203, in 
test_create_update_floatingip_with_port_multiple_ip_address
  fixed_ips=fixed_ips)
File "tempest/lib/services/network/ports_client.py", line 22, in create_port
  return self.create_resource(uri, post_data)
File "tempest/lib/services/network/base.py", line 60, in create_resource
  resp, body = self.post(req_uri, req_post_data)
File "tempest/lib/common/rest_client.py", line 270, in post
  return self.request('POST', url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 777, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'detail': u'', u'message': u'Unable to complete operation for 
network 58786c5e-247b-4b7b-8906-34d41a0e5d9c. The IP address 10.100.0.2 is in 
use.', u'type': u'IpAddressInUse'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1600398/+subscriptions



[Yahoo-eng-team] [Bug 1593772] Re: Remove the deprecated config "quota_items"

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331591
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ff73054e8637f29737cc65dc84ef0f9aea9d5abd
Submitter: Jenkins
Branch: master

commit ff73054e8637f29737cc65dc84ef0f9aea9d5abd
Author: Hong Hui Xiao 
Date:   Mon Jun 20 09:46:21 2016 +

Remove the deprecated config "quota_items"

It was deprecated at [1], and resource quotas are now registered when
the APIRouter is initialized, so the config option is no longer needed.

[1] https://review.openstack.org/#/c/181593

DocImpact: All references of 'quota_items' configuration option
and its description should be removed from the docs.

UpgradeImpact: Remove 'quota_items' configuration option from
neutron.conf file.

Change-Id: I0698772a49f51d7c65f1f4cf1ea7660cd07113a0
Closes-Bug: #1593772


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593772

Title:
  Remove the deprecated config "quota_items"

Status in neutron:
  Fix Released

Bug description:
  The quota_items configuration option was deprecated in Liberty [1][2];
  it should be removed in Newton.

  [1] https://bugs.launchpad.net/neutron/+bug/1453322
  [2] https://review.openstack.org/#/c/181593/
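
  For illustration, the removed option lived in the [quotas] section of
  neutron.conf; the default value shown here is from memory and may
  differ:

    [quotas]
    # Deprecated since Liberty and removed in Newton; quota resources
    # are now registered automatically when the APIRouter initializes.
    # quota_items = network,subnet,port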

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593772/+subscriptions



[Yahoo-eng-team] [Bug 1600405] [NEW] Remove the deprecated config "quota_items"

2016-07-08 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/331591
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit ff73054e8637f29737cc65dc84ef0f9aea9d5abd
Author: Hong Hui Xiao 
Date:   Mon Jun 20 09:46:21 2016 +

Remove the deprecated config "quota_items"

It was deprecated at [1], and resource quotas are now registered when
the APIRouter is initialized, so the config option is no longer needed.

[1] https://review.openstack.org/#/c/181593

DocImpact: All references of 'quota_items' configuration option
and its description should be removed from the docs.

UpgradeImpact: Remove 'quota_items' configuration option from
neutron.conf file.

Change-Id: I0698772a49f51d7c65f1f4cf1ea7660cd07113a0
Closes-Bug: #1593772

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600405

Title:
  Remove the deprecated config "quota_items"

Status in neutron:
  New

Bug description:
  https://review.openstack.org/331591
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ff73054e8637f29737cc65dc84ef0f9aea9d5abd
  Author: Hong Hui Xiao 
  Date:   Mon Jun 20 09:46:21 2016 +

  Remove the deprecated config "quota_items"
  
It was deprecated at [1], and resource quotas are now registered when
the APIRouter is initialized, so the config option is no longer needed.
  
  [1] https://review.openstack.org/#/c/181593
  
  DocImpact: All references of 'quota_items' configuration option
  and its description should be removed from the docs.
  
  UpgradeImpact: Remove 'quota_items' configuration option from
  neutron.conf file.
  
  Change-Id: I0698772a49f51d7c65f1f4cf1ea7660cd07113a0
  Closes-Bug: #1593772

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600405/+subscriptions



[Yahoo-eng-team] [Bug 1563101] Fix merged to keystone (master)

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339112
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=5d707d510daa7ae0784fffd2cf45a9ee8906821f
Submitter: Jenkins
Branch: master

commit 5d707d510daa7ae0784fffd2cf45a9ee8906821f
Author: Ronald De Rose 
Date:   Thu Jul 7 16:19:28 2016 +

Move the auth plugins abstract base class out of core

This patch moves the auth plugins' abstract base class out of core and
into plugins/base.py.

This removes dependencies where backend code references code in the
core. The reasoning is that the core should know about the backend
interface, but the backends should not know anything about the core
(separation of concerns). Part of the risk here is the potential for
circular dependencies.

Partial-Bug: #1563101

Change-Id: I4413ef01523d02c30af97e306069229252cb4971


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1563101

Title:
  Remove backend dependency on core

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Remove dependencies where backend code references code in the core. For
  example, the identity backends import identity:
  https://github.com/openstack/keystone/blob/stable/mitaka/keystone/identity/backends/sql.py#L24

  The reasoning is that the core should know about the backend
  interface, but the backends should not know anything about the core
  (separation of concerns).

  Part of the risk here is the potential for circular dependencies:
  backend code could reference code in the core, as well as other
  higher-level modules inside identity, thus creating a circular
  dependency.
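
  A hedged sketch of the layering goal (the module path follows the
  commit message; the class body is illustrative, not keystone's exact
  code):

    # keystone/auth/plugins/base.py -- owned by the core
    import abc

    class AuthMethodHandler(abc.ABC):
        @abc.abstractmethod
        def authenticate(self, context, auth_payload, auth_context):
            """Authenticate and populate auth_context; raise on failure."""

    # Backends import only this base module, never keystone.auth.core,
    # so no backend -> core -> backend import cycle can form.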

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1563101/+subscriptions



[Yahoo-eng-team] [Bug 1599983] Re: Python 3.5 gate exposes issue in webob.response status code type

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339214
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=88de82e130e25aeff2cf2c0e98159ebce3199d28
Submitter: Jenkins
Branch: master

commit 88de82e130e25aeff2cf2c0e98159ebce3199d28
Author: Eric Brown 
Date:   Thu Jul 7 13:00:43 2016 -0700

Ensure status code is always passed as int

There is some inconsistency in calls to the wsgi render_response
function. Sometimes the status tuple uses an int for the http
status code and other times a string. The Python 3.5 gate job
discovered the problem.

This patch normalizes all calls to use an int.

Change-Id: I136b01f755ff99dfba244e79068fdaae614b2091
Closes-Bug: #1599983


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1599983

Title:
  Python 3.5 gate exposes issue in webob.response status code type

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Unit test
  
keystone.tests.unit.test_wsgi.ApplicationTest.test_render_response_custom_status
  fails with the following error:

  
  ft274.18: 
keystone.tests.unit.test_wsgi.ApplicationTest.test_render_response_custom_status_StringException:
 pythonlogging:'': {{{
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  Adding cache-proxy 'oslo_cache.testing.CacheIsolatingProxy' to backend.
  }}}

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/webob/response.py",
 line 271, in _status__set
  status_code = int(value.split()[0])
  ValueError: invalid literal for int() with base 10: 
'HTTPStatus.NOT_IMPLEMENTED'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/tests/unit/test_wsgi.py",
 line 110, in test_render_response_custom_status
  status=(http_client.NOT_IMPLEMENTED, 'Not Implemented'))
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/keystone/common/wsgi.py", 
line 770, in render_response
  headerlist=headers)
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/webob/response.py",
 line 107, in __init__
  self.status = status
File 
"/home/jenkins/workspace/gate-keystone-python35-db-nv/.tox/py35/lib/python3.5/site-packages/webob/response.py",
 line 273, in _status__set
  raise ValueError('Invalid status code, integer required.')
  ValueError: Invalid status code, integer required.

  
  In some places we pass the status code as a str and in others as an int.

  
  
http://logs.openstack.org/42/339142/2/check/gate-keystone-python35-db-nv/847ed7d/testr_results.html.gz
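
  A hedged illustration of the Python 3.5 pitfall: the http_client
  status constants became HTTPStatus enum members, so "%s" formatting no
  longer yields the bare integer:

    from six.moves import http_client

    # Python 2: '501 Not Implemented'
    # Python 3.5: 'HTTPStatus.NOT_IMPLEMENTED Not Implemented' -> webob rejects it
    status = '%s %s' % (http_client.NOT_IMPLEMENTED, 'Not Implemented')

    # Coercing to int first works the same on both versions:
    status = '%d %s' % (int(http_client.NOT_IMPLEMENTED), 'Not Implemented')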

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1599983/+subscriptions



[Yahoo-eng-team] [Bug 1600396] [NEW] AssertionError: 0 == 0 : No IPv4 addresses found on gate-tempest-dsvm-neutron-full

2016-07-08 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/60/337060/2/gate/gate-tempest-dsvm-neutron-full/6beb9d6/logs/
http://logs.openstack.org/64/336264/3/gate/gate-tempest-dsvm-neutron-full/c8b4462/logs/
http://logs.openstack.org/64/336264/3/gate/gate-tempest-dsvm-neutron-full/7768042/logs/
http://logs.openstack.org/41/334641/3/gate/gate-tempest-dsvm-neutron-full/8dbdf00/logs/

For instance:

Traceback (most recent call last):
  File "tempest/test.py", line 106, in wrapper
return f(self, *func_args, **func_kwargs)
  File "tempest/scenario/test_network_advanced_server_ops.py", line 135, in 
test_server_connectivity_pause_unpause
server, keypair, floating_ip = self._setup_network_and_servers()
  File "tempest/scenario/test_network_advanced_server_ops.py", line 68, in 
_setup_network_and_servers
floating_ip = self.create_floating_ip(server, public_network_id)
  File "tempest/scenario/manager.py", line 868, in create_floating_ip
port_id, ip4 = self._get_server_port_id_and_ip4(thing)
  File "tempest/scenario/manager.py", line 847, in _get_server_port_id_and_ip4
"No IPv4 addresses found in: %s" % ports)
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 845, in assertNotEqual
raise self.failureException(msg)
AssertionError: 0 == 0 : No IPv4 addresses found in: [{u'device_owner': 
u'compute:None', u'binding:vif_type': u'ovs', u'status': u'BUILD', u'name': 
u'', u'binding:vif_details': {u'ovs_hybrid_plug': True, u'port_filter': True}, 
u'mac_address': u'fa:16:3e:4b:38:5d', u'updated_at': u'2016-07-05T12:35:22', 
u'binding:host_id': u'ubuntu-trusty-ovh-bhs1-2218654', u'device_id': 
u'd988986e-64bd-45c6-9863-5d56efb0fdf6', u'security_groups': 
[u'9701d305-ee33-4ed7-a598-dfe4829e6862'], u'port_security_enabled': True, 
u'extra_dhcp_opts': [], u'binding:profile': {}, u'description': u'', u'id': 
u'7be0c6ef-a535-400e-8a75-4d747b1fb494', u'admin_state_up': True, u'tenant_id': 
u'6752ff05a55a436f9ed0b38fe0ff37de', u'binding:vnic_type': u'normal', 
u'created_at': u'2016-07-05T12:35:17', u'network_id': 
u'5fee9c7d-2d3c-4281-81d7-e5d9240edf89', u'fixed_ips': [{u'ip_address': 
u'10.100.0.9', u'subnet_id': u'8dc61ef8-39fa-44b3-bed6-7f9b78746c31'}], 
u'allowed_address_pairs': []}]

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Tags added: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600396

Title:
  AssertionError: 0 == 0 : No IPv4 addresses found on gate-tempest-dsvm-
  neutron-full

Status in neutron:
  Confirmed

Bug description:
  
http://logs.openstack.org/60/337060/2/gate/gate-tempest-dsvm-neutron-full/6beb9d6/logs/
  
http://logs.openstack.org/64/336264/3/gate/gate-tempest-dsvm-neutron-full/c8b4462/logs/
  
http://logs.openstack.org/64/336264/3/gate/gate-tempest-dsvm-neutron-full/7768042/logs/
  
http://logs.openstack.org/41/334641/3/gate/gate-tempest-dsvm-neutron-full/8dbdf00/logs/

  For instance:

  Traceback (most recent call last):
File "tempest/test.py", line 106, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/scenario/test_network_advanced_server_ops.py", line 135, in 
test_server_connectivity_pause_unpause
  server, keypair, floating_ip = self._setup_network_and_servers()
File "tempest/scenario/test_network_advanced_server_ops.py", line 68, in 
_setup_network_and_servers
  floating_ip = self.create_floating_ip(server, public_network_id)
File "tempest/scenario/manager.py", line 868, in create_floating_ip
  port_id, ip4 = self._get_server_port_id_and_ip4(thing)
File "tempest/scenario/manager.py", line 847, in _get_server_port_id_and_ip4
  "No IPv4 addresses found in: %s" % ports)
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py",
 line 845, in assertNotEqual
  raise self.failureException(msg)
  AssertionError: 0 == 0 : No IPv4 addresses found in: [{u'device_owner': 
u'compute:None', u'binding:vif_type': u'ovs', u'status': u'BUILD', u'name': 
u'', u'binding:vif_details': {u'ovs_hybrid_plug': True, u'port_filter': True}, 
u'mac_address': u'fa:16:3e:4b:38:5d', u'updated_at': u'2016-07-05T12:35:22', 
u'binding:host_id': u'ubuntu-trusty-ovh-bhs1-2218654', u'device_id': 
u'd988986e-64bd-45c6-9863-5d56efb0fdf6', u'security_groups': 
[u'9701d305-ee33-4ed7-a598-dfe4829e6862'], u'port_security_enabled': True, 
u'extra_dhcp_opts': [], u'binding:profile': {}, u'description': u'', u'id': 
u'7be0c6ef-a535-400e-8a75-4d747b1fb494', u'admin_state_up': True, u'tenant_id': 
u'6752ff05a55a436f9ed0b38fe0ff37de', u'binding:vnic_type': u'normal', 
u'created_at': u'2016-07-05T12:35:17', u'network_id': 
u'5fee9c7d-2d3c-4281-81d7-e5d9240edf89', u'fixed_ips': [{u'

[Yahoo-eng-team] [Bug 1600398] [NEW] Conflict: An object with that identifier already exists on gate-tempest-dsvm-neutron-linuxbridge

2016-07-08 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/07/337807/1/gate/gate-tempest-dsvm-neutron-linuxbridge/6f92335/logs/
http://logs.openstack.org/50/332750/4/gate/gate-tempest-dsvm-neutron-linuxbridge/3d93d72/logs/


  File "tempest/api/network/test_floating_ips.py", line 203, in 
test_create_update_floatingip_with_port_multiple_ip_address
fixed_ips=fixed_ips)
  File "tempest/lib/services/network/ports_client.py", line 22, in create_port
return self.create_resource(uri, post_data)
  File "tempest/lib/services/network/base.py", line 60, in create_resource
resp, body = self.post(req_uri, req_post_data)
  File "tempest/lib/common/rest_client.py", line 270, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 664, in request
resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 777, in _error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already exists
Details: {u'detail': u'', u'message': u'Unable to complete operation for 
network 58786c5e-247b-4b7b-8906-34d41a0e5d9c. The IP address 10.100.0.2 is in 
use.', u'type': u'IpAddressInUse'}

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure linuxbridge

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600398

Title:
  Conflict: An object with that identifier already exists on gate-
  tempest-dsvm-neutron-linuxbridge

Status in neutron:
  Confirmed

Bug description:
  
http://logs.openstack.org/07/337807/1/gate/gate-tempest-dsvm-neutron-linuxbridge/6f92335/logs/
  
http://logs.openstack.org/50/332750/4/gate/gate-tempest-dsvm-neutron-linuxbridge/3d93d72/logs/

  
File "tempest/api/network/test_floating_ips.py", line 203, in 
test_create_update_floatingip_with_port_multiple_ip_address
  fixed_ips=fixed_ips)
File "tempest/lib/services/network/ports_client.py", line 22, in create_port
  return self.create_resource(uri, post_data)
File "tempest/lib/services/network/base.py", line 60, in create_resource
  resp, body = self.post(req_uri, req_post_data)
File "tempest/lib/common/rest_client.py", line 270, in post
  return self.request('POST', url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 777, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'detail': u'', u'message': u'Unable to complete operation for 
network 58786c5e-247b-4b7b-8906-34d41a0e5d9c. The IP address 10.100.0.2 is in 
use.', u'type': u'IpAddressInUse'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600398/+subscriptions



[Yahoo-eng-team] [Bug 1600395] [NEW] AttributeError: 'object' object has no attribute 'payload'

2016-07-08 Thread Eric Brown
Public bug reported:

When running Rally against our Mitaka deployment we found the following
traceback in the logs.

2016-07-06 06:44:14.631 18323 DEBUG keystone.middleware.auth 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. _build_auth_context 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
2016-07-06 06:44:14.633 18323 INFO keystone.common.wsgi 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] POST 
http://10.111.109.81:5000/v2.0/tokens
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] 'object' object has no 
attribute 'payload'
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 249, in 
__call__
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi result = 
method(context, **params)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/oslo_log/versionutils.py", line 165, in 
wrapped
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 130, in 
authenticate
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi user_ref['id'], 
tenant_ref['id'])
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in 
decorate
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi should_cache_fn)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in 
get_or_create
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi async_creator) as 
value:
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi return 
self._enter()
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 91, in _enter
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi value = value_fn()
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 610, in 
get_value
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi value = 
self.backend.get(key)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/cache/_context_cache.py", 
line 98, in get
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi 
self._set_local_cache(key, value)
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/cache/_context_cache.py", 
line 66, in _set_local_cache
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi serialize = 
{'payload': value.payload, 'metadata': value.metadata}
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi AttributeError: 
'object' object has no attribute 'payload'
2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi 
2016-07-06 06:44:14.732 18323 INFO eventlet.wsgi.server 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] 
10.111.109.191,10.111.109.89 - - [06/Jul/2016 06:44:14] "POST /v2.0/tokens 
HTTP/1.1" 500 400 0.102036

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1600395

Title:
  AttributeError: 'object' object has no attribute 'payload'

Status in OpenStack Identity (keystone):
  New

Bug description:
  When running Rally against our Mitaka deployment we found the
  following traceback in the logs.

  2016-07-06 06:44:14.631 18323 DEBUG keystone.middleware.auth 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. _build_auth_context 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
  2016-07-06 06:44:14.633 18323 INFO keystone.common.wsgi 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8b34ce - - - - -] POST 
http://10.111.109.81:5000/v2.0/tokens
  2016-07-06 06:44:14.729 18323 ERROR keystone.common.wsgi 
[req-9b8bc44b-e7fa-4a6f-92fd-7a443d8

[Yahoo-eng-team] [Bug 1600394] [NEW] ValueError: too many values to unpack

2016-07-08 Thread Eric Brown
Public bug reported:

Found the following error in the keystone logs of our Mitaka deployment.

2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi 
[req-b256411d-ba72-4076-b644-ef08a5400ab2 66725b90caea4963b1b4f91f90ab1dee 
ab149973d5b84459bd3ece44074ec2aa - default default] too many values to unpack
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 249, in 
__call__
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi result = 
method(context, **params)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 156, in 
inner
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi 
context['subject_token_id']))
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 208, in 
validate_token
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi token = 
self._validate_token(unique_id)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in 
decorate
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi should_cache_fn)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in 
get_or_create
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi async_creator) as 
value:
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi return 
self._enter()
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in _enter
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi generated = 
self._enter_create(createdtime)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, in 
_enter_create
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi created = 
self.creator()
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 625, in 
gen_value
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi created_value = 
creator()
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1049, in 
creator
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi return fn(*arg, 
**kw)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 298, in 
_validate_token
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi return 
self.driver.validate_non_persistent_token(token_id)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
772, in validate_non_persistent_token
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi 
audit_info=audit_ids)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
526, in get_token_data
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi project_id, trust)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
469, in _populate_service_catalog
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi user_id, 
project_id)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in 
decorate
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi should_cache_fn)
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in 
get_or_create
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi async_creator) as 
value:
2016-07-06 05:14:30.311 18314 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-07-06 05:14:30.311 18314 ERROR keys

[Yahoo-eng-team] [Bug 1600393] [NEW] AttributeError: 'list' object has no attribute 'items'

2016-07-08 Thread Eric Brown
Public bug reported:

During a Rally test of our deployment using Mitaka keystone we observed
the following traceback in the logs. It seems that the v3 catalog is
returned as a list whereas the v2.0 catalog is a dict. But the
format_catalog() function always expects a dict.
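
A minimal reproduction of the type mismatch described above (the
catalog shapes are simplified):

    v2_catalog = {'RegionOne': {'identity': {'publicURL': '...'}}}  # dict keyed by region
    v3_catalog = [{'type': 'identity', 'endpoints': []}]            # list of services

    for region, region_ref in v2_catalog.items():  # fine for the v2.0 shape
        pass
    v3_catalog.items()  # AttributeError: 'list' object has no attribute 'items'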


2016-07-06 03:00:55.171 18314 INFO eventlet.wsgi.server 
[req-5ebbe11b-5efb-4606-a46c-58f100a8a550 5716d29278b8438a95f718ea926e4e7a 
954d6157b061441197b228ac7b4dd6ee - default default] 
10.111.109.191,10.111.109.89 - - [06/Jul/2016 03:00:55] "DELETE 
/v2.0/tenants/37b1a3bad0e54dc2a9824ac51ba02a9f HTTP/1.1" 204 212 0.070017
2016-07-06 03:00:55.779 18323 DEBUG keystone.middleware.auth 
[req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. _build_auth_context 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
2016-07-06 03:00:55.781 18323 INFO keystone.common.wsgi 
[req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] POST 
http://10.111.109.81:5000/v2.0/tokens
2016-07-06 03:00:55.879 18323 INFO keystone.token.providers.fernet.utils 
[req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] Loaded 2 encryption keys 
(max_active_keys=3) from: /etc/keystone/fernet-keys/
2016-07-06 03:00:55.882 18323 INFO eventlet.wsgi.server 
[req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] 
10.111.109.191,10.111.109.89 - - [06/Jul/2016 03:00:55] "POST /v2.0/tokens 
HTTP/1.1" 200 3585 0.102872
2016-07-06 03:00:57.450 18323 DEBUG keystone.middleware.auth 
[req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. _build_auth_context 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
2016-07-06 03:00:57.452 18323 INFO keystone.common.wsgi 
[req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] POST 
http://10.111.109.81:5000/v2.0/tokens
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi 
[req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] 'list' object has no 
attribute 'items'
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 249, in 
__call__
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi result = 
method(context, **params)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/oslo_log/versionutils.py", line 165, in 
wrapped
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi return 
func_or_cls(*args, **kwargs)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 144, in 
authenticate
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi auth_token_data, 
roles_ref=roles_ref, catalog_ref=catalog_ref)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 360, in 
issue_v2_token
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi token_ref, 
roles_ref, catalog_ref)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/core.py", 
line 38, in issue_v2_token
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi *args, **kwargs)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
570, in issue_v2_token
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi token_ref, 
roles_ref, catalog_ref, trust_ref)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
163, in format_token
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi catalog_ref)
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 
214, in format_catalog
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi for region, 
region_ref in catalog_ref.items():
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi AttributeError: 'list' 
object has no attribute 'items'
2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi 
2016-07-06 03:00:57.570 18323 INFO eventlet.wsgi.server 
[req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] 
10.111.109.191,10.111.109.89 - - [06/Jul/2016 03:00:57] "POST /v2.0/tokens 
HTTP/1.1" 500 400 0.120986
2016-07-06 03:00:58.181 18317 INFO keystone.token.providers.fernet.utils 
[req-7e2abf43-3cee-4335-a

[Yahoo-eng-team] [Bug 1035279] Re: instance hangs at grub prompt after reboot followed by euca-reboot-instances

2016-07-08 Thread Mathew Hodson
** Project changed: nova => ubuntu-on-ec2

** Changed in: ubuntu-on-ec2
   Status: Invalid => Fix Released

** Changed in: Ubuntu Oneiric
   Importance: Undecided => High

** Package changed: ubuntu => grub2 (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1035279

Title:
  instance hangs at grub prompt after reboot followed by euca-reboot-
  instances

Status in Ubuntu on EC2:
  Fix Released
Status in grub2 package in Ubuntu:
  Fix Released
Status in grub2 source package in Oneiric:
  Fix Released
Status in grub2 source package in Precise:
  Fix Released

Bug description:
  This issue has been reproduced on Diablo and Essex so far.

  When doing "sudo reboot" in an instance shortly followed by "euca-
  reboot-instances {instanceID}", once the instaces comes back up, it is
  no longer accessible interactively. The kernel is still executing
  commands but it stucked in the early boot phases

  If only "sudo reboot" from within the instance or "euca-reboot-
  instances" is used, network connectivity comes back as expected.

  This can be reproduced by using the following steps :

  # On the compute node
  sudo -s
  source creds/novarc
  # start instance
  instanceid=`euca-run-instances -k novaadmin -t m1.tiny ami-0006 | grep 
INSTANCE | awk '{print $2}'`
  # wait for instance to start
  sleep 60
  # reboot the instance with reboot command
  ssh -i creds/novaadmin_.key ubuntu@`euca-describe-instances $instanceid | 
grep INSTANCE | awk '{print $4}'` 'sudo reboot'
  # wait 20 seconds
  sleep 20
  # reboot instance with euca-reboot-instance command
  euca-reboot-instances $instanceid

  Now euca-describe-instances will show the instance as running, but it
  is unreachable via ssh. The kvm process for the instance is still
  visible and running.

  Related bugs:
    * bug 872244: grub2 recordfail logic prevents headless system from 
rebooting after power outage
* bug 669481:  Timeout should not be -1 if $recordfail

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-on-ec2/+bug/1035279/+subscriptions



[Yahoo-eng-team] [Bug 1596445] Re: ‘ACL’ appears in the spec document of 'rbac-qos-policies', actually should be RBAC

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/336755
Committed: https://git.openstack.org/cgit/openstack/neutron-specs/commit/?id=1062c22f25fe0a893326857e018e002c3e99c95a
Submitter: Jenkins
Branch: master

commit 1062c22f25fe0a893326857e018e002c3e99c95a
Author: xiewj 
Date:   Fri Jul 1 21:32:12 2016 -0400

Update specs use RBAC instead of ACL in rbac-qos-policies.rst

In the spec document of 'rbac-qos-policies' for the Mitaka release,
RBAC should be used instead of ACL, and 'object_type' should be added
to the 'QosPolicyRBAC Table structure'.

Change-Id: Ice34596579e7b7961ca9261e38fe4e5a93852000
Closes-Bug: #1596445


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596445

Title:
  ‘ACL’ appears in the spec document of 'rbac-qos-policies', actually
  should be RBAC

Status in neutron:
  Fix Released

Bug description:
  ‘ACL’ appears in the spec document of 'rbac-qos-policies',actually
  should be RBAC

  In the spec document of 'rbac-qos-policies' of Mitaka release, shown as 
follows:
  
https://github.com/openstack/neutron-specs/blob/master/specs/mitaka/rbac-qos-policies.rst

  In the QosPolicyRBAC Table structure of 'Data Model Impact' section,appears 
'ACL',actually should be RBAC
  1)Attribute 'id' is described as 'id of ACL entry',I think it should be 'id 
of RBAC entry'.
  2)Attribute 'tenant_id'is described as 'owner of ACL entry',I think it should 
be 'owner of RBAC entry'.
  3)Attribute 'object_id' is described as 'object affected by ACL',I think it 
should be 'the id of the RBAC object'.
  4)The table doesn't contain attribute 'type'

  In the API Tests section,it is the same,ACL should be replaced by RBAC
  Excercise basic CRUD of ACL entries.
  Make sure qos policies are revealed and hidden as ACL entries are changed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596445/+subscriptions



[Yahoo-eng-team] [Bug 1385295] Re: use_syslog=True does not log to syslog via /dev/log anymore

2016-07-08 Thread Mathew Hodson
** Package changed: cinder (Ubuntu) => nova (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385295

Title:
  use_syslog=True does not log to syslog via /dev/log anymore

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in oslo.log:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  In Progress

Bug description:
  python-oslo.log SRU:
  [Impact]

   * Nova services are not able to write logs to syslog

  [Test Case]

   * 1. Set use_syslog to True in nova.conf/cinder.conf
 2. stop rsyslog service
 3. restart nova/cinder services
 4. restart rsyslog service
 5. Log is not written to syslog after rsyslog is brought up.
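
   A quick way to run through these steps on a systemd-based host is shown
   below (service and log file names vary per distribution and are
   assumptions here):

     $ sudo systemctl stop rsyslog
     $ sudo systemctl restart nova-compute
     $ sudo systemctl start rsyslog
     $ grep nova-compute /var/log/syslog   # entries since the restart are missing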

  [Regression Potential]

   * none

  
  Reproduced on:
  https://github.com/openstack-dev/devstack 
514c82030cf04da742d16582a23cc64962fdbda1
  /opt/stack/keystone/keystone.egg-info/PKG-INFO:Version: 2015.1.dev95.g20173b1
  /opt/stack/heat/heat.egg-info/PKG-INFO:Version: 2015.1.dev213.g8354c98
  /opt/stack/glance/glance.egg-info/PKG-INFO:Version: 2015.1.dev88.g6bedcea
  /opt/stack/cinder/cinder.egg-info/PKG-INFO:Version: 2015.1.dev110.gc105259

  How to reproduce:
  Set
   use_syslog=True
   syslog_log_facility=LOG_SYSLOG
  in the OpenStack config files and restart the processes inside their screens

  Expected:
  OpenStack logs are written to syslog as well

  Actual:
  Nothing goes to syslog

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2016-07-08 Thread Mathew Hodson
** Changed in: python-oslo.log (Ubuntu Vivid)
   Status: Confirmed => Won't Fix

** No longer affects: python-oslo.log (Ubuntu Trusty)

** No longer affects: python-oslo.log (Ubuntu Utopic)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

Status in Ubuntu Cloud Archive:
  Confirmed
Status in Ubuntu Cloud Archive icehouse series:
  Fix Released
Status in Ubuntu Cloud Archive juno series:
  In Progress
Status in Ubuntu Cloud Archive kilo series:
  Confirmed
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.log:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released
Status in nova source package in Utopic:
  Won't Fix
Status in nova source package in Vivid:
  Invalid
Status in python-oslo.log source package in Vivid:
  Won't Fix
Status in nova source package in Wily:
  Invalid
Status in python-oslo.log source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * If Nova services are configured to log to syslog (use_syslog=True) they
 will currently fail with ECONNREFUSED if they cannot connect to syslog.
 This patch adds support for allowing nova to retry connecting a
 configurable number of times before printing an error message and
 continuing with startup.

  [Test Case]

   * Configure nova with use_syslog=True in nova.conf, stop the rsyslog
 service and restart nova services. Check the upstart nova logs to see
 retries occurring, then start rsyslog and observe the connection succeed
 and nova-compute start up.

  [Regression Potential]

   * None
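
  As an illustration only, a minimal sketch of the retry behaviour this
  patch adds (the function, option names and defaults are assumptions, not
  the actual oslo.log code):

    import logging.handlers
    import socket
    import time

    def open_syslog_handler(address='/dev/log', retries=3, interval=2):
        """Connect to syslog, retrying instead of failing service startup."""
        for attempt in range(retries + 1):
            try:
                return logging.handlers.SysLogHandler(address=address)
            except socket.error:
                if attempt < retries:
                    time.sleep(interval)
        # retries exhausted: report and let the service start without syslog
        print('could not connect to syslog at %s, continuing' % address)
        return None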

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600383] [NEW] More information of the selected project on create network modal should be provided

2016-07-08 Thread Ying Zuo
Public bug reported:

If a cloud admin tries to create a network from the Admin/Networks panel, he
or she may see projects with the same name from different domains
listed in the project dropdown on the create network modal.

Currently there's no way to tell the difference between the projects
with the same name. Thus we should provide more information about the
selected project so that the user can confirm it is the desired one.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yingzuo)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yingzuo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600383

Title:
  More information of the selected project on create network modal
  should be provided

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If a cloud admin tries to create a network from the Admin/Networks panel,
  he or she may see projects with the same name from different domains
  listed in the project dropdown on the create network modal.

  Currently there's no way to tell the difference between the projects
  with the same name. Thus we should provide more information about the
  selected project so that the user can confirm it is the desired one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600366] [NEW] Federated users cannot use heat

2016-07-08 Thread Elvin Tubillara
Public bug reported:

Federated users cannot create heat stacks.

To reproduce:
Enable heat,
Sign into horizon using federation
Create a heat stack (errors out here)

My guess:
This is caused by federated users being unable to perform trust delegation
because they do not have any real roles associated with them (although in
other cases they somehow get the same roles as the group in the mapping,
and the local user created after login is not part of the group).

Workaround:
1. list the users and find the federated user uuid that was created locally
on the service provider after signing in
2. assign the heat_stack_owner role to that federated user uuid
3. it should work now.

It would be nice if it worked out of the box without having to do the
workaround.
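
For anyone hitting this, the workaround maps to roughly the following
python-openstackclient commands (domain, project and uuid are
placeholders):

$ openstack user list --domain Default
$ openstack role add --project demo --user <federated-user-uuid> heat_stack_owner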

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1600366

Title:
  Federated users cannot use heat

Status in OpenStack Identity (keystone):
  New

Bug description:
  Federated users cannot create heat stacks.

  To reproduce:
  Enable heat,
  Sign into horizon using federation
  Create a heat stack (errors out here)

  My guess:
  This is caused by federated users being unable to perform trust delegation
  because they do not have any real roles associated with them (although in
  other cases they somehow get the same roles as the group in the mapping,
  and the local user created after login is not part of the group).

  Workaround:
  1. list the users and find the federated user uuid that was created
  locally on the service provider after signing in
  2. assign the heat_stack_owner role to that federated user uuid
  3. it should work now.

  It would be nice if it worked out of the box without having to do the
  workaround.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1600366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596573] Re: warlock 1.3.0 breaks a few gates

2016-07-08 Thread Travis Tripp
Duplicate of https://bugs.launchpad.net/searchlight/+bug/1596598, so
removing searchlight from that one.


** Also affects: searchlight
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1596573

Title:
  warlock 1.3.0 breaks a few gates

Status in OpenStack Dashboard (Horizon):
  New
Status in python-glanceclient:
  New
Status in python-openstackclient:
  New
Status in OpenStack Search (Searchlight):
  New

Bug description:
  Our functional test
  functional.tests.image.v2.test_image.ImageTests.test_image_unset is
  now failing:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "functional/tests/image/v2/test_image.py", line 71, in test_image_unset
  self.openstack('image set --tag 01 ' + self.NAME)
File "functional/common/test.py", line 53, in openstack
  return execute('openstack ' + cmd, fail_ok=fail_ok)
File "functional/common/test.py", line 42, in execute result_err)
  tempest.lib.exceptions.CommandFailed: Command 'openstack image set --tag 01 
7fbbb8c5da634c54aea88473e4e3c16b' returned non-zero exit status 1.
  stdout:

  stderr:
  WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
  400 Bad Request
  Invalid JSON pointer for this resource: '/tags/0'
  (HTTP 400)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1596573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600357] [NEW] Instance filter does not support multidomain

2016-07-08 Thread Juan Pablo lopez Gutierrez
Public bug reported:

By default, in the admin->instance panel, when you filter by project name it
looks up the tenant ids of the given project name; if it finds more than one,
it only returns the instances of the current domain.

Steps to reproduce:
- Setup multi-domain environment.
- Create a domain named 'domain_a'
- Create a domain named 'domain_b'
- Create a project named 'a' in domain 'domain_a'
- Create a project named 'a' in domain 'domain_b'
- Create an instance in project 'a' of domain 'domain_a'
- Create an instance in project 'a' of domain 'domain_b'
- Filter by project 'a'

Result: It will not show both created instances.
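
A rough sketch of the lookup the panel would need (client construction is
omitted and the snippet is an assumption following python-keystoneclient
and python-novaclient conventions, not the actual Horizon code):

    # collect every project with the given name, across all domains
    matching = [p for p in keystone.projects.list() if p.name == 'a']
    tenant_ids = {p.id for p in matching}

    # keep instances belonging to any of those tenants, instead of only
    # the match from the current domain
    servers = [s for s in nova.servers.list(search_opts={'all_tenants': 1})
               if s.tenant_id in tenant_ids]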

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  By default in admin->instance panel when you filter by project name it
  looks for the tenants ids of the given project name, in case that it
  finds more than one it only return the instances of the current domain.
  
  Steps to reproduce:
  - Setup multi-domain environment.
- - Create a domain_a
- - Create a domain_b
- - Create a project named a in domain_a
- - Create a project named a in domain_b
- - Create an instance in project a of domain_a
- - Create an instance in project a of domain_b
+ - Create a domain named 'domain_a'
+ - Create a domain named 'domain_b'
+ - Create a project named 'a' in domain 'domain_a'
+ - Create a project named 'a' in domain 'domain_b'
+ - Create an instance in project 'a' of domain 'domain_a'
+ - Create an instance in project 'a' of domain 'domain_b'
  - Filter by project 'a'
  
  ResultIt will not show both created instances.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600357

Title:
  Instance filter does not support multidomain

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  By default, in the admin->instance panel, when you filter by project name
  it looks up the tenant ids of the given project name; if it finds more
  than one, it only returns the instances of the current domain.

  Steps to reproduce:
  - Setup multi-domain environment.
  - Create a domain named 'domain_a'
  - Create a domain named 'domain_b'
  - Create a project named 'a' in domain 'domain_a'
  - Create a project named 'a' in domain 'domain_b'
  - Create an instance in project 'a' of domain 'domain_a'
  - Create an instance in project 'a' of domain 'domain_b'
  - Filter by project 'a'

  Result: It will not show both created instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485529] Re: The API for getting console connection info works only for RDP

2016-07-08 Thread Anusha Unnam
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485529

Title:
  The API for getting console connection info works only for RDP

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There is an API (os-console-auth-tokens) which returns the connection
  info corresponding to a given console token.  However this API
  works only for RDP consoles:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/console_auth_tokens.py#L49

  We need the same API for MKS consoles as well.  Also I don't see any
  reason why we should check the console type at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416269] Re: boot vm failed with --block-device set as attach volume failed during boot

2016-07-08 Thread Sujitha
*** This bug is a duplicate of bug 1433609 ***
https://bugs.launchpad.net/bugs/1433609

This was the kilo fix:

https://review.openstack.org/#/c/174060/ for bug 1433609. I'm not able
to reproduce this bug on stable/kilo.

I'm going to duplicate this bug against bug 1433609 - if it's not the
same issue, please re-open and explain why.

** This bug has been marked a duplicate of bug 1433609
   Not adding a image block device mapping causes some valid boot requests to 
fail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416269

Title:
  boot vm failed with --block-device set as attach volume failed during
  boot

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When attaching an existing volume while booting a VM with the following cmd:
  nova boot --flavor small --image c7e8738b-c2c6-4365-a305-040bfbd1b514 --nic net-id=abfe3157-d23c-4d15-a7ff-80429a7d9b27 --block-device source=volume,dest=volume,bootindex=1,shutdown=remove,id=ca383135-d619-43c2-8826-95ae4d475581 test11

  It failed in the "block device mapping" phase; the error from nova is:
  2015-01-30 01:59:14.030 28957 ERROR nova.compute.manager [-] [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] Instance failed block device setup
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] Traceback (most recent call last):
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1856, in 
_prep_block_device
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] do_check_attach=do_check_attach) +
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 407, in 
attach_block_devices
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] map(_log_and_attach, 
block_device_mapping)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 405, in 
_log_and_attach
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] bdm.attach(*attach_args, 
**attach_kwargs)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 48, in 
wrapped
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] ret_val = method(obj, context, *args, 
**kwargs)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 272, in 
attach
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] self['mount_device'], mode=mode)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 213, in wrapper
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] res = method(self, ctx, volume_id, 
*args, **kwargs)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 359, in attach
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] mountpoint, mode=mode)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 326, in 
attach
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] 'mode': mode})
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 311, in 
_action
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return self.api.client.post(url, 
body=body)
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c]   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 91, in post
  2015-01-30 01:59:14.030 28957 TRACE nova.compute.manager [instance: 
5456c257-9dda-4ce3-b16d-112ac55e498c] return self._cs_request(url, 

[Yahoo-eng-team] [Bug 1600352] [NEW] Launch Instance from NG Images fails on first attempt

2016-07-08 Thread Matt Borland
Public bug reported:

Consistently, Launch Instance from the Angular Images panel fails on
first attempt.

This is because magic-search relies on elements being present that
aren't always initialized right away.  It's OK, the lifecycle will catch
up, but at first we need to exclude rendering the magic-search when it's
not ready.

You can recreate this by enabling Angular Images panel:

enabled via:
./openstack_dashboard/enabled/_1051_project_ng_images_panel.py

Go to the panel, then with the JS console open, try to launch an
instance from an image in its item actions.  Notice a) how there are
errors in the console and b) how there is Angular crud on many of the
steps using magic-search.

Apply the fix, and see how (a) and (b) are no longer present.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- Consistently, Launch Instance from NG Images fails on first attempt.
+ Consistently, Launch Instance from the Angnular Images panel fails on
+ first attempt.
  
  This is because magic-search relies on elements being present that
  aren't always initialized right away.  It's OK, the lifecycle will catch
  up, but at first we need to exclude rendering the magic-search when it's
  not ready.
+ 
+ You can recreate this by enabling Angular Images panel:
+ 
+ enabled via:
+ ./openstack_dashboard/enabled/_1051_project_ng_images_panel.py
+ 
+ Go to the panel, then with the JS console open, try to launch an
+ instance from an image in its item actions.  Notice a) how there are
+ errors in the console and b) how there is Angular crud on many of the
+ steps using magic-search.

** Description changed:

  Consistently, Launch Instance from the Angnular Images panel fails on
  first attempt.
  
  This is because magic-search relies on elements being present that
  aren't always initialized right away.  It's OK, the lifecycle will catch
  up, but at first we need to exclude rendering the magic-search when it's
  not ready.
  
  You can recreate this by enabling Angular Images panel:
  
  enabled via:
  ./openstack_dashboard/enabled/_1051_project_ng_images_panel.py
  
  Go to the panel, then with the JS console open, try to launch an
  instance from an image in its item actions.  Notice a) how there are
  errors in the console and b) how there is Angular crud on many of the
  steps using magic-search.
+ 
+ Apply the fix, and see how (a) and (b) are no longer present.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600352

Title:
  Launch Instance from NG Images fails on first attempt

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Consistently, Launch Instance from the Angular Images panel fails on
  first attempt.

  This is because magic-search relies on elements being present that
  aren't always initialized right away.  It's OK, the lifecycle will
  catch up, but at first we need to exclude rendering the magic-search
  when it's not ready.

  You can recreate this by enabling Angular Images panel:

  enabled via:
  ./openstack_dashboard/enabled/_1051_project_ng_images_panel.py

  Go to the panel, then with the JS console open, try to launch an
  instance from an image in its item actions.  Notice a) how there are
  errors in the console and b) how there is Angular crud on many of the
  steps using magic-search.

  Apply the fix, and see how (a) and (b) are no longer present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600344] [NEW] l3 code can leave orphaned ports

2016-07-08 Thread Kevin Benton
Public bug reported:

The L3 code creates ports and then RouterPort records for those ports in
a separate transaction. If the server encounters an exception when
creating the RouterPort records (e.g. lost connection to database,
router was concurrently deleted, etc), the port will remain but will not
have a RouterPort so it won't be able to be deleted.
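
A simplified, self-contained sketch of the failure window (the models are
stand-ins, not the actual neutron schema); doing both inserts inside one
transaction would close it:

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = sa.Column(sa.Integer, primary_key=True)

    class RouterPort(Base):
        __tablename__ = 'routerports'
        port_id = sa.Column(sa.Integer, sa.ForeignKey('ports.id'),
                            primary_key=True)

    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = orm.Session(bind=engine)

    # transaction 1: the port row is committed on its own
    port = Port()
    session.add(port)
    session.commit()

    # an exception here (lost DB connection, concurrent router delete)
    # would leave the port with no RouterPort, so it could never be deleted

    # transaction 2: the RouterPort record is created separately
    session.add(RouterPort(port_id=port.id))
    session.commit()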

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600344

Title:
  l3 code can leave orphaned ports

Status in neutron:
  New

Bug description:
  The L3 code creates ports and then RouterPort records for those ports
  in a separate transaction. If the server encounters an exception when
  creating the RouterPort records (e.g. lost connection to database,
  router was concurrently deleted, etc), the port will remain but will
  not have a RouterPort so it won't be able to be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600329] [NEW] Instance Size (flavor) column is sortable when it should not

2016-07-08 Thread Eddie Ramirez
Public bug reported:

If you go to Project->Instances you'll notice that sorting by Size
(flavor) column is not allowed. There's a reason why that behavior was
disabled here https://bugs.launchpad.net/horizon/+bug/1518893 and patch
merged here https://review.openstack.org/#/c/258407/. The patch only
affected the table in project/instances but not admin/instances.

How to reproduce:
1. Go to Admin->Instances
2. Sort by Size (allowed)
3. Go to Project->Instances
4. Sort by Size (not allowed)

Sort by Size in Admin->Instances should be disabled.

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Instance Size (flavor) column has sort enabled when should not
+ Instance Size (flavor) column is sortable when it should not

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600329

Title:
  Instance Size (flavor) column is sortable when it should not

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If you go to Project->Instances you'll notice that sorting by Size
  (flavor) column is not allowed. There's a reason why that behavior was
  disabled here https://bugs.launchpad.net/horizon/+bug/1518893 and
  patch merged here https://review.openstack.org/#/c/258407/. The patch
  only affected the table in project/instances but not admin/instances.

  How to reproduce:
  1. Go to Admin->Instances
  2. Sort by Size (allowed)
  3. Go to Project->Instances
  4. Sort by Size (not allowed)

  Sort by Size in Admin->Instances should be disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600326] [NEW] neutron-lbaas health monitor timeout and delay values interpreted as milliseconds

2016-07-08 Thread Dustin Lundquist
Public bug reported:

The timeout and delay values on the health monitor objects in Neutron
LBaaS are purportedly in units of seconds, but the numeric value is
passed all the way down to the HAProxy configuration[1] file (in
both the HAProxy namespace driver and Octavia) where it is interpreted
in milliseconds:

* 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-timeout%20check
* https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-inter

Due to this unit mismatch, a user may configure a pool with a reasonable
10 second timeout, and the service may appear to function normally until
even a small load causes the backend servers to exceed a 10 millisecond
timeout and then they are removed from the pool.

A timeout value of less than one second is useful in some settings, such as
monitoring a pool of backend servers serving static content, but the
database field stores this value as an integer.

1: https://github.com/openstack/neutron-lbaas/blob/b322615e4869eb42ed7888a3492eae4a52f3b4db/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_proxies.j2#L72
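
For illustration, a unit-safe rendering helper (function and field names
are assumptions; HAProxy also accepts explicit time-unit suffixes, so the
API's seconds can be passed through as e.g. '10s'):

    def haproxy_check_options(delay_s, timeout_s):
        """Render health-check options without silently changing units.

        HAProxy reads bare numbers for 'inter' and 'timeout check' as
        milliseconds, so either multiply the API's seconds by 1000 or
        append an explicit 's' suffix as done here.
        """
        return ['timeout check %ds' % timeout_s,
                'inter %ds' % delay_s]

    print(haproxy_check_options(5, 10))
    # ['timeout check 10s', 'inter 5s']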

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: octavia
 Importance: Undecided
 Status: New


** Tags: lbaas

** Description changed:

  The timeout and delay values on the health monitor objects in Neutron
  LBaaS are purportedly in units of seconds, but the numeric value is
- passed all the the way down to the HAProxy configuration[1] file where
- it is interpreted in milliseconds:
+ passed all the the way down to the HAProxy configuration[1] file (in
+ both the HAProxy namespace driver and Octavia) where it is interpreted
+ in milliseconds:
  
  * 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-timeout%20check
  * https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-inter
  
  Due to this unit mismatch, a user may configure a pool with a reasonable
  10 second timeout, and the service may appear to function normally until
  even a small load causes the backend servers to exceed a 10 millisecond
  timeout and then they are removed from the pool.
  
  A timeout value less than one second is useful some settings, such as
  monitoring a pool of backend servers serving static content, let the
  database field stores this value as an integer.
  
- 
- 
- 1: 
https://github.com/openstack/neutron-lbaas/blob/b322615e4869eb42ed7888a3492eae4a52f3b4db/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_proxies.j2#L72
+ 1: https://github.com/openstack/neutron-
+ 
lbaas/blob/b322615e4869eb42ed7888a3492eae4a52f3b4db/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_proxies.j2#L72

** Also affects: octavia
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600326

Title:
  neutron-lbaas health monitor timeout and delay values interpreted as
  milliseconds

Status in neutron:
  New
Status in octavia:
  New

Bug description:
  The timeout and delay values on the health monitor objects in Neutron
  LBaaS are purportedly in units of seconds, but the numeric value is
  passed all the way down to the HAProxy configuration[1] file (in
  both the HAProxy namespace driver and Octavia) where it is interpreted
  in milliseconds:

  * 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-timeout%20check
  * https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-inter

  Due to this unit mismatch, a user may configure a pool with a
  reasonable 10 second timeout, and the service may appear to function
  normally until even a small load causes the backend servers to exceed
  a 10 millisecond timeout and then they are removed from the pool.

  A timeout value of less than one second is useful in some settings, such
  as monitoring a pool of backend servers serving static content, but the
  database field stores this value as an integer.

  1: https://github.com/openstack/neutron-lbaas/blob/b322615e4869eb42ed7888a3492eae4a52f3b4db/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_proxies.j2#L72

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600306] [NEW] Update metadata (images) CIM namespace metadefs don't work with glance v1

2016-07-08 Thread Travis Tripp
Public bug reported:

The Glance metadefs for CIM do not seem to work for the delete case with
images using the update metadata widget.

http:///admin/metadata_defs/CIM::ProcessorAllocationSettingData/detail

The metadata widget works fine with them on flavors, but not images.  I
suspect that it has something to do with the glance v1 API and colons
(:) in some of the values. Maybe this will go away with glance v2?  In
either case, it needs to be resolved.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600306

Title:
  Update metadata (images) CIM namespace metadefs don't work with glance
  v1

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Glance metadefs for CIM do not seem to work for the delete case
  with images using the update metadata widget.

  
http:///admin/metadata_defs/CIM::ProcessorAllocationSettingData/detail

  The metadata widget works fine with them on flavors, but not images.
  I suspect that it has something to do with the glance v1 API and
  colons (:) in some of the values. Maybe this will go away with glance
  v2?  In either case, it needs to be resolved.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600304] [NEW] _update_usage_from_migrations() can end up processing stale migrations

2016-07-08 Thread Chris Friesen
Public bug reported:

I recently found a bug in Mitaka, and it appears to be still present in
master.

I was testing a separate patch by doing resizes, and bugs in my code had
resulted in a number of incomplete resizes involving compute-1.  I then
did a resize from compute-0 to compute-0, and saw compute-1's resource
usage go up when it ran the resource audit.

This got me curious, so I went digging and discovered a gap in the current 
resource audit logic.  The problem arises if:

1) You have one or more stale migrations which didn't complete
properly that involve the current compute node.

2) The instance from the uncompleted migration is currently doing a
resize/migration that does not involve the current compute node.

When this happens, _update_usage_from_migrations() will be passed in the stale 
migration, and since the instance is in fact in a resize state, the current 
compute node will erroneously account for the instance.  (Even though the 
instance isn't doing anything involving the current compute node.)

The fix is to check that the instance migration ID matches the ID of the 
migration being analyzed.  This will work because in the case of the stale 
migration we will have hit the error case in _pair_instances_to_migrations(), 
and so the instance will be lazy-loaded from the DB, ensuring that its 
migration ID is up-to-date.
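
A sketch of the guard described above (attribute and helper names are
assumptions, not the actual nova code):

    def iter_live_migrations(migrations, instances_by_uuid):
        """Yield only migrations still tied to their instance."""
        for migration in migrations:
            instance = instances_by_uuid.get(migration.instance_uuid)
            # the instance is lazy-loaded fresh from the DB in the error
            # path of _pair_instances_to_migrations(), so its migration id
            # is current; a mismatch means this migration record is stale
            if instance is None or instance.migration_id != migration.id:
                continue
            yield migration, instance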

** Affects: nova
 Importance: Undecided
 Assignee: Chris Friesen (cbf123)
 Status: In Progress


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600304

Title:
  _update_usage_from_migrations() can end up processing stale migrations

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I recently found a bug in Mitaka, and it appears to be still present
  in master.

  I was testing a separate patch by doing resizes, and bugs in my code
  had resulted in a number of incomplete resizes involving compute-1.  I
  then did a resize from compute-0 to compute-0, and saw compute-1's
  resource usage go up when it ran the resource audit.

  This got me curious, so I went digging and discovered a gap in the current 
resource audit logic.  The problem arises if:
  
  1) You have one or more stale migrations which didn't complete
  properly that involve the current compute node.
  
  2) The instance from the uncompleted migration is currently doing a
  resize/migration that does not involve the current compute node.
  
  When this happens, _update_usage_from_migrations() will be passed in the 
stale migration, and since the instance is in fact in a resize state, the 
current compute node will erroneously account for the instance.  (Even though 
the instance isn't doing anything involving the current compute node.)
  
  The fix is to check that the instance migration ID matches the ID of the 
migration being analyzed.  This will work because in the case of the stale 
migration we will have hit the error case in _pair_instances_to_migrations(), 
and so the instance will be lazy-loaded from the DB, ensuring that its 
migration ID is up-to-date.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1600304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600281] Re: ImportError: No module named api

2016-07-08 Thread Alfredo Moralejo
Note that this review in neutron passed the gate jobs because the
upper-constraints (u-c) entry for neutron-lib in stable/mitaka is 0.0.2,
while the latest actual version of neutron-lib in stable/mitaka at
https://github.com/openstack/neutron-lib is 0.0.1, so this is an error in
u-c that must be fixed as well.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600281

Title:
  ImportError: No module named api

Status in neutron:
  New
Status in tripleo:
  Triaged

Bug description:
  The mitaka ci job is currently failing during the undercloud install
  with the following error

  
Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_endpoint[regionOne/ceilometer::metering]/ensure:
 created^[[0m
  
Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_endpoint[regionOne/neutron::network]/ensure:
 created^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: No handlers 
could be found for logger "oslo_config.cfg"^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/bin/neutron-db-manage", line 10, in ^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 749, in 
main^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val 
|= bool(CONF.command.func(config, CONF.command.name))^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 223, in 
do_upgrade^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
run_sanity_checks(config, revision)^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 731, in 
run_sanity_checks^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script_dir.run_env()^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in 
run_env^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
load_module_py^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 25, in ^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from 
neutron.db.migration.models import head  # noqa^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/models/head.py", line 
28, in ^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from 
neutron.db import bgp_db  # noqa^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/bgp_db.py", line 26, in 
^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: from 
neutron_lib.api import validators^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: ImportError: No 
module named api^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Failed to call refresh: 
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugin.ini upgrade head returned 1 instead of one of [0]^[[0m
  Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: neutron-db-manage 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini 
upgrade head returned 1 instead of one of [0]^[[0m
  
Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]/ensure:
 created^[[0m

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338001
Committed: 
https://git.openstack.org/cgit/openstack/solum/commit/?id=adc1b9ddcc77d7f8bc1c05c830f7b0c4a0805b8a
Submitter: Jenkins
Branch:master

commit adc1b9ddcc77d7f8bc1c05c830f7b0c4a0805b8a
Author: yuyafei 
Date:   Wed Jul 6 15:32:06 2016 +0800

Fix argument order for assertEqual to (expected, observed)

assertEqual expects that the arguments provided to it should be
(expected, observed). If a particular order is kept as a convention,
then it helps to provide a cleaner message to the developer if Unit
Tests fail. The following patch fixes this issue.

TrivialFix
Closes-Bug: #1259292

Change-Id: I608b07f857d9fa2d401cab35fff6bdf2defa6d55


** Changed in: solum
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Astara:
  Fix Released
Status in Barbican:
  In Progress
Status in Blazar:
  New
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in daisycloud-core:
  New
Status in Designate:
  Fix Released
Status in Freezer:
  In Progress
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Higgins:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-calico:
  New
Status in networking-infoblox:
  In Progress
Status in networking-l2gw:
  In Progress
Status in networking-sfc:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  In Progress
Status in PBR:
  New
Status in pycadf:
  New
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Rally:
  New
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in sqlalchemy-migrate:
  New
Status in SWIFT:
  New
Status in tacker:
  In Progress
Status in tempest:
  Invalid
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
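
  As a quick illustration of why the order matters for the failure message
  (the function under test here is hypothetical):

      import unittest

      def compute_port_count():
          return 2  # stand-in for the code under test

      class ExampleTest(unittest.TestCase):
          def test_port_count(self):
              # assertEqual(expected, observed): on failure the message
              # reads "3 != 2", correctly presenting 3 as the expectation;
              # swapping the arguments would blame the wrong value
              self.assertEqual(3, compute_port_count())

      if __name__ == '__main__':
          unittest.main()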

To manage notifications about this bug go to:
https://bugs.launchpad.net/astara/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497272] Re: L3 HA: Unstable rescheduling time for keepalived v1.2.7

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338129
Committed: 
https://git.openstack.org/cgit/openstack/openstack-ansible-os_neutron/commit/?id=0b8721141f9b526ba4902f5cfc53f05c2fc0758e
Submitter: Jenkins
Branch:master

commit 0b8721141f9b526ba4902f5cfc53f05c2fc0758e
Author: Jean-Philippe Evrard 
Date:   Wed Jul 6 10:10:52 2016 +0100

Use UCA for non-OVS neutron

This commit refactors tasks to allow the use of UCA for Linux Bridge.
It also changes default behavior: now every neutron install will
make use of Ubuntu Cloud Archive, unless mentioned.

Closes-Bug: 1497272
Closes-Bug: 1433172

Change-Id: I4373f544eb178720f33795a71adae925a8b8cb03
Signed-off-by: Jean-Philippe Evrard 


** Changed in: openstack-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time for keepalived v1.2.7

Status in neutron:
  Triaged
Status in openstack-ansible:
  Fix Released
Status in openstack-manuals:
  Triaged

Bug description:
  I have tested the behavior of L3 HA on an environment with 3 controllers
  and 1 compute node (Kilo) with this simple scenario:
  1) ping a vm by floating ip
  2) disable the master l3-agent (the one whose ha_state is active)
  3) wait for pings to continue and another agent to become active
  4) check the number of packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be
  scheduled on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one ha router.

  It is expected that fewer packets will be lost when
  max_l3_agents_per_router=3(0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433172] Re: L3 HA routers master state flapping between nodes after router updates or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338129
Committed: 
https://git.openstack.org/cgit/openstack/openstack-ansible-os_neutron/commit/?id=0b8721141f9b526ba4902f5cfc53f05c2fc0758e
Submitter: Jenkins
Branch:master

commit 0b8721141f9b526ba4902f5cfc53f05c2fc0758e
Author: Jean-Philippe Evrard 
Date:   Wed Jul 6 10:10:52 2016 +0100

Use UCA for non-OVS neutron

This commit refactors tasks to allow the use of UCA for Linux Bridge.
It also changes default behavior: now every neutron install will
make use of Ubuntu Cloud Archive, unless mentioned.

Closes-Bug: 1497272
Closes-Bug: 1433172

Change-Id: I4373f544eb178720f33795a71adae925a8b8cb03
Signed-off-by: Jean-Philippe Evrard 


** Changed in: openstack-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433172

Title:
  L3 HA routers master state flapping between nodes after router updates
  or failovers when using 1.2.14 or 1.2.15 (-1.2.15-6)

Status in neutron:
  Triaged
Status in openstack-ansible:
  Fix Released

Bug description:
  keepalived 1.2.14 introduced a regression when running it in no-preempt mode. 
More details here in a thread I started on the keepalived-devel list:
  http://sourceforge.net/p/keepalived/mailman/message/33604497/

  A fix was backported to 1.2.15-6, and is present in 1.2.16.

  Current status (Updated on the 30th of April, 2015):
  Fedora 20, 21 and 22 have 1.2.16.
  CentOS and RHEL are on 1.2.13

  Ubuntu is using 1.2.10 or older.
  Debian is using 1.2.13.

  In summary, as long as you're not using 1.2.14 or 1.2.15 (Excluding
  1.2.15-6), you're OK, which should be the case if you're using the
  latest keepalived packaged for your distro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579604] Re: project delete returns 501 NotImplemented with templated catalog

2016-07-08 Thread Lance Bragstad
** Also affects: keystone/mitaka
   Importance: Undecided
   Status: New

** Changed in: keystone/mitaka
   Importance: Undecided => High

** Changed in: keystone/mitaka
   Status: New => Confirmed

** Changed in: keystone/mitaka
 Assignee: (unassigned) => Sam Morrison (sorrison)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1579604

Title:
  project delete returns 501 NotImplemented with templated catalog

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Confirmed

Bug description:
  Have upgraded to Mitaka and am getting a 501 when deleting a project.
  This happens in both the v2 and v3 APIs. The project actually deletes.

  Am using the stable/mitaka branch and the sql backend


  
  $ keystone tenant-create --name deleteme

  +-+--+
  |   Property  |  Value   |
  +-+--+
  | description |  |
  |   enabled   |   True   |
  |  id | 5fafe2512fb3404ead999c30a23d0107 |
  | name| deleteme |
  +-+--+

  
  $ keystone tenant-delete 5fafe2512fb3404ead999c30a23d0107

  The action you have requested has not been implemented. (HTTP 501)
  (Request-ID: req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc)

  
  $ keystone tenant-get 5fafe2512fb3404ead999c30a23d0107

  No tenant with a name or ID of '5fafe2512fb3404ead999c30a23d0107'
  exists.



  In logs:

  2016-05-09 12:06:40.265 16723 WARNING keystone.common.wsgi 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] The action you have 
requested has not been implemented.
  2016-05-09 12:06:40.269 16723 INFO eventlet.wsgi.server 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] 128.250.116.173 - - 
[09/May/2016 12:06:40] "DELETE /v2.0/tenants/5fafe2512fb3404ead999c30a23d0107 
HTTP/1.1" 501 354 0.223312

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1579604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270332] Re: cold migration fails in VMware driver

2016-07-08 Thread Markus Zoeller (markus_z)
Looks like this will be driven by the blueprint [1].

[1] https://blueprints.launchpad.net/nova/+spec/vmware-live-migration

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Medium => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270332

Title:
  cold migration fails in VMware driver

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  With two compute nodes (on different hosts) configured to two
  different clusters in the same vCenter Server i.e :-

  
  nova migrate  fails to migrate a server with the following error :-

  2014-01-17 16:00:21.336 ERROR nova.openstack.common.rpc.amqp 
[req-0c587eb7-3a23-4790-b23d-f4ad005b5fe7 admin admin] Exception during message 
handling
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp **args)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 90, in wrapped
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp payload)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 73, in wrapped
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 244, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp pass
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 230, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 295, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 272, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
six.reraise(self.type_, self.value, self.tb)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 259, in decorated_function
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3163, in resize_instance
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
block_device_info)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 406, in 
migrate_disk_and_power_off
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp dest, flavor)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1182, in 
migrate_disk_and_power_off
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp 
self._session._wait_for_task(instance['uuid'], vm_clone_task)
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 857, in _wait_for_task
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp ret_val = 
done.wait()
  2014-01-17 16:00:21.336 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lo

[Yahoo-eng-team] [Bug 1561056] Re: cinder volume driver's detach() causes TypeError exception on v1 cinder client

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/296543
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a45f5dd7021a15064ae50d07755be9e2bfc22ae9
Submitter: Jenkins
Branch:master

commit a45f5dd7021a15064ae50d07755be9e2bfc22ae9
Author: Corey Wright 
Date:   Wed Mar 23 10:07:59 2016 -0500

cinder: accommodate v1 cinder client in detach call

Call Cinder client's detach() with attachment_uuid only if the client
is v2.

Cinder client v2 supports passing volume_id and optionally
attachment_id to its volume manager's detach() method, but v1 does
not, only accepting volume_id.  Change I3cdc4992 indiscriminately
passes both volume_id and attachment_id to the Cinder client
regardless of its version, prompting with v1:

TypeError: detach() takes exactly 2 arguments (3 given)

Change-Id: I2e8b5947521d659e930141b0b8e6a6353e9163bd
Closes-Bug: 1561056
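
A minimal sketch of the version guard the commit describes (the client
`version` attribute check is an assumption for illustration, not the
exact nova diff):

    def detach_volume(client, volume_id, attachment_id=None):
        # cinderclient v2 volumes.detach() accepts an optional attachment
        # id; v1 accepts only the volume, hence the TypeError above.
        if attachment_id and getattr(client, 'version', '2') != '1':
            client.volumes.detach(volume_id, attachment_id)
        else:
            client.volumes.detach(volume_id)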


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561056

Title:
  cinder volume driver's detach() causes TypeError exception on v1
  cinder client

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  Nova version: git master branch's HEAD (as of today)
  Expected behavior: cinderclient v1 detach() called with accepted argument
  Actual behavior: cinderclient v1 detach() called with too many arguments

  Change I3cdc4992 indiscriminately passes both volume_id and
  attachment_id to the Cinder client regardless of its version even
  though Cinder client v2 supports passing volume_id and optionally
  attachment_id to its volume manager's detach() method, but v1 does
  not, only accepting volume_id.

  Calling Cinder client v1 detach() with both volume_id and
  attachment_id results in "TypeError: detach() takes exactly 2
  arguments (3 given)"

  Full traceback and proposed bug fix to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531473] Re: Move graphics and serial console check to can_live_migrate_source/dest

2016-07-08 Thread Markus Zoeller (markus_z)
Already done, and doing so introduced bug 1595962.

The report is also more of a personal todo item than a flaw in the
behavior of nova that could affect operators.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531473

Title:
  Move graphics and serial console check to can_live_migrate_source/dest

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  _check_graphics_addresses_can_live_migrate(listen_addrs) and
  _verify_serial_console_is_disabled() should be moved to the
  can_live_migrate_source/dest methods to reduce extra operations in
  the pre_live_migration and roll_back calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600268] [NEW] Upgrading from Liberty to Mitaka erased passwords from SQL backend

2016-07-08 Thread David Stanek
Public bug reported:

This bug was reported in IRC on July 7, 2016 by jmlowe. Creating this
for tracking purposes.

IRC log: http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
/%23openstack-keystone.2016-07-07.log.html

Environment:
 - 3 total controllers (2 RDO and 2 Ubuntu) - all installed from vendor packages
 - 3 node Galera cluster
 - installed Liberty circa-December 2015
 - upgraded to recent Mitaka version

jmlowe found that after the migration LDAP users could log in fine, but
SQL users could not. Upon further investigation the password hashes were
no longer in the database (the new `password` table was empty).

The lack of password records in the Passwords table and the fact that
the password column was removed from the User table leads me to believe
that migration 091 is to blame.

So far I have not been able to reproduce the issue.
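
For reference, a hypothetical sketch of the password-preserving step such
a migration needs (sqlalchemy-migrate style; the table and column names
are assumptions based on the description above, not keystone's actual
migration 091):

    import sqlalchemy as sa

    def upgrade(migrate_engine):
        meta = sa.MetaData(bind=migrate_engine)
        user = sa.Table('user', meta, autoload=True)
        password = sa.Table('password', meta, autoload=True)
        # Copy existing hashes *before* dropping the old column; if this
        # step is skipped or matches no rows, the new table ends up empty,
        # which is the symptom reported here.
        for row in migrate_engine.execute(
                sa.select([user.c.id, user.c.password])):
            if row.password is not None:
                migrate_engine.execute(password.insert().values(
                    local_user_id=row.id, password=row.password))
        user.c.password.drop()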

** Affects: keystone
 Importance: Undecided
 Status: Incomplete


** Tags: upgrade

** Changed in: keystone
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1600268

Title:
  Upgrading from Liberty to Mitaka erased passwords from SQL backend

Status in OpenStack Identity (keystone):
  Incomplete

Bug description:
  This bug was reported in IRC on July 7, 2016 by jmlowe. Creating this
  for tracking purposes.

  IRC log: http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
  /%23openstack-keystone.2016-07-07.log.html

  Environment:
   - 3 total controllers (2 RDO and 2 Ubuntu) - all installed from vendor packages
   - 3 node Galera cluster
   - installed Liberty circa-December 2015
   - upgraded to recent Mitaka version

  jmlowe found that after the migration LDAP users could log in fine, but
  SQL users could not. Upon further investigation the password hashes
  were no longer in the database (the new `password` table was empty).

  The lack of password records in the Passwords table and the fact that
  the password column was removed from the User table leads me to
  believe that migration 091 is to blame.

  So far I have not been able to reproduce the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1600268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600251] [NEW] live migration does not honor server group policy

2016-07-08 Thread Paul Carlton
Public bug reported:

What happens is that the live migration task uses the resource
specification created when the instance was created and passes this to
the scheduler to find a new host, marking its current host as excluded.
This resource spec object includes the instance's group object, which
contains a list of instances in the group. The problem is that the
instance group object in the resource spec reflects the list of
instances in the group at the time the instance was created. Thus if you
migrate the first instance assigned to an anti-affinity group, the task
will think that the group has no other member instances and thus no
compute nodes will be excluded. Only the most recently created instance
assigned to the anti-affinity group will correctly exclude all nodes
containing members of its group!

There is code to update the instance group object in the resource spec,
but the resource spec object is only updated with this information if it
is created by the live migration task, i.e. in the case of an instance
without a resource spec in the request_specs database table. This will
only be the case for instances created prior to the introduction of
the request_specs table.
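
A hedged sketch of the membership refresh this report argues is missing
before scheduling (object and field names follow nova's objects API;
where exactly to hook it is an assumption):

    from nova import objects

    def refresh_group_info(context, request_spec):
        if request_spec.instance_group is None:
            return
        group = objects.InstanceGroup.get_by_uuid(
            context, request_spec.instance_group.uuid)
        # Re-read membership so the anti-affinity filter sees the group's
        # *current* members/hosts, not a snapshot from instance creation.
        request_spec.instance_group.members = group.members
        request_spec.instance_group.hosts = group.get_hosts()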

** Affects: nova
 Importance: Undecided
 Assignee: Paul Carlton (paul-carlton2)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Paul Carlton (paul-carlton2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600251

Title:
  live migration does not honor server group policy

Status in OpenStack Compute (nova):
  New

Bug description:
  What happens is that the live migration task uses the resource
  specification created when the instance was created and passes this
  to the scheduler to find a new host, marking its current host as
  excluded. This resource spec object includes the instance's group
  object, which contains a list of instances in the group. The problem is
  that the instance group object in the resource spec reflects the list
  of instances in the group at the time the instance was created. Thus
  if you migrate the first instance assigned to an anti-affinity group,
  the task will think that the group has no other member instances and
  thus no compute nodes will be excluded. Only the most recently created
  instance assigned to the anti-affinity group will correctly exclude all
  nodes containing members of its group!

  There is code to update the instance group object in the resource spec,
  but the resource spec object is only updated with this information if
  it is created by the live migration task, i.e. in the case of an
  instance without a resource spec in the request_specs database table.
  This will only be the case for instances created prior to the
  introduction of the request_specs table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1600251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600068] Re: Cannot run tox using python 3

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339359
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b06c7e857edbffe21745e0c85b021cc2d7423a7f
Submitter: Jenkins
Branch:master

commit b06c7e857edbffe21745e0c85b021cc2d7423a7f
Author: Brandon Logan 
Date:   Fri Jul 8 01:30:52 2016 -0500

Allow tox to be run with python 3

The tox.ini has some unicode characters that cannot be
decoded, so just executing tox will immediately cause an error
because the tox.ini cannot be parsed.

Closes-Bug: #1600068
Change-Id: Ia01ae80d9321584845bb06c3f6673d13027bd2db


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600068

Title:
  Cannot run tox using python 3

Status in neutron:
  Fix Released

Bug description:
  This is because of a unicode character that can't be decoded

  Traceback (most recent call last):
    File "/usr/bin/tox", line 11, in <module>
      sys.exit(cmdline())
    File "/usr/lib/python3.5/site-packages/tox/session.py", line 38, in main
      config = prepare(args)
    File "/usr/lib/python3.5/site-packages/tox/session.py", line 26, in prepare
      config = parseconfig(args)
    File "/usr/lib/python3.5/site-packages/tox/config.py", line 229, in parseconfig
      parseini(config, inipath)
    File "/usr/lib/python3.5/site-packages/tox/config.py", line 644, in __init__
      self._cfg = py.iniconfig.IniConfig(config.toxinipath)
    File "/usr/lib/python3.5/site-packages/py/_iniconfig.py", line 52, in __init__
      tokens = self._parse(iter(f))
    File "/usr/lib/python3.5/site-packages/py/_iniconfig.py", line 80, in _parse
      for lineno, line in enumerate(line_iter):
    File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
      return codecs.ascii_decode(input, self.errors)[0]
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4222: ordinal not in range(128)
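
A throwaway Python 3 snippet to locate the offending bytes (generic, not
part of the fix):

    with open('tox.ini', 'rb') as f:
        data = f.read()
    for pos, byte in enumerate(data):
        if byte > 127:
            # e.g. position 4222, byte 0xe2: the first byte of a UTF-8
            # punctuation character such as a typographic dash or quote.
            print(pos, hex(byte), data[max(0, pos - 30):pos + 30])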

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-08 Thread Amrith
marking incomplete for Trove per IRC chat with keystone folks.

** Changed in: trove
   Status: Opinion => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  Incomplete
Status in Cinder:
  Incomplete
Status in Glance:
  Incomplete
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  Incomplete
Status in Magnum:
  Incomplete
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-keystoneclient:
  Incomplete
Status in python-neutronclient:
  New
Status in python-novaclient:
  New
Status in python-rackclient:
  New
Status in python-swiftclient:
  New
Status in rack:
  New
Status in Rally:
  Incomplete
Status in OpenStack Object Storage (swift):
  Incomplete
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  Incomplete

Bug description:
  We should remove the logging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561056] Re: cinder volume driver's detach() causes TypeError exception on v1 cinder client

2016-07-08 Thread melanie witt
Proposed patch: https://review.openstack.org/#/c/296543/

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Corey Wright (coreywright)

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: Corey Wright (coreywright) => (unassigned)

** Tags added: volumes

** Changed in: nova
 Assignee: (unassigned) => Corey Wright (coreywright)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561056

Title:
  cinder volume driver's detach() causes TypeError exception on v1
  cinder client

Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  New

Bug description:
  Nova version: git master branch's HEAD (as of today)
  Expected behavior: cinderclient v1 detach() called with accepted argument
  Actual behavior: cinderclient v1 detach() called with too many arguments

  Change I3cdc4992 indiscriminately passes both volume_id and
  attachment_id to the Cinder client regardless of its version even
  though Cinder client v2 supports passing volume_id and optionally
  attachment_id to its volume manager's detach() method, but v1 does
  not, only accepting volume_id.

  Calling Cinder client v1 detach() with both volume_id and
  attachment_id results in "TypeError: detach() takes exactly 2
  arguments (3 given)"

  Full traceback and proposed bug fix to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1561056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577753] Re: Cloud-init fails of stage init

2016-07-08 Thread Narinder Gupta
[  OK  ] Started Accounts Service.
[6.186699] cloud-init[552]: Cloud-init v. 0.7.7 running 'init-local' at Fri, 08 Jul 2016 13:30:15 +. Up 6.14 seconds.
[6.188555] cloud-init[552]: 2016-07-08 13:30:15,769 - util.py[WARNING]: failed of stage init-local
[6.198610] cloud-init[552]: failed run of stage init-local
[6.199817] cloud-init[552]: 
[6.201545] cloud-init[552]: Traceback (most recent call last):
[6.203028] cloud-init[552]:   File "/usr/bin/cloud-init", line 520, in status_wrapper
[6.204659] cloud-init[552]:     ret = functor(name, args)
[6.206211] cloud-init[552]:   File "/usr/bin/cloud-init", line 250, in main_init
[6.207780] cloud-init[552]:     init.fetch(existing=existing)
[6.209292] cloud-init[552]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 322, in fetch
[6.211067] cloud-init[552]:     return self._get_data_source(existing=existing)
[6.212661] cloud-init[552]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 241, in _get_data_source
[6.214496] cloud-init[552]:     util.del_file(self.paths.instance_link)
[6.216048] cloud-init[552]:   File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1567, in del_file
[6.217827] cloud-init[552]:     raise e
[6.219214] cloud-init[552]:   File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1564, in del_file
[6.221006] cloud-init[552]:     os.unlink(path)
[6.222416] cloud-init[552]: IsADirectoryError: [Errno 21] Is a directory: '/var/lib/cloud/instance'
[6.224049] cloud-init[552]: 

[FAILED] Failed to start Initial cloud-init job (pre-networking).
See 'systemctl status cloud-init-local.service' for details.
[  OK  ] Reached target Network (Pre).
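
For context, /var/lib/cloud/instance is normally a symlink, and
os.unlink() raises IsADirectoryError when it turns out to be a real
directory, as in the traceback above. A hedged sketch of a more
defensive removal (not cloud-init's actual fix):

    import errno
    import os
    import shutil

    def del_path(path):
        try:
            os.unlink(path)  # handles regular files and symlinks
        except OSError as e:
            if e.errno == errno.EISDIR and not os.path.islink(path):
                # The expected symlink turned out to be a real directory.
                shutil.rmtree(path)
            elif e.errno != errno.ENOENT:
                raise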


** Also affects: opnfv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577753

Title:
  Cloud-init fails of stage init

Status in cloud-init:
  New
Status in OPNFV:
  New

Bug description:
  Cloud-init 0.77 on Ubuntu 16.04 
  seems not to be able to connect to the OpenStack Metadata service and fails, 
  although the Metadata service is available itself, see log here:

  root@ubuntu:/etc/network# cloud-init  --debug init
  2016-05-03 12:25:04,855 - handlers.py[DEBUG]: start: init-network: searching for network datasources
  2016-05-03 12:25:04,855 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
  2016-05-03 12:25:04,855 - util.py[DEBUG]: Read 16 bytes from /proc/uptime
  2016-05-03 12:25:04,856 - util.py[DEBUG]: Reading from /var/lib/cloud/data/status.json (quiet=False)
  2016-05-03 12:25:04,856 - util.py[DEBUG]: Read 548 bytes from /var/lib/cloud/data/status.json
  2016-05-03 12:25:04,857 - util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/status.json' => '../../var/lib/cloud/data/status.json'
  2016-05-03 12:25:04,858 - util.py[DEBUG]: Attempting to remove /run/cloud-init/status.json
  2016-05-03 12:25:04,858 - util.py[DEBUG]: Running command ['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,861 - util.py[DEBUG]: Running command ['running-in-container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,862 - util.py[DEBUG]: Running command ['lxc-is-container'] with allowed return codes [0] (shell=False, capture=True)
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Reading from /proc/1/environ (quiet=False)
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Read 110 bytes from /proc/1/environ
  2016-05-03 12:25:04,864 - util.py[DEBUG]: Reading from /proc/self/status (quiet=False)
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Read 896 bytes from /proc/self/status
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Reading from /proc/cmdline (quiet=False)
  2016-05-03 12:25:04,865 - util.py[DEBUG]: Read 64 bytes from /proc/cmdline
  2016-05-03 12:25:04,866 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
  2016-05-03 12:25:04,866 - util.py[DEBUG]: Read 16 bytes from /proc/uptime
  2016-05-03 12:25:04,866 - templater.py[WARNING]: Cheetah not available as the default renderer for unknown template, reverting to the basic renderer.
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Read 3011 bytes from /etc/cloud/cloud.cfg
  2016-05-03 12:25:04,867 - util.py[DEBUG]: Attempting to load yaml from string of length 3011 with allowed root types (,)
  2016-05-03 12:25:04,887 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/90_dpkg.cfg (quiet=False)
  2016-05-03 12:25:04,887 - util.py[DEBUG]: Read 197 bytes from /etc/cloud/cloud.cfg.d/90_dpkg.cfg
  2016-05-03 12:25:04,887 - util.py[DEBUG

[Yahoo-eng-team] [Bug 1571839] Re: NoSuchOptError: no such option in group neutron: auth_plugin

2016-07-08 Thread John Garbutt
*** This bug is a duplicate of bug 1574988 ***
https://bugs.launchpad.net/bugs/1574988

This bug has already been fixed here:
https://github.com/openstack/nova/commit/2647f91ae97844a73176fc1c8663d9b186bdec1a

** This bug has been marked a duplicate of bug 1574988
   

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1571839

Title:
  NoSuchOptError: no such option in group neutron: auth_plugin

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  After this change [1], the auth_type configuration value is used instead
  of auth_plugin, resulting in a deprecated value being used in the code [2]

  [1] 
https://github.com/openstack/keystoneauth/commit/a56ed4218aef5a2e528aa682cea967e767dca923
  [2] 
https://github.com/openstack/nova/blob/2bda625935f04f03622ac24eb5ad67e81bc748a1/nova/network/neutronv2/api.py#L107
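
For illustration, the keystoneauth1 loading helpers read the renamed
auth_type option from a config group such as [neutron] (a generic usage
sketch, not nova's exact code):

    from keystoneauth1 import loading as ks_loading

    def load_neutron_auth(conf):
        # keystoneauth registers and reads `auth_type`; code that still
        # asks oslo.config for the legacy `auth_plugin` name raises
        # NoSuchOptError after the rename.
        ks_loading.register_auth_conf_options(conf, 'neutron')
        return ks_loading.load_auth_from_conf_options(conf, 'neutron')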

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1571839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2016-07-08 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  Invalid
Status in Glance:
  Triaged
Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.db:
  Fix Released

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services."binary" AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599836] Re: Booting Ironic instance, neutron port remains in DOWN state

2016-07-08 Thread Vasyl Saienko
It is not only a Nova issue, since we also need to bind the baremetal
port on the Neutron side: https://review.openstack.org/#/c/339129/
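
A hedged sketch of what binding the bare-metal port on the Neutron side
involves (field values are illustrative; the concrete change is the
review linked above):

    def bind_baremetal_port(neutron, port_id, host_id, local_link_info):
        # Until the port is bound with bare-metal specifics, it keeps
        # vif_type "ovs" and status DOWN, as in the port-show output below.
        neutron.update_port(port_id, {'port': {
            'binding:vnic_type': 'baremetal',
            'binding:host_id': host_id,
            'binding:profile': {'local_link_information': local_link_info},
        }})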


** No longer affects: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599836

Title:
  Booting Ironic instance, neutron port remains in DOWN state

Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When booting an ironic instance in a flat network, the neutron port
  always remains in DOWN state since it is not bound.

  stack@vsaienko-ironic-neutron-poller:~$ neutron port-show 6cabc468-8828-4ca3-89e3-4d99a9018f03
  +-----------------------+----------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                    |
  +-----------------------+----------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                     |
  | allowed_address_pairs |                                                                                                          |
  | binding:host_id       | vsaienko-ironic-neutron-poller                                                                           |
  | binding:profile       | {}                                                                                                       |
  | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                                           |
  | binding:vif_type      | ovs                                                                                                      |
  | binding:vnic_type     | normal                                                                                                   |
  | created_at            | 2016-07-07T07:32:05                                                                                      |
  | description           |                                                                                                          |
  | device_id             | 0a03a565-f3dd-4ad1-94e1-e1754b718ea0                                                                     |
  | device_owner          | compute:None                                                                                             |
  | extra_dhcp_opts       | {"opt_value": "undionly.kpxe", "ip_version": 4, "opt_name": "tag:!ipxe,bootfile-name"}                   |
  |                       | {"opt_value": "10.11.0.51", "ip_version": 4, "opt_name": "tftp-server"}                                  |
  |                       | {"opt_value": "10.11.0.51", "ip_version": 4, "opt_name": "server-ip-address"}                            |
  |                       | {"opt_value": "http://10.11.0.51:3928/boot.ipxe", "ip_version": 4, "opt_name": "tag:ipxe,bootfile-name"} |
  | fixed_ips             | {"subnet_id": "17ab9d45-2b7e-4d71-8bb8-f76d5edbdce0", "ip_address": "10.20.30.12"}                       |
  | id                    | 6cabc468-8828-4ca3-89e3-4d99a9018f03                                                                     |
  | mac_address           | 52:54:00:d9:9e:d8                                                                                        |
  | name                  |                                                                                                          |
  | network_id            | b5587303-c13c-4245-9cb0-04de3443b84b                                                                     |
  | port_security_enabled | True                                                                                                     |
  | security_groups       | edd667ef-3806-47d9-b33b-1ee8af5d100d                                                                     |
  | status                | DOWN                                                                                                     |
  | tenant_id             | fd9f6a00f3a849b9bbd80ced82749c16                                                                         |
  | updated_at            | 2016-07-07T07:32:07                                                                                      |
  +-----------------------+----------------------------------------------------------------------------------------------------------+

  stack@vsaienko-ironic-neutron-poller:~$ grep 6cabc468-8828-4ca3-89e3-4d99a9018f03 new/q-agt.log
  2016-07-07 03:32:08.086 7799 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-79c54446-c81a-4f11-ba96-1ec2697b102c neutron -] port_update message processed for port 6cabc468-8828-4ca3-89e3-4d99a9018f03 port_update /opt/stac

[Yahoo-eng-team] [Bug 1600229] [NEW] With only one key pair it should be selected by default in AngularJS Launch Instance

2016-07-08 Thread Marcos Lobo
Public bug reported:

In the Launch Instance panel (not the new AngularJS version), when you
create a new instance and you have only one key pair, that key pair is
selected by default for the instance creation. If there is more than
one, none is selected by default and you have to select one yourself.

In the new AngularJS Launch Instance panel this is not the behaviour:
when you have only one key pair, it is not selected by default and you
have to select it.

This may cause an unexpected user experience, because a lot of users
have only one key pair and were used to not having to worry about
selecting a key pair when they create a new instance. Now, with the new
(AngularJS) panel, they report issues because the instance was not
created with their "default key pair" and they cannot SSH to it.

I would like to suggest returning to the same behaviour for the key
pair selection that we have in the non-AngularJS version of the Launch
Instance panel, or at least making it configurable via the
local_settings config file.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600229

Title:
  With only one key pair it should be selected by default in AngularJS
  Launch Instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Launch Instance panel (not the new AngularJS version), when you
  create a new instance and you have only one key pair, that key pair is
  selected by default for the instance creation. If there is more than
  one, none is selected by default and you have to select one yourself.

  In the new AngularJS Launch Instance panel this is not the behaviour:
  when you have only one key pair, it is not selected by default and you
  have to select it.

  This may cause an unexpected user experience, because a lot of users
  have only one key pair and were used to not having to worry about
  selecting a key pair when they create a new instance. Now, with the new
  (AngularJS) panel, they report issues because the instance was not
  created with their "default key pair" and they cannot SSH to it.

  I would like to suggest returning to the same behaviour for the key
  pair selection that we have in the non-AngularJS version of the Launch
  Instance panel, or at least making it configurable via the
  local_settings config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569555] Re: Request is wrong for compute v2.1 os-flavor-access list

2016-07-08 Thread John Garbutt
We should say Fix Released for this.

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569555

Title:
  Request is wrong for compute v2.1 os-flavor-access list

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The docs say that it takes a tenant_id parameter for listing flavor-
  access:

  http://developer.openstack.org/api-ref-
  compute-v2.1.html#listFlavorAccess

  But the code doesn't take a tenant_id, only a flavor_id:

  
https://github.com/openstack/nova/blob/2002120c459561d995eac4273befb42b3809d5bb/nova/api/openstack/compute/flavor_access.py#L51

  The response in the docs is correct, only the request is wrong.
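
For reference, the list call as the code implements it keys on flavor_id
only (a plain HTTP sketch; the endpoint and token are placeholders):

    import requests

    def list_flavor_access(compute_endpoint, token, flavor_id):
        # GET /flavors/{flavor_id}/os-flavor-access -- no tenant_id in
        # the request; tenant_id appears only in the response entries.
        resp = requests.get(
            '%s/flavors/%s/os-flavor-access' % (compute_endpoint, flavor_id),
            headers={'X-Auth-Token': token})
        return resp.json()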

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600226] [NEW] neutron-lib/api-ref: Create network missing the Request parameters

2016-07-08 Thread Danny Choi
Public bug reported:

https://git.openstack.org/cgit/openstack/neutron-lib/tree/api-
ref/source/v2/networks.inc

Create network missing the Request parameters.

Create network
==

.. rest_method::  POST /v2.0/networks

Creates a network.

A request body is optional. An administrative user can specify
another tenant UUID, which is the tenant who owns the network, in
the request body.

Error response codes:201,401,400,

Request
---
 <<

** Changed in: neutron
 Assignee: (unassigned) => Danny Choi (dannchoi)

** Changed in: neutron
   Status: New => In Progress
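
For reference, a minimal request body for this call could look like the
following (fields from the Networking v2.0 API; values are illustrative):

    body = {
        "network": {
            "name": "net1",
            "admin_state_up": True,
            # Admin-only: create the network on behalf of another tenant.
            "tenant_id": "9bacb3c5d39d41a79512987f338cf177",
        }
    }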

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600226

Title:
  neutron-lib/api-ref: Create network missing the Request parameters

Status in neutron:
  In Progress

Bug description:
  https://git.openstack.org/cgit/openstack/neutron-lib/tree/api-
  ref/source/v2/networks.inc

  Create network missing the Request parameters.

  Create network
  ==

  .. rest_method::  POST /v2.0/networks

  Creates a network.

  A request body is optional. An administrative user can specify
  another tenant UUID, which is the tenant who owns the network, in
  the request body.

  Error response codes:201,401,400,

  Request
  ---
   <

[Yahoo-eng-team] [Bug 1600223] [NEW] Cloud init fails to install public keys when there's multiple AuthorizedKeysFile locations in sshd_config

2016-07-08 Thread karena
Public bug reported:

Our sshd_config file contains multiple locations:

AuthorizedKeysFile /usr/share/keys/mo-nonops/%u.pub /usr/share/keys/mo-ops/%u.pub /usr/share/keys/mo-qa/%u.pub /usr/share/fish-keys/%u.pub /etc/ssh/authorized_keys/%u.pub %h/.ssh/authorized_keys


When launching an instance, the cloud-init log shows that the default
user's public key is created, but in fact no file exists.

When changing AuthorizedKeysFile to contain one single location, this
issue is "resolved".
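
sshd accepts a whitespace-separated list of paths for AuthorizedKeysFile,
with %u/%h tokens. A minimal sketch of expanding such a value (the helper
is hypothetical, not cloud-init's code):

    def expand_authorized_keys_files(value, user, home):
        # value is the whitespace-separated option value, e.g.
        # "/usr/share/keys/mo-ops/%u.pub %h/.ssh/authorized_keys"
        return [token.replace('%u', user).replace('%h', home)
                for token in value.split()]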

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1600223

Title:
  Cloud init fails to install public keys when there's multiple
  AuthorizedKeysFile locations in sshd_config

Status in cloud-init:
  New

Bug description:
  Our sshd_config file contains multiple locations:

  AuthorizedKeysFile /usr/share/keys/mo-nonops/%u.pub /usr/share/keys/mo-ops/%u.pub /usr/share/keys/mo-qa/%u.pub /usr/share/fish-keys/%u.pub /etc/ssh/authorized_keys/%u.pub %h/.ssh/authorized_keys

  
  When launching an instance, the cloud-init log shows that the default
  user's public key is created, but in fact no file exists.

  When changing AuthorizedKeysFile to contain one single location, this
  issue is "resolved".

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1600223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592965] Re: i18n: babel_extract_angular should trim whitespaces in AngularJS templates

2016-07-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/330183
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f87b58fcabf84c9b73b07b587416cc0b6cc814b1
Submitter: Jenkins
Branch:master

commit f87b58fcabf84c9b73b07b587416cc0b6cc814b1
Author: Akihiro Motoki 
Date:   Thu Jun 16 05:06:32 2016 +0900

i18n: trim whitespaces in extracted messages from AngularJS templates

When extracting translation messages from AngularJS templates,
there is no need to keep whitespaces in the messages as there is
no meaning of repeated whitespaces in HTML.
This will make translation effort much simpler.
More detail is described in the bug report.

This commit trims such whitespaces. Django provides a convenient method
to do the same purpose for 'trimmed' option in Django templates.
This method is reused in this commit as well.

Closes-Bug: #1592965
Change-Id: I9b7ce54452f3db2350eecc3115db2e4173df5167
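
The effect of the change, sketched with a plain regex (the commit itself
reuses Django's equivalent helper for the 'trimmed' option):

    import re

    def trim_message(msgid):
        # Repeated whitespace and newlines have no meaning in rendered
        # HTML, so collapse them before the string reaches translators.
        return re.sub(r'\s+', ' ', msgid).strip()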


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592965

Title:
  i18n: babel_extract_angular should trim whitespaces in AngularJS
  templates

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  This is the "AngularJS templates" version of bug 1583757.

  At the moment, translators get source strings with meaningless newlines
  and whitespaces like the example below. Zanata (the translation check
  site) checks the number of newlines, so translators need to insert
  newlines to silence Zanata validations. It is really annoying and
  meaningless.

  For Django templates, Django provides 'trimmed' option to trim
  whitespaces in extracted messages. It would be nice if we have the
  similar behavior to Django 'trimmed' option for AngularJS template
  message extraction. In HTML case, we don't need to care consecutive
  whitespaces, so we can simply trim whitespaces in AngularJS HTML
  templates.

  #: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
  msgid ""
  "A container is a storage compartment for your data and provides a way\n"
  "  for you to organize your data. You can think of a container as "
  "a\n"
  "  folder in Windows® or a directory in UNIX®. The primary "
  "difference\n"
  "  between a container and these other file system concepts is "
  "that\n"
  "  containers cannot be nested. You can, however, create an "
  "unlimited\n"
  "  number of containers within your account. Data must be stored "
  "in a\n"
  "  container so you must have at least one container defined in "
  "your\n"
  "  account prior to uploading data."
  msgstr ""

  We would like to have a string like:

  #: 
openstack_dashboard/dashboards/project/static/dashboard/project/containers/create-container-modal.html:40
  msgid ""
  "A container is a storage compartment for your data and provides a way for"
  " you to organize your data. You can think of a container as a folder in "
  "Windows® or a directory in UNIX®. The primary difference between a "
  "container and these other file system concepts is that containers cannot "
  "be nested. You can, however, create an unlimited number of containers "
  "within your account. Data must be stored in a container so you must have "
  "at least one container defined in your account prior to uploading data."
  msgstr ""

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1592965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-08 Thread Amrith
Without more details about this, it is hard to tell what this bug is
about. How can code not perform logging just because it is in unit
tests? Heck, some tests actually verify that messages are being logged.
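
As an aside, a test that deliberately asserts on logging looks like this
(stdlib unittest; the logger name is illustrative):

    import logging
    import unittest

    class TestWarning(unittest.TestCase):
        def test_warns_on_bad_input(self):
            with self.assertLogs('myservice', level='WARNING') as cm:
                logging.getLogger('myservice').warning('bad input: %s', 42)
            self.assertIn('bad input: 42', cm.output[0])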

** Changed in: trove
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-keystoneclient:
  New
Status in python-neutronclient:
  New
Status in python-novaclient:
  New
Status in python-rackclient:
  New
Status in python-swiftclient:
  New
Status in rack:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  Opinion

Bug description:
  We should remove the logging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600195] [NEW] Domain admin cannot create/delete user

2016-07-08 Thread Kenji Ishii
Public bug reported:

When a user who has domain admin privileges logs in with a domain-scoped
token, they cannot create/delete users, because in this case these menu
items are not displayed.
By design, this user should be able to create/delete users.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600195

Title:
  Domain admin cannot create/delete user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user who has domain admin privileges logs in with a domain-scoped
  token, they cannot create/delete users, because in this case these menu
  items are not displayed.
  By design, this user should be able to create/delete users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600187] [NEW] Ironic does not authenticate correctly when using Keystone v3 AD/LDAP domain

2016-07-08 Thread midekra
Public bug reported:

I was in discussion about a problem at:
https://bugs.launchpad.net/nova/+bug/1580703
because I had similar symptoms. I have posted my initial error logs in
that thread.

I found out that the OP's solution worked in a plain (non-Active
Directory/LDAP backend domain) Keystone v3 configuration (with v2
enabled endpoints). In our production environment, which runs Mitaka, I
have configured Active Directory as an LDAP backend domain for Keystone.
All our users, including the service accounts, are created in Active
Directory.

Ironic doesn't handle this well. The rest of the services are working
perfectly. Nova could not authenticate and left me with "Rejected
requests" on the Ironic API.

If I create a "local" user in the default domain (i.e. NOT in Active
Directory) then Ironic can authenticate with Keystone without any
problems.
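
A hedged sketch of service credentials scoped to a non-default
(LDAP-backed) domain with keystoneauth1 (all values are illustrative):

    from keystoneauth1 import loading, session

    auth = loading.get_plugin_loader('v3password').load_from_options(
        auth_url='http://keystone.example.com:5000/v3',
        username='ironic',
        password='secret',
        user_domain_name='ad_domain',        # the LDAP-backed domain
        project_name='service',
        project_domain_name='Default')
    sess = session.Session(auth=auth)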

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic

** Tags added: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600187

Title:
  Ironic does not authenticate correctly when using Keystone v3 AD/LDAP
  domain

Status in OpenStack Compute (nova):
  New

Bug description:
  I was in discussion about a problem at:
  https://bugs.launchpad.net/nova/+bug/1580703
  because I had similar symptoms. I have posted my initial error logs in
  that thread.

  I found out that the OP's solution worked in a plain (non-Active
  Directory/LDAP backend domain) Keystone v3 configuration (with v2
  enabled endpoints). In our production environment, which runs Mitaka,
  I have configured Active Directory as an LDAP backend domain for
  Keystone. All our users, including the service accounts, are created
  in Active Directory.

  Ironic doesn't handle this well. The rest of the services are working
  perfectly. Nova could not authenticate and left me with "Rejected
  requests" on the Ironic API.

  If I create a "local" user in the default domain (i.e. NOT in Active
  Directory) then Ironic can authenticate with Keystone without any
  problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1600187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600150] [NEW] target info is always None when policy_check is called

2016-07-08 Thread Kenji Ishii
Public bug reported:

The policy check method is configured in openstack_dashboard/settings.py 
(POLICY_CHECK_FUNCTION).
At the moment, when we call this method, the target info will always be None.
Therefore, it cannot perform the policy check correctly.
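
For illustration, the call shape Horizon's policy wrapper expects with a
non-None target (the rule and target keys are illustrative):

    from openstack_dashboard import policy

    def can_delete_user(request, user):
        # Rules such as "user_id:%(user_id)s" match against the target
        # dict; if the wiring passes target=None they can never pass.
        return policy.check((("identity", "identity:delete_user"),),
                            request, target={"user_id": user.id})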

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600150

Title:
  target info is always None when policy_check is called

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The policy check method is configured in openstack_dashboard/settings.py 
  (POLICY_CHECK_FUNCTION).
  At the moment, when we call this method, the target info will always be None.
  Therefore, it cannot perform the policy check correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600143] [NEW] property reserved_host_memory_mb size problem

2016-07-08 Thread zhangliang
Public bug reported:

The nova.conf file on a compute node has a property,
reserved_host_memory_mb. When I deploy a compute node on a used
computer, I need to set reserved_host_memory_mb, but most of the time I
don't know what to set it to.

Looking at the nova code, I found the default value of
reserved_host_memory_mb is 512M. I think the nova project should
automatically detect system memory.
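
One operator heuristic (an assumption, not a nova feature) is to size the
reservation from the host's total RAM instead of the static 512 MB
default:

    import os

    def suggested_reserved_host_memory_mb(fraction=0.05, minimum=512):
        # Total physical memory in MB, via POSIX sysconf (Linux).
        page_size = os.sysconf('SC_PAGE_SIZE')
        total_mb = os.sysconf('SC_PHYS_PAGES') * page_size // (1024 * 1024)
        return max(minimum, int(total_mb * fraction))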

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600143

Title:
  property  reserved_host_memory_mb size problem

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova.conf file on a compute node has a property,
  reserved_host_memory_mb. When I deploy a compute node on a used
  computer, I need to set reserved_host_memory_mb, but most of the time
  I don't know what to set it to.

  Looking at the nova code, I found the default value of
  reserved_host_memory_mb is 512M. I think the nova project should
  automatically detect system memory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1600143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-08 Thread YaoZheng_ZTE
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => YaoZheng_ZTE (zheng-yao1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-keystoneclient:
  New
Status in python-neutronclient:
  New
Status in python-novaclient:
  New
Status in python-rackclient:
  New
Status in python-swiftclient:
  New
Status in rack:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  We should remove the logging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600109] Re: Unit tests should not perform logging, but some tests still use

2016-07-08 Thread yuyafei
** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1600109

Title:
  Unit tests should not perform logging, but some tests still use

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in glance_store:
  New
Status in OpenStack Identity (keystone):
  New
Status in Magnum:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New
Status in python-cinderclient:
  New
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-keystoneclient:
  New
Status in python-neutronclient:
  New
Status in python-novaclient:
  New
Status in python-rackclient:
  New
Status in python-swiftclient:
  New
Status in rack:
  New
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  We should remove the logging

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1600109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp