[Yahoo-eng-team] [Bug 1467776] Re: glance-api & glance-registry doesn't set the control_exchange correctly

2017-06-30 Thread Sean McGinnis
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1467776

Title:
  glance-api & glance-registry doesn't set the control_exchange
  correctly

Status in Glance:
  Fix Released

Bug description:
  glance-api and glance-registry don't set oslo.messaging's
  control_exchange correctly, so they use the default 'openstack'
  exchange for all the notifications they send out, which does not
  follow the OpenStack convention. Other OpenStack projects by default
  send notifications to an exchange named after the project, e.g. nova
  sends to the 'nova' exchange and cinder sends to the 'cinder'
  exchange.
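
  For illustration, a minimal sketch (assuming oslo.messaging; this is not
  the actual Glance patch) of how a project pins its notification exchange
  instead of falling back to the generic 'openstack' default:

      # Override the oslo.messaging transport default so notifications go
      # to the 'glance' exchange; operators can alternatively set the
      # control_exchange option in the service configuration file.
      import oslo_messaging

      oslo_messaging.set_transport_defaults(control_exchange='glance')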

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467776/+subscriptions



[Yahoo-eng-team] [Bug 1694666] Re: metadata service PicklingError: Can't pickle when using memcached

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/478991
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2fee972bde4a04d398d32aa6c8b6d27819db697b
Submitter: Jenkins
Branch:master

commit 2fee972bde4a04d398d32aa6c8b6d27819db697b
Author: Dan Smith 
Date:   Thu Jun 29 09:42:20 2017 -0700

Sanitize instance in InstanceMetadata to avoid un-pickleable context

This is a more strategic fix for the issue of us trying to pickle
an instance with a context that has complex data structures inside
(i.e. SQLAlchemy connections) into the oslo cache. The right solution
is for us to stop storing random python objects (InstanceMetadata)
in the cache with pickle. However, that's a larger change and more
complex for deployers to roll over. This attempts to sanitize the
instance before we pickle it to get things working again.

Change-Id: Ie7d97ce5c62c8fb9da5822590a64210521f8ae7a
Closes-Bug: #1694666
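
A minimal, self-contained sketch of the failure mode and of the sanitize-before-caching idea described above (the class and helper names are illustrative, not nova's actual code):

    import copy
    import pickle
    import threading


    class FakeContext(object):
        def __init__(self):
            # Stands in for a RequestContext holding complex state such as a
            # SQLAlchemy connection; a lock is similarly un-pickleable.
            self.db_lock = threading.Lock()


    class FakeInstanceMetadata(object):
        def __init__(self, context):
            self.context = context
            self.hostname = 'vm-1'


    def sanitize_for_cache(md):
        clone = copy.copy(md)
        clone.context = None      # drop the un-pickleable piece before caching
        return clone


    md = FakeInstanceMetadata(FakeContext())
    try:
        pickle.dumps(md)          # fails: the lock inside the context can't be pickled
    except (TypeError, pickle.PicklingError):
        pass
    cached = pickle.dumps(sanitize_for_cache(md))   # succeeds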


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694666

Title:
  metadata service PicklingError: Can't pickle  when using memcached

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========

  Launching an instance causes a lot of error messages in the nova-api logs.
  The instance is not able to retrieve metadata.

  How to reproduce
  ================

  Deploy the nova master branch.
  Spawn an instance.
  Wait for the instance to become active.
  The nova-api logs will show error messages.

  Expected results
  ================

  The instance retrieves its metadata.

  Actual results
  ==============

  The instance is not able to retrieve metadata.

  Environment configuration
  =========================

  OpenStack deployed with kolla.
  Only source images from master fail; binary images (RDO or Ubuntu packages) work for now.
  CentOS, Ubuntu and OracleLinux distributions are affected.
  The database and memcached work as expected; other services consuming them are not affected.

  Logs
  ====

  All logs can be found at kolla gates:

  Nova: http://logs.openstack.org/73/469373/1/check/gate-kolla-ansible-dsvm-deploy-centos-source-centos-7-nv/8cecb36/logs/kolla/nova/

  Neutron: http://logs.openstack.org/73/469373/1/check/gate-kolla-ansible-dsvm-deploy-centos-source-centos-7-nv/8cecb36/logs/kolla/neutron/

  Related errors:

  Nova API:

  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler [req-3daa8e91-93e5-4676-b77a-048ad3dd53d2 - - - - -] Failed to get metadata for instance id: 8cbd067f-8cd6-4365-b299-3ffc146d0790
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler Traceback (most recent call last):
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/metadata/handler.py", line 285, in _get_meta_by_instance_id
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler remote_address)
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/metadata/handler.py", line 87, in get_metadata_by_instance_id
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler self._cache.set(cache_key, data)
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/nova/cache_utils.py", line 116, in set
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler return self.region.set(key, value)
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/dogpile/cache/region.py", line 973, in set
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler self.backend.set(key, self._value(value))
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/dogpile/cache/backends/memcached.py", line 178, in set
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler **self.set_arguments
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_cache/backends/memcache_pool.py", line 32, in _run_method
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler return getattr(client, __name)(*args, **kwargs)
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/memcache.py", line 740, in set
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler return self._set("set", key, val, time, min_compress_len, noreply)
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler   File "/var/lib/kolla/venv/lib/python2.7/site-packages/memcache.py", line 1060, in _set
  2017-05-31 09:09:34.703 31 ERROR nova.api.metadata.handler return _unsafe_set()
  2017-05-31 09:

[Yahoo-eng-team] [Bug 1701487] [NEW] Deletion of ERROR state firewall goes stuck into PENDING_DELETE state

2017-06-30 Thread Puneet Arora
Public bug reported:

Once a firewall goes into the ERROR state, trying to delete it moves it into the PENDING_DELETE state.
Right now there is no way to delete a firewall that is in the ERROR state, because it always gets stuck in PENDING_DELETE.

Steps to reproduce:
1) Create a router.
2) Create a firewall rule.
3) Attach the firewall rule to a firewall policy.
4) Create a firewall using the above firewall policy and attach the router to it.
5) If the firewall goes into the ERROR state, try to delete it.
6) Check that the firewall gets stuck in the "PENDING_DELETE" state.

Please find the logs from the neutron-server.log file here:
http://paste.openstack.org/show/614157/

In the logs I can see the request for "get_firewall_routers" being made
multiple times.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1701487

Title:
  Deletion of ERROR state firewall goes stuck into PENDING_DELETE state

Status in neutron:
  New

Bug description:
  Once a firewall goes into the ERROR state, trying to delete it moves it into the PENDING_DELETE state.
  Right now there is no way to delete a firewall that is in the ERROR state, because it always gets stuck in PENDING_DELETE.

  Steps to reproduce:
  1) Create a router.
  2) Create a firewall rule.
  3) Attach the firewall rule to a firewall policy.
  4) Create a firewall using the above firewall policy and attach the router to it.
  5) If the firewall goes into the ERROR state, try to delete it.
  6) Check that the firewall gets stuck in the "PENDING_DELETE" state.

  Please find the logs from the neutron-server.log file here:
  http://paste.openstack.org/show/614157/

  In the logs I can see the request for "get_firewall_routers" being made
  multiple times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1701487/+subscriptions



[Yahoo-eng-team] [Bug 1688540] Re: When operation log enabled, logout returns "something went wrong"

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/462928
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2c94075bbe6d3c452af40b82eb5a50b10aa23fd9
Submitter: Jenkins
Branch:master

commit 2c94075bbe6d3c452af40b82eb5a50b10aa23fd9
Author: Mateusz Kowalski 
Date:   Fri May 5 14:44:53 2017 +0200

operation_log: Fix logout generating AttributeError

When logging out request.user.is_authenticated() throws
AttributeError as request.user is NoneType and does not
contain required method.

This patch checks whether request.user is None and
takes a proper action.

Change-Id: Iaef08139e6f5c4c0e886328a16a7ac442acc1c10
Closes-Bug: #1688540
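
A minimal sketch of the guard the commit describes (simplified, not the literal Horizon diff):

    def _is_authenticated(request):
        # During logout request.user can already be None, so treat that the
        # same as an unauthenticated user instead of calling a method on None.
        user = getattr(request, 'user', None)
        if user is None:
            return False
        return user.is_authenticated()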


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1688540

Title:
  When operation log enabled, logout returns "something went wrong"

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When operation log enabled, logout returns "something went wrong" with
  the following stacktrace:

  2017-05-05 14:25:11,882 16560 ERROR django.request Internal Server Error: /auth/logout/
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 223, in get_response
      response = middleware_method(request, response)
    File "/usr/lib/python2.7/site-packages/horizon/middleware/operation_log.py", line 78, in process_response
      log_format = self._get_log_format(request)
    File "/usr/lib/python2.7/site-packages/horizon/middleware/operation_log.py", line 113, in _get_log_format
      request.user.is_authenticated()):
  AttributeError: 'NoneType' object has no attribute 'is_authenticated'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1688540/+subscriptions



[Yahoo-eng-team] [Bug 1701511] [NEW] hyperv: Bad log message formatting in livemigrationops

2017-06-30 Thread Claudiu Belu
Public bug reported:

In nova.virt.hyperv.livemigrationops, in the
check_can_live_migrate_source method, the instance is passed as a
positional argument to the logger instead of as a keyword argument.
Because of this, the log message is never printed: [1]


[1] http://paste.openstack.org/show/614164/
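
For illustration, a rough sketch of the mistake (assuming nova's oslo.log-based logger; the message text and instance object are made up, and the "[instance: ...]" prefix rendering depends on nova's log formatter configuration):

    import logging as std_logging

    from oslo_log import log as logging

    std_logging.basicConfig(level=std_logging.DEBUG)
    LOG = logging.getLogger(__name__)


    class FakeInstance(object):
        uuid = '00000000-0000-0000-0000-000000000000'


    instance = FakeInstance()

    # Broken: the instance becomes a stray %-formatting argument, the record
    # fails to format and the message is never printed.
    LOG.debug("Checking live migration source requirements", instance)

    # Fixed: passed as a keyword argument, which nova's logging setup renders
    # as an "[instance: <uuid>] " prefix on the message.
    LOG.debug("Checking live migration source requirements", instance=instance)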

** Affects: nova
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: In Progress


** Tags: hyper-v

** Tags added: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701511

Title:
  hyperv: Bad log message formatting in livemigrationops

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In nova.virt.hyperv.livemigrationops, in the
  check_can_live_migrate_source method, the instance is passed as a
  positional argument to the logger instead of as a keyword argument.
  Because of this, the log message is never printed: [1]

  
  [1] http://paste.openstack.org/show/614164/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701511/+subscriptions



[Yahoo-eng-team] [Bug 1701527] [NEW] Exceptions in meta-mock.py need more care

2017-06-30 Thread Joonas Kylmälä
Public bug reported:

tools/mock-meta.py has an error in its exception handling (it catches
KeyError, which applies to dicts, instead of IndexError, which list
lookups raise) and it also returns a non-descriptive HTTP code. I have
attached a patch that fixes this.
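
A hedged sketch of the kind of fix described (the function and status values are illustrative, not the attached patch):

    NOT_FOUND = 404


    def get_ssh_key(pubkeys, idx):
        """Look up a public key by list index."""
        try:
            return 200, pubkeys[idx]
        except IndexError:   # list lookups raise IndexError; KeyError is for dicts
            return NOT_FOUND, "no public key with index %d" % idx


    print(get_ssh_key(['ssh-rsa AAAA... user@host'], 0))   # (200, 'ssh-rsa ...')
    print(get_ssh_key(['ssh-rsa AAAA... user@host'], 5))   # (404, 'no public key with index 5')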

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "0001-Fix-exception-handling.patch"
   
https://bugs.launchpad.net/bugs/1701527/+attachment/4906560/+files/0001-Fix-exception-handling.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1701527

Title:
  Exceptions in meta-mock.py need more care

Status in cloud-init:
  New

Bug description:
  tools/mock-meta.py has an error in its exception handling (it catches
  KeyError, which applies to dicts, instead of IndexError, which list
  lookups raise) and it also returns a non-descriptive HTTP code. I have
  attached a patch that fixes this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1701527/+subscriptions



[Yahoo-eng-team] [Bug 1701530] [NEW] Extract custom resource classes from flavors

2017-06-30 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/473627
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 1dc93d00b0ea2d2be5c8a150b1336f2d56d71dff
Author: EdLeafe 
Date:   Mon Jun 12 23:26:40 2017 +

Extract custom resource classes from flavors

This patch adds code to look in a flavor's extra_specs for keys
beginning with "resources:", and if found, use them to update the
resources dict sent to placement.

The entry for a custom resource class will look like
"resources:CUSTOM_FOO=1". Additionally, standard classes in a flavor can
be overridden with an entry that looks like: "resources:VCPU=0". If a
standard class is found in extra_specs, it will be used instead of the
amount in the flavor. This is useful for things like Ironic, where an
operator may want to list the amount of RAM, disk, etc. in the flavor,
but use a custom Ironic resource class for doing the actual selection.
An amount for a standard class that is zero will result in that class
being removed from the requested resources dict sent to placement.

DocImpact
We should document the capability and rules in
https://docs.openstack.org/admin-guide/compute-flavors.html#extra-specs.

Blueprint: custom-resource-classes-in-flavors

Change-Id: I84f403fe78e04dd1d099d7d0d1d2925df59e80e7
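
A rough sketch of the override logic the commit message describes (the function and variable names are illustrative, not nova's implementation):

    def apply_flavor_overrides(extra_specs, resources):
        """Apply "resources:*" extra specs on top of the flavor-derived dict."""
        for key, value in extra_specs.items():
            if not key.startswith('resources:'):
                continue
            rclass = key[len('resources:'):]
            amount = int(value)
            if amount == 0:
                resources.pop(rclass, None)   # a zero amount removes a standard class
            else:
                resources[rclass] = amount    # override, or add e.g. CUSTOM_FOO
        return resources


    resources = {'VCPU': 4, 'MEMORY_MB': 8192, 'DISK_GB': 40}
    extra_specs = {'resources:VCPU': '0', 'resources:CUSTOM_BAREMETAL_GOLD': '1'}
    print(apply_flavor_overrides(extra_specs, resources))
    # {'MEMORY_MB': 8192, 'DISK_GB': 40, 'CUSTOM_BAREMETAL_GOLD': 1}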

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701530

Title:
  Extract custom resource classes from flavors

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/473627
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 1dc93d00b0ea2d2be5c8a150b1336f2d56d71dff
  Author: EdLeafe 
  Date:   Mon Jun 12 23:26:40 2017 +

  Extract custom resource classes from flavors
  
  This patch adds code to look in a flavor's extra_specs for keys
  beginning with "resources:", and if found, use them to update the
  resources dict sent to placement.
  
  The entry for a custom resource class will look like
  "resources:CUSTOM_FOO=1". Additionally, standard classes in a flavor can
  be overridden with an entry that looks like: "resources:VCPU=0". If a
  standard class is found in extra_specs, it will be used instead of the
  amount in the flavor. This is useful for things like Ironic, where an
  operator may want to list the amount of RAM, disk, etc. in the flavor,
  but use a custom Ironic resource class for doing the actual selection.
  An amount for a standard class that is zero will result in that class
  being removed from the requested resources dict sent to placement.
  
  DocImpact
  We should document the capability and rules in
  https://docs.openstack.org/admin-guide/compute-flavors.html#extra-specs.
  
  Blueprint: custom-resource-classes-in-flavors
  
  Change-Id: I84f403fe78e04dd1d099d7d0d1d2925df59e80e7

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701530/+subscriptions



[Yahoo-eng-team] [Bug 1701541] [NEW] Keystone v3/roles has different response for HEAD and GET (again)

2017-06-30 Thread Attila Fazekas
Public bug reported:

The issue is very similar to the one already discussed at 
https://bugs.launchpad.net/keystone/+bug/1334368 , 
http://lists.openstack.org/pipermail/openstack-dev/2014-July/039140.html .

# curl -v -X HEAD http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e -H "Content-Type: application/json" -H "X-Auth-Token: gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
* About to connect() to 172.17.1.18 port 5000 (#0)
*   Trying 172.17.1.18...
* Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
> HEAD /v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.17.1.18:5000
> Accept: */*
> Content-Type: application/json
> X-Auth-Token: gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
> 
< HTTP/1.1 204 No Content
< Date: Fri, 30 Jun 2017 10:09:30 GMT
< Server: Apache
< Vary: X-Auth-Token
< x-openstack-request-id: req-e64410ae-5d4a-48f7-8508-615752877277
< Content-Type: text/plain
< 
* Connection #0 to host 172.17.1.18 left intact

# curl -v -X GET http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e -H "Content-Type: application/json" -H "X-Auth-Token: gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo"
* About to connect() to 172.17.1.18 port 5000 (#0)
*   Trying 172.17.1.18...
* Connected to 172.17.1.18 (172.17.1.18) port 5000 (#0)
> GET /v3/roles/7acb026c29a24fb2a1d92a4e5291de24/implies/11b21cc37d7644c8bc955ff956b2d56e HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.17.1.18:5000
> Accept: */*
> Content-Type: application/json
> X-Auth-Token: gABZViMqU8rSuv7qlmcUlv1hYHegvN6EelqJPt-MTWBkIOewhSjNeiwZcksDUKm2JOfNtw78iAAmscx86N9UiekxkluvzRpatFyWooOkCATkqJFn4HgCFr_an9X7kmOhJTOguqGH6uCYz4K6ak1NfuEvtRShe3lDXyScL51JaZqtw8bCWzo
> 
< HTTP/1.1 200 OK
< Date: Fri, 30 Jun 2017 10:09:38 GMT
< Server: Apache
< Content-Length: 507
< Vary: X-Auth-Token,Accept-Encoding
< x-openstack-request-id: req-cc320571-a59d-4ea2-b459-117053367c55
< Content-Type: application/json
< 
* Connection #0 to host 172.17.1.18 left intact
{"role_inference": {"implies": {"id": "11b21cc37d7644c8bc955ff956b2d56e", "links": {"self": "http://172.17.1.18:5000/v3/roles/11b21cc37d7644c8bc955ff956b2d56e"}, "name": "tempest-role-1212191884"}, "prior_role": {"id": "7acb026c29a24fb2a1d92a4e5291de24", "links": {"self": "http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92a4e5291de24"}, "name": "tempest-role-500046640"}}, "links": {"self": "http://172.17.1.18:5000/v3/roles/7acb026c29a24fb2a1d92


Based on its version and configuration (WSGIMapHEADToGET, which requires
mod_wsgi >= 4.3.0), mod_wsgi might send GET instead of HEAD in order to
avoid invalid responses being cached in case of an application bug.

Unfortunately tempest expects the wrong behavior, so it also needs to be
changed:

tempest.api.identity.admin.v3.test_roles.RolesV3TestJSON.test_implied_roles_create_check_show_delete[id-c90c316c-d706-4728-bcba-eb1912081b69]
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/api/identity/admin/v3/test_roles.py", line 228, in test_implied_roles_create_check_show_delete
    prior_role_id, implies_role_id)
  File "/usr/lib/python2.7/site-packages/tempest/lib/services/identity/v3/roles_client.py", line 233, in check_role_inference_rule
    self.expected_success(204, resp.status)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", line 252, in expected_success
    raise exceptions.InvalidHttpSuccessCode(details)
tempest.lib.exceptions.InvalidHttpSuccessCode: The success code is different than the expected one
Details: Unexpected http success status code 200, The expected status code is 204
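
A small reproduction of the mismatch using python-requests (the endpoint, IDs and token placeholder below are taken from the report):

    import requests

    BASE = 'http://172.17.1.18:5000'
    URL = (BASE + '/v3/roles/7acb026c29a24fb2a1d92a4e5291de24'
                  '/implies/11b21cc37d7644c8bc955ff956b2d56e')
    HEADERS = {'X-Auth-Token': '<token>'}

    head = requests.head(URL, headers=HEADERS)
    get = requests.get(URL, headers=HEADERS)
    # The transcript above shows 204 for HEAD and 200 for GET; with
    # WSGIMapHEADToGET both return 200.
    print(head.status_code, get.status_code)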

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1701541

Title:
  Keystone v3/roles has different response for HEAD and GET (again)

Status in OpenStack Identity (keystone):
  New
Status in tempest:
  New

Bug description:
  The issue i

[Yahoo-eng-team] [Bug 1679976] Re: Quobyte driver may fail to raise an error upon error code 4 on volume mounting

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453537
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ffb228bcc221162d77e2def9b81895ecb4bf48b7
Submitter: Jenkins
Branch:master

commit ffb228bcc221162d77e2def9b81895ecb4bf48b7
Author: Silvan Kaiser 
Date:   Wed Apr 5 11:04:33 2017 +0200

Removes potentially bad exit value from accepted list in Quobyte volume 
driver

This disallows exit code 4 in the list of valid exit codes of the execute
call for mounting Quobyte volumes. As exit code 0 is the default value
for the allowed exit codes the whole check_exit_codes argument is
omitted.
Furthermore this updates the (dis-)connect volume sync locks to be
Quobyte specific.

Closes-Bug: #1679976

Change-Id: I87b74535bd1a2045948a56d1648cd587146facb7
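
A simplified sketch of the change (the mount command arguments are placeholders, and the real driver goes through nova's utils with Quobyte-specific locking):

    from oslo_concurrency import processutils

    volume = 'quobyte-host/volume'      # placeholder volume URL
    mnt_base = '/mnt/quobyte/volume'    # placeholder mount point

    # Before: exit code 4 was treated as success alongside 0.
    # processutils.execute('mount.quobyte', volume, mnt_base,
    #                      check_exit_code=[0, 4])

    # After: only the default exit code 0 is accepted, so a bad mount state
    # raises ProcessExecutionError instead of being silently ignored.
    processutils.execute('mount.quobyte', volume, mnt_base)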


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679976

Title:
  Quobyte driver may fail to raise an error upon error code 4 on volume
  mounting

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Quobyte libvirt volume driver currently accepts exit codes [0, 4] when mounting a Quobyte volume (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/quobyte.py#L55).
  However, exit code 4 may be returned both in an acceptable case (the volume is already mounted) and in unacceptable cases (a specific bad state). As the driver already checks for the existence of the mount when connecting a volume, exit code 4 should no longer be accepted and should be treated as a failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1679976/+subscriptions



[Yahoo-eng-team] [Bug 1669080] Re: "openstack role create" should support "--description"

2017-06-30 Thread Steve Martinelli
This sounds reasonable from a CLI point of view, but I don't recall if
keystone roles have a description attribute for both v2 and v3. Adding
keystone as a related project.

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1669080

Title:
  "openstack role create" should support "--description"

Status in OpenStack Identity (keystone):
  New
Status in python-openstackclient:
  New

Bug description:
  It would be nice to be able to create a new role with description as
  Keystone API supports it.

  openstack role create --help
  usage: openstack role create [-h] [-f {json,shell,table,value,yaml}]
   [-c COLUMN] [--max-width ]
   [--print-empty] [--noindent] [--prefix PREFIX]
   [--domain ] [--or-show]
   

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1669080/+subscriptions



[Yahoo-eng-team] [Bug 1701324] Re: Removing duplicated items doesn't work in case of federations

2017-06-30 Thread Dmitry Stepanenko
** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1701324

Title:
  Removing duplicated items doesn't work in case of federations

Status in OpenStack Identity (keystone):
  New

Bug description:
  In commit eed233cac8f34ce74a2f6fa989c484773c491df3 "Concrete role assignments for federated users", handling of federation-related objects was added. In that implementation, objects like roles, projects and domains were aggregated from two sources: directly from the appropriate tables and from federation-related hooks.
  This mechanism can lead to situations where objects are duplicated, so code for filtering out duplicates was added for such cases.

  It was implemented in the following way:

  domains = [dict(t) for t in set([tuple(d.items()) for d in domains])]

  where domains is a list of dicts, each of which contains information
  about the corresponding domain. This code can work fine in some
  situations but in general can behave incorrectly, because the dict
  "items" method returns key-value pairs in arbitrary order according to
  https://docs.python.org/2/library/stdtypes.html#dict.items. So, this
  code may leave unchanged a list of two equal dicts whose items happen
  to be listed in a different order.

  This code was introduced upstream on Thu Feb 25 21:39:15 2016, so it
  appears to remain in the newton, ocata and master branches.
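
  For illustration, a hedged sketch of an order-insensitive de-duplication
  (not necessarily the fix keystone adopted): sorting the items first makes
  two equal dicts collapse to the same key regardless of iteration order.

      domains = [
          {'id': 'd1', 'name': 'Default'},
          {'name': 'Default', 'id': 'd1'},   # same domain, keys in another order
      ]

      # Fragile: tuple(d.items()) depends on the dicts' iteration order.
      maybe_dupes = [dict(t) for t in set(tuple(d.items()) for d in domains)]

      # Order-insensitive: sort the items before building the hashable key.
      deduped = [dict(t) for t in set(tuple(sorted(d.items())) for d in domains)]

      print(len(maybe_dupes), len(deduped))   # deduped always has length 1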

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1701324/+subscriptions



[Yahoo-eng-team] [Bug 1701451] Re: some legacy v2 API lose the protection of json-schema

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/479170
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=aaeea4bf39377c4109f6b2857794ee0e7d51e786
Submitter: Jenkins
Branch:master

commit aaeea4bf39377c4109f6b2857794ee0e7d51e786
Author: He Jie Xu 
Date:   Fri Jun 30 14:47:20 2017 +0800

Ensure the JSON-Schema covers the legacy v2 API

The legacy v2 API compatible mode support the protection of JSON-Schema.
The input body will be validated with JSON-Schema, and the extra invalid
parameters will be filtered out of the input body instead of return
HTTPBadRequest 400.

But some of API are missing that protection, the JSON-Schema validation
was limited to the v2.1 API. This patch ensures those schema covers the
legacy v2 API.

Change-Id: Ie165b2a79efd56a299d2d4ebe40a6904a340414f
Closes-Bug: #1701451
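
For illustration only (this is not nova's validation code), the legacy-v2 behaviour the commit message describes, where unknown parameters are stripped instead of rejected with a 400:

    SCHEMA = {
        'type': 'object',
        'properties': {
            'evacuate': {
                'type': 'object',
                'properties': {
                    'host': {'type': 'string'},
                    'onSharedStorage': {'type': 'boolean'},
                },
            },
        },
    }


    def filter_unknown(body, schema):
        """Drop keys of each object that the schema does not declare."""
        allowed = schema.get('properties', {})
        out = {}
        for key, value in body.items():
            if key not in allowed:
                continue          # legacy v2: silently strip instead of 400
            sub = allowed[key]
            if sub.get('type') == 'object' and isinstance(value, dict):
                value = filter_unknown(value, sub)
            out[key] = value
        return out


    body = {'evacuate': {'host': 'compute-2', 'bogus_param': True}}
    print(filter_unknown(body, SCHEMA))
    # {'evacuate': {'host': 'compute-2'}}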


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1701451

Title:
  some legacy v2 API lose the protection of json-schema

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  JSON-Schema validation is supposed to cover input for the legacy v2
  compatible mode as well: for a legacy v2 request it won't return 400
  for extra invalid parameters, but instead filters the extra parameters
  out of the input body to protect the API from being broken by them.

  https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/evacuate.py#L75

  https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/migrate_server.py#L66

  https://github.com/openstack/nova/blob/68bbddd8aea8f8b2d671e0d675524a1e568eb773/nova/api/openstack/compute/server_groups.py#L166

  These should be fixed to cover legacy v2 requests, and the fix should
  be back-ported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1701451/+subscriptions



[Yahoo-eng-team] [Bug 1701287] [NEW] Dead link to Horizon Settings and Configuration

2017-06-30 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

From the 'Customize and configure the Dashboard'
(https://docs.openstack.org/developer/horizon/admin/customize-configure.html),
the link to 'Horizon Settings and Configuration'
(https://docs.openstack.org/developer/horizon/topics/settings.html) is
dead and reports a 404.

Basically this 'Horizon Settings and Configuration' page doesn't exist.

** Affects: horizon
 Importance: Low
 Assignee: Petr Kovar (pmkovar)
 Status: Confirmed

-- 
Dead link to Horizon Settings and Configuration
https://bugs.launchpad.net/bugs/1701287
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).



[Yahoo-eng-team] [Bug 1701287] Re: Dead link to Horizon Settings and Configuration

2017-06-30 Thread Petr Kovar
** Project changed: openstack-manuals => horizon

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
   Status: New => Confirmed

** Changed in: horizon
 Assignee: (unassigned) => Petr Kovar (pmkovar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1701287

Title:
  Dead link to Horizon Settings and Configuration

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  From the 'Customize and configure the Dashboard'
  (https://docs.openstack.org/developer/horizon/admin/customize-configure.html),
  the link to 'Horizon Settings and Configuration'
  (https://docs.openstack.org/developer/horizon/topics/settings.html) is
  dead and reports a 404.

  Basically this 'Horizon Settings and Configuration' page doesn't
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1701287/+subscriptions



[Yahoo-eng-team] [Bug 1701287] Re: Dead link to Horizon Settings and Configuration

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/479359
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4f376399024fede26663967d547b0ba6fc8d9032
Submitter: Jenkins
Branch:master

commit 4f376399024fede26663967d547b0ba6fc8d9032
Author: Petr Kovar 
Date:   Fri Jun 30 17:39:35 2017 +0200

[doc] Fix broken link

Change-Id: I42e876eb319cac57cc5223163be6c8ac67040a2d
Closes-Bug: #1701287


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1701287

Title:
  Dead link to Horizon Settings and Configuration

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  From the 'Customize and configure the Dashboard'
  (https://docs.openstack.org/developer/horizon/admin/customize-configure.html),
  the link to 'Horizon Settings and Configuration'
  (https://docs.openstack.org/developer/horizon/topics/settings.html) is
  dead and reports a 404.

  Basically this 'Horizon Settings and Configuration' page doesn't
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1701287/+subscriptions



[Yahoo-eng-team] [Bug 1697960] Re: enable_new_services=False should only auto-disable nova-compute services

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/474285
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=38cca9d90506a577025a4bc2c9b023f54123a252
Submitter: Jenkins
Branch:master

commit 38cca9d90506a577025a4bc2c9b023f54123a252
Author: Matt Riedemann 
Date:   Wed Jun 14 12:38:35 2017 -0400

Only auto-disable new nova-compute services

Change If1e03c9343b8cc9c34bd51c2b4d25acdb21131ff made the
os-services REST API only able to perform PUT actions on
nova-compute services, since those are the only ones with
host mappings in the API database. Attempting to enable or
disable a nova-scheduler service, for example, will fail with a
404 error now.

The enable_new_services config option is used to auto-disable
newly registered services to test them out before bringing them
into the pool of services for scheduling. This was really only
intended, and only makes sense for, nova-compute services. Disabling
scheduler, conductor, or API services does nothing functionally, and
requires the operator to later enable those services just to make
the GET /os-services output make sense.

This change makes the enable_new_services config option only have
an effect on auto-disabling new nova-compute services. All other
services are ignored and will not be auto-disabled. The config
option help text is updated to make this clear.

Change-Id: Ie9cb44d3f87ba85420e2909170f4d207ec4bf717
Closes-Bug: #1697960
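
A condensed sketch of the resulting behaviour (the helper below is illustrative; the real check lives in the Service creation path):

    def should_auto_disable(binary, enable_new_services):
        """Only newly registered nova-compute services get auto-disabled."""
        return not enable_new_services and binary == 'nova-compute'


    for binary in ('nova-compute', 'nova-scheduler', 'nova-conductor'):
        print(binary, should_auto_disable(binary, enable_new_services=False))
    # Only nova-compute prints True; the control services stay enabled.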


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1697960

Title:
  enable_new_services=False should only auto-disable nova-compute
  services

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This came up on the mailing list:

  http://lists.openstack.org/pipermail/openstack-operators/2017-June/013765.html

  And it was agreed that it can be considered a bug that the
  enable_new_services config option should only auto-disable new
  nova-compute services:

  http://lists.openstack.org/pipermail/openstack-operators/2017-June/013771.html

  It should not auto-disable things like nova-conductor, nova-scheduler
  or nova-osapi_compute, since (1) it doesn't make sense to disable
  those and (2) it just means the operator/admin has to enable them
  later to fix the nova service-list output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1697960/+subscriptions



[Yahoo-eng-team] [Bug 1701712] [NEW] Bandit scanning on Nova generates false positives of high severity issue "jinja2_autoescape_false"

2017-06-30 Thread Zhen Qin
Public bug reported:

In the report generated by Bandit when scanning the Nova code, there
are two security issues rated as high severity, shown below. We
believe that these two issues are false positives. Therefore, the lines
of Nova code that trigger these Bandit findings should be marked with
something like # nosec so that the results associated with them are no
longer reported by Bandit.

--
>> Issue: [B701:jinja2_autoescape_false] By default, jinja2 sets autoescape to False. Consider using autoescape=True to mitigate XSS vulnerabilities.
   Severity: High   Confidence: High
   Location: nova/console/xvp.py:113
112 tmpl_path, tmpl_file = os.path.split(CONF.injected_network_template)
113 env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path))
114 env.filters['pass_encode'] = self.fix_console_password

--
>> Issue: [B701:jinja2_autoescape_false] By default, jinja2 sets autoescape to False. Consider using autoescape=True to mitigate XSS vulnerabilities.
   Severity: High   Confidence: High
   Location: nova/virt/netutils.py:174
173 tmpl_path, tmpl_file = os.path.split(template)
174 env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path),
175  trim_blocks=True)
176 template = env.get_template(tmpl_file)

The reasons we think the above issues are false positives:
"When autoescaping is enabled, Jinja2 will filter input strings to escape any HTML content submitted via template variables. Without escaping HTML input the application becomes vulnerable to Cross Site Scripting (XSS) attacks."[1] However, the "injected_network_template" configured in nova.conf is a plain-text template with different rules, and is not intended to be executable. An example template is https://github.com/openstack/nova/blob/stable/ocata/nova/virt/interfaces.template

This bug exists in multiple releases of Nova, including master branch,
Ocata, Newton etc.

References:
[1] https://docs.openstack.org/developer/bandit/plugins/jinja2_autoescape_false.html
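
For illustration, roughly what the proposed suppression would look like (the placement and paths are illustrative, not the eventual patch):

    import jinja2

    # The template rendered here is a plain-text interfaces file, not HTML
    # served to a browser, so the B701 finding is suppressed as reviewed.
    env = jinja2.Environment(loader=jinja2.FileSystemLoader('/etc/nova'))  # nosec
    env.filters['pass_encode'] = lambda value: value   # placeholder filter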

** Affects: nova
 Importance: Undecided
 Assignee: Zhen Qin (zqinit)
 Status: New


** Tags: bandit

** Description changed:

  In the report generated by Bandit that scans against Nova code, there
  are two security issues estimated as high severity as shown below. We
  believe that these two issues are false positives. Therefore, the line
  of Nova codes that trigger such Bandit issues should be marked with
  something like # nosec so that any results associated with it will not
  be reported by Bandit.
  
  --
  >> Issue: [B701:jinja2_autoescape_false] By default, jinja2 sets autoescape 
to False. Consider using autoescape=True to mitigate XSS vulnerabilities.
-Severity: High   Confidence: High
-Location: nova/console/xvp.py:113
+    Severity: High   Confidence: High
+    Location: nova/console/xvp.py:113
  112   tmpl_path, tmpl_file = 
os.path.split(CONF.injected_network_template)
  113   env = 
jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path))
  114   env.filters['pass_encode'] = self.fix_console_password
  
  --
  >> Issue: [B701:jinja2_autoescape_false] By default, jinja2 sets autoescape 
to False. Consider using autoescape=True to mitigate XSS vulnerabilities.
-Severity: High   Confidence: High
-Location: nova/virt/netutils.py:174
+    Severity: High   Confidence: High
+    Location: nova/virt/netutils.py:174
  173   tmpl_path, tmpl_file = os.path.split(template)
  174   env = jinja2.Environment(loader=jinja2.FileSystemLoader(tmpl_path),
  175trim_blocks=True)
  176   template = env.get_template(tmpl_file)
  
  The reasons that we think the above issue is false positive are:
- "When autoescaping is enabled, Jinja2 will filter input strings to escape any 
HTML content submitted via template variables. Without escaping HTML input the 
application becomes vulnerable to Cross Site Scripting (XSS) attacks."[1] 
However, the "injected_network_template" configured in nova.conf is a template 
with text format with different rules, and intended to be executable. An 
example template is 
https://github.com/openstack/nova/blob/stable/ocata/nova/virt/interfaces.template
+ "When autoescaping is enabled, Jinja2 will filter input strings to escape any 
HTML content submitted via template variables. Without escaping HTML input the 
application becomes vulnerable to Cross Site Scripting (XSS) attacks."[1] 
However, the "injected_network_template" configured in nova.conf is a template 
with text format with different rules, and is not intended to be executable. An 
example template is 
https://github.com/openstack/nova/blob/stable/ocata/nova/virt/interface

[Yahoo-eng-team] [Bug 1701765] [NEW] SyntaxError: Undefined variable: '$cursor-disabled'.

2017-06-30 Thread Alan Pevec
Public bug reported:

After https://review.openstack.org/337936 was merged, compress in RPM
package build fails with:

Compressing... CommandError: An error occurred during rendering /builddir/build/BUILD/horizon-12.0.0.0b3.dev113/openstack_dashboard/templates/_stylesheets.html: Error evaluating expression:
 $cursor-disabled
 on line 70 of dashboard/scss/components/_spinners.scss
 imported from line 6 of dashboard/scss/horizon.scss
 imported from line 1 of u'string:0c838b58954113a8:\n// My Themes\n@import "/themes/default/variables";\n\n// Horizon\n@import "/dashboard/scss/horizon.'
 Traceback:
   File "/usr/lib64/python2.7/site-packages/scss/calculator.py", line 141, in evaluate_expression
     return ast.evaluate(self, divide=divide)
   File "/usr/lib64/python2.7/site-packages/scss/ast.py", line 346, in evaluate
     raise SyntaxError("Undefined variable: '%s'." % self.name)
 SyntaxError: Undefined variable: '$cursor-disabled'.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: alert ci

** Tags added: alert

** Tags added: ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1701765

Title:
  SyntaxError: Undefined variable: '$cursor-disabled'.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After https://review.openstack.org/337936 was merged, compress in RPM
  package build fails with:

  Compressing... CommandError: An error occurred during rendering /builddir/build/BUILD/horizon-12.0.0.0b3.dev113/openstack_dashboard/templates/_stylesheets.html: Error evaluating expression:
    $cursor-disabled
    on line 70 of dashboard/scss/components/_spinners.scss
    imported from line 6 of dashboard/scss/horizon.scss
    imported from line 1 of u'string:0c838b58954113a8:\n// My Themes\n@import "/themes/default/variables";\n\n// Horizon\n@import "/dashboard/scss/horizon.'
    Traceback:
      File "/usr/lib64/python2.7/site-packages/scss/calculator.py", line 141, in evaluate_expression
        return ast.evaluate(self, divide=divide)
      File "/usr/lib64/python2.7/site-packages/scss/ast.py", line 346, in evaluate
        raise SyntaxError("Undefined variable: '%s'." % self.name)
    SyntaxError: Undefined variable: '$cursor-disabled'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1701765/+subscriptions



[Yahoo-eng-team] [Bug 1696221] Re: Unnecessary instance lazy-loads during rebuild

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/471486
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c3d466dd4d9739844404fdd9c4170c4a8284a4aa
Submitter: Jenkins
Branch:master

commit c3d466dd4d9739844404fdd9c4170c4a8284a4aa
Author: Matt Riedemann 
Date:   Tue Jun 6 17:28:42 2017 -0400

Avoid unnecessary lazy-loads in mutated_migration_context

If the instance isn't being migrated and has the
migration_context attribute set to None, like during a rebuild
operation, we should detect that and exit early rather
than lazy-load several fields (numa_topology, pci_requests,
and pci_devices - all separately) and then just not use/need
them.

Change-Id: I071ab575bfd80db029d542cebfdb3d4e34227881
Closes-Bug: #1696221
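
A hedged sketch of the early exit (simplified; the real context manager is a method on nova's Instance object and the field names may differ):

    import contextlib


    @contextlib.contextmanager
    def mutated_migration_context(instance):
        if instance.migration_context is None:
            # Nothing to mutate: yield without touching numa_topology,
            # pci_requests or pci_devices, so no lazy-loads are triggered.
            yield
            return
        saved = (instance.numa_topology, instance.pci_requests,
                 instance.pci_devices)
        instance.numa_topology = instance.migration_context.new_numa_topology
        instance.pci_requests = instance.migration_context.new_pci_requests
        instance.pci_devices = instance.migration_context.new_pci_devices
        try:
            yield
        finally:
            (instance.numa_topology, instance.pci_requests,
             instance.pci_devices) = saved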


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1696221

Title:
  Unnecessary instance lazy-loads during rebuild

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The rebuild code in the compute manager also handles evacuate. Rebuild
  is rebuild on the same host, no migration. Evacuate is rebuild the
  instance on another host, and has a migration context.

  This code:

  https://github.com/openstack/nova/blob/e01ae75d52900d96355dfcb39ef9b136f0c0d5c4/nova/compute/manager.py#L2718

  Is using the mutated_migration_context() context manager which lazy-
  loads numa_topology, pci_requests and pci_devices and then, since
  self.migration_context isn't set on the instance, yields as a noop.

  Seen here:

  http://logs.openstack.org/82/471082/1/gate/gate-novaclient-dsvm-functional-neutron-ubuntu-xenial/796acb7/logs/screen-n-cpu.txt.gz#_Jun_06_13_14_02_547424

  Jun 06 13:14:02.547424 ubuntu-xenial-infracloud-chocolate-9158824 nova-compute[20994]: DEBUG nova.objects.instance [None req-5b3770c1-d332-4875-8933-97de8a9890b4 admin admin] Lazy-loading 'pci_requests' on Instance uuid 573258a4-9416-4e13-a765-7c90683f3526 {{(pid=20994) obj_load_attr /opt/stack/new/nova/nova/objects/instance.py:1038}}
  Jun 06 13:14:02.562243 ubuntu-xenial-infracloud-chocolate-9158824 nova-compute[20994]: DEBUG nova.objects.instance [None req-5b3770c1-d332-4875-8933-97de8a9890b4 admin admin] Lazy-loading 'pci_devices' on Instance uuid 573258a4-9416-4e13-a765-7c90683f3526 {{(pid=20994) obj_load_attr /opt/stack/new/nova/nova/objects/instance.py:1038}}
  Jun 06 13:14:02.577132 ubuntu-xenial-infracloud-chocolate-9158824 nova-compute[20994]: DEBUG nova.objects.instance [None req-5b3770c1-d332-4875-8933-97de8a9890b4 admin admin] Lazy-loading 'migration_context' on Instance uuid 573258a4-9416-4e13-a765-7c90683f3526 {{(pid=20994) obj_load_attr /opt/stack/new/nova/nova/objects/instance.py:1038}}
  Jun 06 13:14:02.590554 ubuntu-xenial-infracloud-chocolate-9158824 nova-compute[20994]: DEBUG nova.objects.instance [None req-5b3770c1-d332-4875-8933-97de8a9890b4 admin admin] [instance: 573258a4-9416-4e13-a765-7c90683f3526] Trying to apply a migration context that does not seem to be set for this instance {{(pid=20994) apply_migration_context /opt/stack/new/nova/nova/objects/instance.py:977}}

  This is wasteful as each lazy-loaded field is a round trip to the
  database via conductor to set the field. If self.migration_context
  isn't set, mutated_migration_context() should just yield and return.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1696221/+subscriptions



[Yahoo-eng-team] [Bug 1700977] Re: Volumes tab doesn't list any volumes

2017-06-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/479322
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=45665406b49680967d246de647a9b95a27e14b39
Submitter: Jenkins
Branch:master

commit 45665406b49680967d246de647a9b95a27e14b39
Author: Radomir Dopieralski 
Date:   Fri Jun 30 16:06:55 2017 +0200

Prefer volumev3 and volumev2 endpoints over volume

Make cinderclient use volumev3 or volumev2 endpoints, before
falling back to the volume endpoint, otherwise Horizon is
trying to use the v1 API, and that doesn't work with the
"sort" parameter that we are using, resulting in an empty
volumes list.

Change-Id: Id03988d89000c4bc976090c68a41ee320b9d43f7
Closes-bug: #1700977
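
The selection order described above, reduced to a sketch (the helper name is illustrative, not Horizon's api.cinder code):

    def pick_cinder_service_type(available_types):
        """Prefer the v3 endpoint, then v2, and only then the legacy v1 type."""
        for candidate in ('volumev3', 'volumev2', 'volume'):
            if candidate in available_types:
                return candidate
        raise ValueError('no Block Storage endpoint in the service catalog')


    print(pick_cinder_service_type(['volume', 'volumev2', 'volumev3']))  # volumev3
    print(pick_cinder_service_type(['volume']))                          # volume (v1 fallback)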


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1700977

Title:
  Volumes tab doesn't list any volumes

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  It seems that the Volumes tab doesn't show any volumes at the moment,
  even when there are some -- when I select "create volume" and "from
  volume", I have several volumes to choose from, but they don't show up
  in the main table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1700977/+subscriptions
