[Yahoo-eng-team] [Bug 1789351] [NEW] Glance deployment with python3 + "keystone" paste_deploy flavor Fails

2018-08-27 Thread yatin
Public bug reported:

This happens with oslo.config >= 6.3.0 ([1]) + python3 + the "keystone"
paste_deploy flavor + current glance (before
https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30
it works).
Testing in devstack: https://review.openstack.org/#/c/596380/

The glance-api service fails to start with the error below (reproduced here:
https://review.openstack.org/#/c/596380/):
ERROR: dictionary changed size during iteration (see logs below)

Failure logs from the job:
http://logs.openstack.org/80/596380/2/check/tempest-full-py3/514fa29/controller/logs/screen-g-api.txt.gz#_Aug_27_07_26_10_698243


The RuntimeError is raised in keystonemiddleware:
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L551
Code snippet:
    if self._conf.oslo_conf_obj != cfg.CONF:   # <-- fails here
        oslo_cache.configure(self._conf.oslo_conf_obj)

Debugging with pdb showed that an additional key (fatal_deprecations) is
added to cfg.CONF during that comparison, so the error is raised under
python3. With python2 the same key is added but no error is raised.
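
For illustration, a minimal, generic sketch of that failure mode in plain
Python (not the actual oslo.config/keystonemiddleware code path): mutating a
dict while something is iterating over it raises exactly this RuntimeError.

    # Sketch only: a plain dict standing in for the registered config options.
    opts = {'debug': False, 'use_user_token': True}

    try:
        for name in opts:
            # a lazily-registered option landing mid-iteration, similar to
            # fatal_deprecations being added to cfg.CONF during the comparison
            opts['fatal_deprecations'] = False
    except RuntimeError as exc:
        print(exc)  # "dictionary changed size during iteration"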

There are multiple ways to avoid it, e.g. use a paste_deploy configuration
that works (ex: keystone+cachemanagement), use oslo.config <= 6.2.0, use
python2, or update glance
(https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30,
as use_user_token has been deprecated for a long time).
With keystone+cachemanagement, all the config items are registered before
reaching the failure point in keystonemiddleware, so self._conf.oslo_conf_obj
!= cfg.CONF did not raise an error and returned a boolean. I don't know why.

But it seems like a real issue to me, as it may happen under python3 in other
places. So it would be good if the teams from the affected projects
(oslo.config, keystonemiddleware, glance) could look at it and fix it (not
just avoid it) in the best place.
To me it looks like keystonemiddleware is not handling the comparison of the
config objects properly under python3, since the conf is updated dynamically
(how? and when?).

- Can the oslo.config team check whether glance and keystonemiddleware are
handling/using oslo.config properly?
- I checked that keystone+cachemanagement has been the default in devstack
for the last 6 years; is the "keystone" flavor supported? If yes, it should
be fixed. It would also be good to clean up the options that have been
deprecated since Mitaka.
- If oslo.config is used incorrectly in keystonemiddleware/glance, it would
be good to fix it there.


Initially detected while testing with Fedora [2]; later I dug into why it
works in CI with Ubuntu and started [3].


[1] https://review.openstack.org/#/c/560094/
[2] https://review.rdoproject.org/r/#/c/14921/
[3] https://review.openstack.org/#/c/596380/

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: keystonemiddleware
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1789351

Title:
  Glance deployment with python3 + "keystone" paste_deploy flavor Fails

Status in Glance:
  New
Status in keystonemiddleware:
  New
Status in oslo.config:
  New

Bug description:
  This happens with oslo.config >= 6.3.0 ([1]) + python3 + the "keystone"
  paste_deploy flavor + current glance (before
  https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30
  it works).
  Testing in devstack: https://review.openstack.org/#/c/596380/

  The glance-api service fails to start with the error below (reproduced
  here: https://review.openstack.org/#/c/596380/):
  ERROR: dictionary changed size during iteration (see logs below)

  Failure logs from the job:
  http://logs.openstack.org/80/596380/2/check/tempest-full-py3/514fa29/controller/logs/screen-g-api.txt.gz#_Aug_27_07_26_10_698243


  The RuntimeError is raised in keystonemiddleware:
  https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L551
  Code snippet:
      if self._conf.oslo_conf_obj != cfg.CONF:   # <-- fails here
          oslo_cache.configure(self._conf.oslo_conf_obj)

  Debugging with pdb showed that an additional key (fatal_deprecations) is
  added to cfg.CONF during that comparison, so the error is raised under
  python3. With python2 the same key is added but no error is raised.

  There are multiple ways to avoid it, e.g. use a paste_deploy configuration
  that works (ex: keystone+cachemanagement), use oslo.config <= 6.2.0, use
  python2, or update glance
  (https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30,
  as use_user_token has been deprecated for a long time).
  With keystone+cachemanagement, all the config items are registered before
  reaching the failure point in keystonemiddleware, so
  self._conf.oslo_conf_obj != cfg.CONF did not raise an error and returned a
  boolean.

[Yahoo-eng-team] [Bug 1789340] [NEW] Nova boot server failed due to exception "Cannot complete the operation because the file or folder .vmdk already exists "

2018-08-27 Thread kirandevraaj
Public bug reported:

Nova boot server failed due to the exception
"vmware_base/373ca119-5b29-441d-bd9c-b2ccfbad/373ca119-5b29-441d-bd9c-b2ccfbad.20.vmdk
already exists"

Environment - 
Openstack Pike/Queens/Master
KVM computes - 4
VCenter + ESXi computes - 6

nova/conductor.log.1:2018-02-01 18:48:42.926 5826 ERROR
nova.scheduler.utils [req-4bcfad48-2a15-4aa4-8c32-ebb3798a8b09
47b7fb99c82c413caae4081c35b7456a f2ab30de47df41e488a77258c7a1c6a1 -
default default] [instance: 1c4830b5-f1f5-4bfb-bcd9-63c67aed127f] Error
from last host: oscompute01 (node domain-c74754.bcea1667-7332-4f99-8aae-
3f2bc39c2fc9): [u'Traceback (most recent call last):\n', u'  File
"/opt/mhos/openstack/nova/lib/python2.7/site-
packages/nova/compute/manager.py", line 1856, in
_do_build_and_run_instance\nfilter_properties)\n', u'  File
"/opt/mhos/openstack/nova/lib/python2.7/site-
packages/nova/compute/manager.py", line 2086, in
_build_and_run_instance\ninstance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
1c4830b5-f1f5-4bfb-bcd9-63c67aed127f was re-scheduled: Cannot complete
the operation because the file or folder [datastore-nfs]
vmware_base/373ca119-5b29-441d-bd9c-b2ccfbad/373ca119-5b29-441d-
bd9c-b2ccfbad.20.vmdk already exists\n']


neutron server log,
neutron/server.log.2:2018-02-01 18:48:53.788 5838 DEBUG neutron.notifiers.nova 
[-] Sending events: [{'tag': u'4c311f56-a73f-47a6-aa58-95f1a774396e', 'name': 
'network-vif-deleted', 'server_uuid': u'1c4830b5-f1f5-4bfb-bcd9-63c67aed127f'}] 
send_events 
/opt/mhos/openstack/neutron/lib/python2.7/site-packages/neutron/notifiers/nova.py:242
neutron/server.log.2:2018-02-01 18:48:54.317 5838 DEBUG novaclient.v2.client 
[-] REQ: curl -g -i -X POST 
http://192.168.7.7:8774/v2.1/86d8652e2b834d29a744c6c66a5fb65e/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H 
"X-Auth-Token: {SHA1}4befed70c1a44813879fce059e88ab310e29b039" -d '{"events": 
[{"tag": "4c311f56-a73f-47a6-aa58-95f1a774396e", "name": "network-vif-deleted", 
"server_uuid": "1c4830b5-f1f5-4bfb-bcd9-63c67aed127f"}]}' _http_log_request 
/opt/mhos/python/lib/python2.7/site-packages/keystoneauth1/session.py:375
neutron/server.log.2:2018-02-01 18:48:54.459 5838 DEBUG neutron.notifiers.nova 
[-] Nova returned NotFound for event: [{'tag': 
u'4c311f56-a73f-47a6-aa58-95f1a774396e', 'name': 'network-vif-deleted', 
'server_uuid': u'1c4830b5-f1f5-4bfb-bcd9-63c67aed127f'}] send_events 
/opt/mhos/openstack/neutron/lib/python2.7/site-packages/neutron/notifiers/nova.py:248
neutron/server.log:2018-02-05 06:17:38.871 5840 INFO neutron.wsgi 
[req-9413e35e-b050-4d88-b8c4-963b6cda07b0 d22bc5405f5841ad817bc6b75dd5b3df 
ca15303f1f2d4eb8af607f76b10e8250 - default default] 192.168.1.193,192.168.1.177 
"GET /v2.0/ports?device_id=1c4830b5-f1f5-4bfb-bcd9-63c67aed127f HTTP/1.1" 
status: 200  len: 205 time: 0.0538130

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1789340

Title:
  Nova boot server failed due to exception "Cannot complete the
  operation because the file or folder .vmdk already exists "

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova boot server failed due to the exception
  "vmware_base/373ca119-5b29-441d-bd9c-b2ccfbad/373ca119-5b29-441d-bd9c-b2ccfbad.20.vmdk
  already exists"

  Environment - 
  Openstack Pike/Queens/Master
  KVM computes - 4
  VCenter + ESXi computes - 6

  nova/conductor.log.1:2018-02-01 18:48:42.926 5826 ERROR
  nova.scheduler.utils [req-4bcfad48-2a15-4aa4-8c32-ebb3798a8b09
  47b7fb99c82c413caae4081c35b7456a f2ab30de47df41e488a77258c7a1c6a1 -
  default default] [instance: 1c4830b5-f1f5-4bfb-bcd9-63c67aed127f]
  Error from last host: oscompute01 (node
  domain-c74754.bcea1667-7332-4f99-8aae-3f2bc39c2fc9): [u'Traceback
  (most recent call last):\n', u'  File
  "/opt/mhos/openstack/nova/lib/python2.7/site-
  packages/nova/compute/manager.py", line 1856, in
  _do_build_and_run_instance\nfilter_properties)\n', u'  File
  "/opt/mhos/openstack/nova/lib/python2.7/site-
  packages/nova/compute/manager.py", line 2086, in
  _build_and_run_instance\ninstance_uuid=instance.uuid,
  reason=six.text_type(e))\n', u'RescheduledException: Build of instance
  1c4830b5-f1f5-4bfb-bcd9-63c67aed127f was re-scheduled: Cannot complete
  the operation because the file or folder [datastore-nfs]
  vmware_base/373ca119-5b29-441d-bd9c-b2ccfbad/373ca119-5b29-441d-
  bd9c-b2ccfbad.20.vmdk already exists\n']

  
  neutron server log,
  neutron/server.log.2:2018-02-01 18:48:53.788 5838 DEBUG 
neutron.notifiers.nova [-] Sending events: [{'tag': 
u'4c311f56-a73f-47a6-aa58-95f1a774396e', 'name': 'network-vif-deleted', 
'server_uuid': u'1c4830b5-f1f5-4b

[Yahoo-eng-team] [Bug 1787977] Re: Inefficient multi-cell instance list

2018-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/593131
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c3a77f80b1863e114109af9c32ea01b205c1a735
Submitter: Zuul
Branch:master

commit c3a77f80b1863e114109af9c32ea01b205c1a735
Author: Dan Smith 
Date:   Fri Aug 17 07:56:05 2018 -0700

Make instance_list perform per-cell batching

This makes the instance_list module support batching across cells
with a couple of different strategies, and with room to add more
in the future.

Before this change, an instance list with limit 1000 to a
deployment with 10 cells would generate a query to each cell
database with the same limit. Thus, that API request could end
up processing up to 10,000 instance records despite only
returning 1000 to the user (because of the limit).

This uses the batch functionality in the base code added in
Iaa4759822e70b39bd735104d03d4deec988d35a1
by providing a couple of strategies by which the batch size
per cell can be determined. These should provide a lot of gain
in the short term, and we can extend them with other strategies
as we identify some with additional benefits.

Closes-Bug: #1787977
Change-Id: Ie3a5f5dc49f8d9a4b96f1e97f8a6ea0b5738b768


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1787977

Title:
  Inefficient multi-cell instance list

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New

Bug description:
  This is based on some performance and scale testing done by Huawei,
  reported in this dev ML thread:

  http://lists.openstack.org/pipermail/openstack-
  dev/2018-August/133363.html

  In that scenario, they have 10 cells with 1 instances in each
  cell. They then run through a few GET /servers/detail scenarios with
  multiple cells and varying limits.

  The thread discussion pointed out that they were wasting time pulling
  1000 records (the default [api]/max_limit) from all 10 cells and then
  throwing away 9000 of those results, so the DB query time per cell was
  small, but the sqla/ORM/python was chewing up the time.

  Dan Smith has a series of changes here:

  https://review.openstack.org/#/q/topic:batched-inst-
  list+(status:open+OR+status:merged)

  Which allow us to batch the DB queries per cell which, when
  distributed across the 10 cells, e.g. 1000 / 10 = 100 batch size per
  cell, ends up cutting the time spent in about half (around 11 sec to
  around 6 sec).
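
  As a rough illustration of the batching strategy (names here are
  illustrative, not the actual nova code), the simplest strategy just
  divides the requested limit across the cells:

      def batch_size_per_cell(limit, num_cells):
          """Sketch: split the API limit evenly across cells.

          e.g. limit=1000 across 10 cells -> fetch 100 records per cell
          instead of 1000 records from every cell.
          """
          return max(1, limit // num_cells)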

  This is clearly a performance issue for which we have a fix, and we
  arguably should backport the fix.

  Note this is less of an issue for deployments that leverage the
  [api]/instance_list_per_project_cells option (like CERN):

  
https://docs.openstack.org/nova/latest/configuration/config.html#api.instance_list_per_project_cells

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1787977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774109] Re: OrphanedObjectError: Cannot call obj_load_attr on orphaned Instance object

2018-08-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1774109

Title:
  OrphanedObjectError: Cannot call obj_load_attr on orphaned Instance
  object

Status in OpenStack Compute (nova):
  Expired

Bug description:
  openstack:Pike

  # tailf /var/log/nova/nova-api.log
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions 
self.flavor = instance.flavor
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 67, in 
getter
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions 
self.obj_load_attr(name)
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 1029, in 
obj_load_attr
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions 
objtype=self.obj_name())
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions 
OrphanedObjectError: Cannot call obj_load_attr on orphaned Instance object
  2018-05-30 14:00:34.832 170348 ERROR nova.api.openstack.extensions
  2018-05-30 14:00:34.834 170348 INFO nova.api.openstack.wsgi 
[req-dde33c8b-7d91-4938-bab2-0410c046dd71 516e4eeaf0614802b3af422f40b140b6 
befcf911008745b69ca9e4c1fd1e868e - default default] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  
  2018-05-30 14:00:34.836 170348 INFO nova.osapi_compute.wsgi.server 
[req-dde33c8b-7d91-4938-bab2-0410c046dd71 516e4eeaf0614802b3af422f40b140b6 
befcf911008745b69ca9e4c1fd1e868e - default default] 172.20.239.50 "GET 
/v2.1/servers/detail HTTP/1.1" status: 500 len: 577 time: 0.2061019

  # nova --debug list
  DEBUG (extension:180) found extension EntryPoint.parse('v2token = 
keystoneauth1.loading._plugins.identity.v2:Token')
  DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = 
keystoneauth1.extras.oauth1._loading:V3OAuth1')
  DEBUG (extension:180) found extension EntryPoint.parse('admin_token = 
keystoneauth1.loading._plugins.admin_token:AdminToken')
  DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
  DEBUG (extension:180) found extension EntryPoint.parse('v2password = 
keystoneauth1.loading._plugins.identity.v2:Password')
  DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword = 
keystoneauth1.extras._saml2._loading:Saml2Password')
  DEBUG (extension:180) found extension EntryPoint.parse('v3password = 
keystoneauth1.loading._plugins.identity.v3:Password')
  DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken')
  DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
  DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = 
keystoneauth1.extras.kerberos._loading:Kerberos')
  DEBUG (extension:180) found extension EntryPoint.parse('token = 
keystoneauth1.loading._plugins.identity.generic:Token')
  DEBUG (extension:180) found extension 
EntryPoint.parse('v3oidcclientcredentials = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials')
  DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth = 
keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')
  DEBUG (extension:180) found extension EntryPoint.parse('v3token = 
keystoneauth1.loading._plugins.identity.v3:Token')
  DEBUG (extension:180) found extension EntryPoint.parse('v3totp = 
keystoneauth1.loading._plugins.identity.v3:TOTP')
  DEBUG (extension:180) found extension EntryPoint.parse('password = 
keystoneauth1.loading._plugins.identity.generic:Password')
  DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = 
keystoneauth1.extras.kerberos._loading:MappedKerberos')
  DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint = 
openstackclient.api.auth_plugin:TokenEndpoint')
  DEBUG (extension:180) found extension EntryPoint.parse('v1password = 
swiftclient.authv1:PasswordLoader')
  DEBUG (session:347) REQ: curl -g -i -X GET http://controller:35357/v3 -H 
"Accept: application/json" -H "User-Agent: nova keystoneauth1/2.18.0 
python-requests/2.11.1 CPython/2.7.5"
  INFO (connectionpool:214) Starting new HTTP connection (1): controller
  DEBUG (connectionpool:401) "GET /v3 HTTP/1.1" 200 250
  DEBUG (session:395) RESP: [200] Date: Wed, 30 May 2018 06:03:55 GMT Server: 
Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 Vary: X-Auth-Token 
x-openstack-request-id: req-21122cf8-a4c6-459e-9432-c

[Yahoo-eng-team] [Bug 1778989] Re: Keystone client is unable to correctly look up names of federated users

2018-08-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1778989

Title:
  Keystone client is unable to correctly look up names of federated
  users

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  When looking up a user in a domain, one can generally do this:

  openstack user show --domain testdomain testuser

  Unfortunately, if testuser is a federated user, the above command will
  fail.  For example:

    $ openstack domain list -c ID -c Name
    +----------------------------------+----------------------------------+
    | ID                               | Name                             |
    +----------------------------------+----------------------------------+
    | 2b47931027ef4b9e914ab158ef77ae07 | testdomain                       |
    | 3cb3f05971c243f08ec4715f228876f1 | heat_stack                       |
    | 6657bdf192594898a1b9b846296c5141 | 6657bdf192594898a1b9b846296c5141 |
    | default                          | Default                          |
    +----------------------------------+----------------------------------+

  In the above, 6657bdf192594898a1b9b846296c5141 is a domain for
  federated users that was auto-generated for an identity provider.
  There is one user in the domain:

    $ openstack user list --domain 6657bdf192594898a1b9b846296c5141
    +----------------------------------+--------+
    | ID                               | Name   |
    +----------------------------------+--------+
    | 428641fc53664e3ba66bd52ff64ce37e | larsks |
    +----------------------------------+--------+

  But the following command fails:

    $ openstack user show --domain 6657bdf192594898a1b9b846296c5141 larsks
    No user with a name or ID of 'larsks' exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1778989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779164] Re: openvswitch agent failed when adding esp security rule

2018-08-27 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779164

Title:
  openvswitch agent failed when adding esp security rule

Status in neutron:
  Expired

Bug description:
  When you add an esp rule with a port range, the openvswitch agent fails
  while syncing iptables rules with the error:
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ;
  Stdout: ; Stderr: iptables-restore v1.4.21: multiport only works with TCP,
  UDP, UDPLITE, SCTP and DCCP
  The rule looks like:
   -s 10.10.10.10/32 -p esp -m multiport --dports 1:65535 -j

  I suggest adding a filter for the port match, like the one added for icmp
  in commit
  https://git.openstack.org/cgit/openstack/neutron/commit/?id=9c64da0a642148750d7e930d77278aa0977edf81
  to prevent such behavior in the agent.
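
  A rough sketch of the suggested guard (illustrative only; names and
  structure are not the actual neutron iptables firewall code): only emit
  the multiport match for protocols iptables accepts it for, and drop it
  otherwise, as is already done for icmp.

      MULTIPORT_PROTOCOLS = ('tcp', 'udp', 'udplite', 'sctp', 'dccp')

      def port_match_args(protocol, port_min, port_max):
          if protocol not in MULTIPORT_PROTOCOLS:
              # e.g. esp: silently skip the port match instead of emitting
              # "-m multiport --dports", which iptables-restore rejects
              return []
          return ['-m', 'multiport', '--dports',
                  '%s:%s' % (port_min, port_max)]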

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789334] [NEW] Osprofiler works wrong

2018-08-27 Thread pengdake
Public bug reported:

I deployed OpenStack Queens with RDO and enabled osprofiler, but I can't get
trace info from osprofiler.
For example:
1. Run a command like "# openstack --os-profile b827cb61b00c08648377fe889bf000d5 network show private"
2. Get the trace info with "# osprofiler trace show --connection-string elasticsearch://192.168.8.102:9200 --json 4a68f7fb-1f4b-4d04-bf93-7bfbd127fffa"

Trace with UUID 4a68f7fb-1f4b-4d04-bf93-7bfbd127fffa not found. Please
check the HMAC key used in the command.


Config(/etc/neutron/neutron.conf):
[profiler]
enabled = True
trace_sqlalchemy = True
hmac_keys = X
connection_string = elasticsearch://192.168.8.102:9200

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1789334

Title:
  Osprofiler works wrong

Status in neutron:
  New

Bug description:
  I deployed OpenStack Queens with RDO and enabled osprofiler, but I can't
  get trace info from osprofiler.
  For example:
  1. Run a command like "# openstack --os-profile b827cb61b00c08648377fe889bf000d5 network show private"
  2. Get the trace info with "# osprofiler trace show --connection-string elasticsearch://192.168.8.102:9200 --json 4a68f7fb-1f4b-4d04-bf93-7bfbd127fffa"

  Trace with UUID 4a68f7fb-1f4b-4d04-bf93-7bfbd127fffa not found. Please
  check the HMAC key used in the command.


  Config(/etc/neutron/neutron.conf):
  [profiler]
  enabled = True
  trace_sqlalchemy = True
  hmac_keys = X
  connection_string = elasticsearch://192.168.8.102:9200

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1789334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789325] [NEW] Remove network name from instance > IP address column

2018-08-27 Thread Satish Patel
Public bug reported:

I wouldn't say this is a bug, but I would like to have control over the
columns so I can add/remove them.

I have attached a screenshot.

In my screenshot you can see that when I create a dual-NIC VM, the
"IP Address" column prints the network name alongside each IP address.
That is unnecessary and looks very ugly when I have hundreds of VMs and
want to view them in Horizon. I believe printing just the IP address is
enough, without the network name. Is there a way I can customize, add,
or remove that value in Horizon?

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "os_instance.png"
   
https://bugs.launchpad.net/bugs/1789325/+attachment/5181489/+files/os_instance.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1789325

Title:
  Remove network name from instance > IP address column

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I wouldn't say this is a bug, but I would like to have control over the
  columns so I can add/remove them.

  I have attached a screenshot.

  In my screenshot you can see that when I create a dual-NIC VM, the
  "IP Address" column prints the network name alongside each IP address.
  That is unnecessary and looks very ugly when I have hundreds of VMs and
  want to view them in Horizon. I believe printing just the IP address is
  enough, without the network name. Is there a way I can customize, add,
  or remove that value in Horizon?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1789325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789322] [NEW] ImportError: cannot import name fake_notifier

2018-08-27 Thread YAMAMOTO Takashi
Public bug reported:

eg. http://logs.openstack.org/87/199387/121/check/openstack-tox-
py27/9567b48/job-output.txt.gz

2018-08-27 00:50:08.241865 | ubuntu-xenial | Failed to import test module: 
midonet.neutron.tests.unit.test_extension_fwaas
2018-08-27 00:50:08.241970 | ubuntu-xenial | Traceback (most recent call last):
2018-08-27 00:50:08.242331 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
2018-08-27 00:50:08.242478 | ubuntu-xenial | module = 
self._get_module_from_name(name)
2018-08-27 00:50:08.242857 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
2018-08-27 00:50:08.242940 | ubuntu-xenial | __import__(name)
2018-08-27 00:50:08.243136 | ubuntu-xenial |   File 
"midonet/neutron/tests/unit/test_extension_fwaas.py", line 18, in 
2018-08-27 00:50:08.243339 | ubuntu-xenial | from 
neutron_fwaas.tests.unit.services.firewall import test_fwaas_plugin as tfp
2018-08-27 00:50:08.243793 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/unit/services/firewall/test_fwaas_plugin.py",
 line 20, in 
2018-08-27 00:50:08.243906 | ubuntu-xenial | from neutron.tests import 
fake_notifier
2018-08-27 00:50:08.244014 | ubuntu-xenial | ImportError: cannot import name 
fake_notifier

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress


** Tags: fwaas gate-failure

** Tags added: fwaas gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1789322

Title:
  ImportError: cannot import name fake_notifier

Status in neutron:
  In Progress

Bug description:
  eg. http://logs.openstack.org/87/199387/121/check/openstack-tox-
  py27/9567b48/job-output.txt.gz

  2018-08-27 00:50:08.241865 | ubuntu-xenial | Failed to import test module: 
midonet.neutron.tests.unit.test_extension_fwaas
  2018-08-27 00:50:08.241970 | ubuntu-xenial | Traceback (most recent call 
last):
  2018-08-27 00:50:08.242331 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  2018-08-27 00:50:08.242478 | ubuntu-xenial | module = 
self._get_module_from_name(name)
  2018-08-27 00:50:08.242857 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  2018-08-27 00:50:08.242940 | ubuntu-xenial | __import__(name)
  2018-08-27 00:50:08.243136 | ubuntu-xenial |   File 
"midonet/neutron/tests/unit/test_extension_fwaas.py", line 18, in 
  2018-08-27 00:50:08.243339 | ubuntu-xenial | from 
neutron_fwaas.tests.unit.services.firewall import test_fwaas_plugin as tfp
  2018-08-27 00:50:08.243793 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/unit/services/firewall/test_fwaas_plugin.py",
 line 20, in 
  2018-08-27 00:50:08.243906 | ubuntu-xenial | from neutron.tests import 
fake_notifier
  2018-08-27 00:50:08.244014 | ubuntu-xenial | ImportError: cannot import name 
fake_notifier

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1789322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788922] Re: SRIOVServersTest.test_create_server_with_VF intermittently fails due to "FileNotFoundError: [Errno 2] No such file or directory: '/home/zuul/src/git.openstack.org/op

2018-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/596815
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ecfcf8653815f658c0399b000e9386adc121312b
Submitter: Zuul
Branch:master

commit ecfcf8653815f658c0399b000e9386adc121312b
Author: Stephen Finucane 
Date:   Mon Aug 27 16:28:32 2018 +0100

privsep: Handle ENOENT when checking for direct IO support

We've seen a recent issue that suggests direct IO support checks can fail
in valid ways other than EINVAL; namely, failures with ENOENT (surfaced as
the FileNotFoundError exception, which is a Python 3-only exception type)
can occur. While we can't test for this without breaking Python 2.7
support, we can mimic it by checking for the errno attribute of the
OSError exception. Do this.

Change-Id: I8aab86bb62cbc8ad538c706af037a30437c7964d
Closes-Bug: #1788922
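
For context, a simplified sketch of the probe described in the commit message
(illustrative only, not nova's exact supports_direct_io()), treating both
EINVAL and ENOENT as "no direct IO support":

    import errno
    import os

    def supports_direct_io(dirpath):
        testfile = os.path.join(dirpath, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            return True
        except OSError as e:
            # FileNotFoundError is an OSError subclass on Python 3, so
            # checking e.errno keeps this working on Python 2 and 3 alike.
            if e.errno in (errno.EINVAL, errno.ENOENT):
                return False
            raise
        finally:
            if fd is not None:
                os.close(fd)
            try:
                os.unlink(testfile)
            except OSError:
                pass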


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1788922

Title:
  SRIOVServersTest.test_create_server_with_VF intermittently fails due
  to "FileNotFoundError: [Errno 2] No such file or directory:
  '/home/zuul/src/git.openstack.org/openstack/nova/instances/.directio.test'"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/71/594571/2/gate/nova-tox-functional-
  py35/fd2d9ac/testr_results.html.gz

  2018-08-24 16:36:47,192 ERROR [nova.compute.manager] Instance failed to spawn
  Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/compute/manager.py", line 
2354, in _build_resources
  yield resources
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/compute/manager.py", line 
2118, in _build_and_run_instance
  block_device_info=block_device_info)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 3075, in spawn
  mdevs=mdevs)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 5430, in _get_guest_xml
  context, mdevs)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 5216, in _get_guest_config
  flavor, guest.os_type)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 3995, in _get_guest_storage_config
  inst_type)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 3903, in _get_guest_disk_config
  self.disk_cachemode,
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/virt/libvirt/driver.py", 
line 416, in disk_cachemode
  if not nova.privsep.utils.supports_direct_io(CONF.instances_path):
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/privsep/utils.py", line 
62, in supports_direct_io
  {'path': dirpath, 'ex': e})
File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py35/lib/python3.5/site-packages/six.py",
 line 693, in reraise
  raise value
File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/privsep/utils.py", line 
45, in supports_direct_io
  fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
  FileNotFoundError: [Errno 2] No such file or directory: 
'/home/zuul/src/git.openstack.org/openstack/nova/instances/.directio.test'

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22FileNotFoundError%3A%20%5BErrno%202%5D%20No%20such%20file%20or%20directory%3A%20'%2Fhome%2Fzuul%2Fsrc%2Fgit.openstack.org%2Fopenstack%2Fnova%2Finstances%2F.directio.test'%5C%22%20AND%20tags%3A%5C%22console%5C%22&from=7d

  Just started, so it's likely related to these changes:

  https://review.openstack.org/#/c/595802/
  https://review.openstack.org/#/c/407055/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1788922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786519] Re: debugging why NoValidHost with placement challenging

2018-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/590041
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b5ab9f5acec172d16e46876f60ca338434483905
Submitter: Zuul
Branch:master

commit b5ab9f5acec172d16e46876f60ca338434483905
Author: Jay Pipes 
Date:   Wed Aug 8 17:11:25 2018 -0400

[placement] split gigantor SQL query, add logging

This patch modifies the code paths for the non-granular request group
allocation candidates processing. It removes the giant multi-join SQL
query and replaces it with multiple calls to
_get_providers_with_resource(), logging the number of matched providers
for each resource class requested and filter (on required traits,
forbidden traits and aggregate membership).

Here are some examples of the debug output:

- A request for three resources with no aggregate or trait filters:

 found 7 providers with available 5 VCPU
 found 9 providers with available 1024 MEMORY_MB
 found 5 providers after filtering by previous result
 found 8 providers with available 1500 DISK_GB
 found 2 providers after filtering by previous result

- The same request, but with a required trait that nobody has, shorts
  out quickly:

 found 0 providers after applying required traits filter 
(['HW_CPU_X86_AVX2'])

- A request for one resource with aggregates and forbidden (but no
  required) traits:

 found 2 providers after applying aggregates filter 
([['3ed8fb2f-4793-46ee-a55b-fdf42cb392ca']])
 found 1 providers after applying forbidden traits filter ([u'CUSTOM_TWO', 
u'CUSTOM_THREE'])
 found 3 providers with available 4 VCPU
 found 1 providers after applying initial aggregate and trait filters

Co-authored-by: Eric Fried 
Closes-Bug: #1786519
Change-Id: If9ddb8a6d2f03392f3cc11136c4a0b026212b95b


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786519

Title:
  debugging why NoValidHost with placement challenging

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With the advent of placement, the FilterScheduler no longer provides
  granular information about which class of resource (disk, VCPU, RAM)
  is not available in sufficient quantities to allow a host to be found.

  This is because placement is now making those choices and does not
  (yet) break down the results of its queries into easy to understand
  chunks. If it returns zero results all you know is "we didn't have
  enough resources". Nothing about which resources.

  This can be fixed by changing the way the queries are made so that
  there is a series of queries. After each one, a report of how many
  results are left can be made.
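
  A heavily simplified sketch of that "series of queries, with a report
  after each" idea (illustrative only; sets of provider IDs stand in for
  the real SQL, and the helper name simply mirrors the one mentioned in
  the commit message above):

      import logging

      LOG = logging.getLogger(__name__)

      def get_provider_ids(resources, get_providers_with_resource):
          # resources: dict mapping resource class -> requested amount;
          # get_providers_with_resource returns a set of provider IDs.
          filtered = None
          for rc, amount in resources.items():
              providers = get_providers_with_resource(rc, amount)
              LOG.debug("found %d providers with available %d %s",
                        len(providers), amount, rc)
              filtered = providers if filtered is None else filtered & providers
              LOG.debug("found %d providers after filtering by previous result",
                        len(filtered))
              if not filtered:
                  break
          return filtered or set()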

  While this is relatively straightforward to do for the (currently)
  common simple non-nested and non-sharing providers situation, it will
  be more difficult for the non-simple cases. Therefore, it makes sense
  to have different code paths for simple and non-simple allocation
  candidate queries. This will also result in performance gains for the
  common case.

  See this email thread for additional discussion and reports of
  problems in the wild: http://lists.openstack.org/pipermail/openstack-
  dev/2018-August/132735.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777491] Re: Avoid redundant compute node update

2018-08-27 Thread Eric Fried
*** This bug is a duplicate of bug 1729621 ***
https://bugs.launchpad.net/bugs/1729621

** This bug has been marked a duplicate of bug 1729621
   Inconsistent value for vcpu_used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777491

Title:
  Avoid redundant compute node update

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  _update_available_resource() in nova/compute/resource_tracker.py invokes
  _init_compute_node(), which internally calls _update(), and then _update()
  is invoked once again at the end of _update_available_resource().
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L762

  This triggers update_provider_tree() or get_inventory() on the virt
  driver, scanning all resources twice within the same method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789040] Re: requests should be declared in requirements.txt instead of test-requirements

2018-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/596552
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=de3e48479893c3a2d23a5b05654674fcf839ceda
Submitter: Zuul
Branch:master

commit de3e48479893c3a2d23a5b05654674fcf839ceda
Author: Akihiro Motoki 
Date:   Sun Aug 26 03:48:46 2018 +0900

Move requests to requirements.txt

requests is used in non-test code in horizon
(openstack_dashboard/exceptions.py).
It should be declared in requirements.txt.

Closes-Bug: #1789040
Change-Id: I325b5344d45f797d256bb213093082927068a88e


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1789040

Title:
  requests should be declared in requirements.txt instead of test-
  requirements

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  requests is used in openstack_dashboard/exceptions.py, but it is only
  declared in test-requirements.txt. It should be declared in
  requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1789040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775382] Re: neutron-openvswitch-agent cannot start on Windows

2018-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/567621
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fee630efaa74ce3d19344e5b3a331fb2fbd9
Submitter: Zuul
Branch:master

commit fee630efaa74ce3d19344e5b3a331fb2fbd9
Author: Claudiu Belu 
Date:   Thu May 10 18:26:23 2018 +0300

Fix neutron-openvswitch-agent Windows support

Currently, the neutron-openvswitch-agent does not start on Windows
due to Linux specific imports. This patch addresses this issue.

Also, we're wrapping the object returned by subprocess.Popen using
tpool.Proxy in order to prevent IO operations on the stream
handles from blocking other threads. Currently, the ovs db monitor
blocks the whole process.

Closes-Bug: #1775382

Co-Authored-By: Lucian Petrut 
Change-Id: I8bbc9d1f8332e5644a6071f599a7c6a66bef7928
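
A minimal sketch of the tpool-wrapping idea from the commit message
(illustrative; not the exact neutron code):

    import subprocess

    from eventlet import tpool

    def create_process(cmd):
        # Wrap the Popen object so blocking reads/writes on its stdout,
        # stderr and stdin handles run in native threads instead of
        # blocking the whole eventlet hub (e.g. the ovsdb monitor).
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        return tpool.Proxy(proc)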


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775382

Title:
  neutron-openvswitch-agent cannot start on Windows

Status in neutron:
  Fix Released

Bug description:
  Currently, the neutron-openvswitch-agent cannot start on Windows [1]
  due to various Linux-centric modules being imported on Windows.

  This issue only affects master.

  
  [1] http://paste.openstack.org/show/722788/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788631] Re: KeyError: 'used' security_group_rule quota missing 'used' key

2018-08-27 Thread Corey Bryant
** Also affects: charm-neutron-api
   Importance: Undecided
   Status: New

** Changed in: charm-neutron-api
   Importance: Undecided => High

** Changed in: charm-neutron-api
   Status: New => Triaged

** Changed in: charm-neutron-api
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1788631

Title:
  KeyError: 'used' security_group_rule quota missing 'used' key

Status in OpenStack neutron-api charm:
  Triaged
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  On rocky rc1, after attempting to log in to the dashboard I hit:

  Internal Server Error: /horizon/project/
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/django/core/handlers/exception.py", 
line 41, in inner
  response = get_response(request)
    ...
    File "/usr/lib/python2.7/dist-packages/openstack_dashboard/usage/views.py", 
line 163, in _process_chart_section
  used = self.usage.limits[key]['used']
  KeyError: 'used'

  Full traceback: https://paste.ubuntu.com/p/RcMCjWs8HG/

  From openstack_dashboard/usage/views.py:

  def _process_chart_section(self, chart_defs):
      charts = []
      for t in chart_defs:
          if t.quota_key not in self.usage.limits:
              continue
          key = t.quota_key
          used = self.usage.limits[key]['used']  # <--- KeyError
          quota = self.usage.limits[key]['quota']

  Further debugging shows we're failing on key='security_group_rule'

  chart_def=ChartDef(quota_key='security_group_rule', label=u'Security Group 
Rules', used_phrase=None, filters=None)
  self.usage.limits[key]={'quota': 100}

  Notice there's no 'used' key in self.usage.limits. Compare that vs
  'security_group' which has:

  chart_def=ChartDef(quota_key='security_group', label=u'Security Groups', 
used_phrase=None, filters=None)
  self.usage.limits[key]={'available': 9, 'used': 1, 'quota': 10}

  From openstack_dashboard/usage/quotas.py:

  def tally(self, name, value):
      """Adds to the "used" metric for the given quota."""
      value = value or 0  # Protection against None.
      # Start at 0 if this is the first value.
      if 'used' not in self.usages[name]:
          self.usages[name]['used'] = 0

  I haven't confirmed it, but it seems that tally() is what initializes the
  'used' key, and for some reason that initialization is not happening here.
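
  For comparison, a defensive read on the dashboard side would avoid the
  traceback (a sketch only; not necessarily how the actual fix was written):

      # Treat a missing 'used' entry as zero instead of raising KeyError.
      used = self.usage.limits[key].get('used', 0)
      quota = self.usage.limits[key]['quota']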

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1788631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784374] Re: Image Service API v2 `filter` and `detail` missing

2018-08-27 Thread Brian Rosmaita
1. There is no 'images/detail' path in the Image API v2.
2. Property names are used directly as filters in the v2 API; no prefix is
required.
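
For example (a hedged sketch using python-requests; the endpoint and token
below are placeholders, not real values):

    import requests

    # Filter on an image property by using the property name directly as a
    # query parameter -- no "property-" prefix and no /images/detail path.
    resp = requests.get('http://glance.example.com:9292/v2/images',
                        params={'os_distro': 'ubuntu'},
                        headers={'X-Auth-Token': 'REDACTED'})
    print(resp.json()['images'])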

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1784374

Title:
  Image Service API v2 `filter` and `detail` missing

Status in Glance:
  Invalid

Bug description:
  - [X] This doc is inaccurate in this way: the document doesn't mention the
  `images/detail` URL, and it doesn't state that all filter properties need
  to be prefixed with `property-`
  - [X] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  ---
  Release:  on 2018-07-27 07:29
  SHA: ff77f59bd4376be3bed8f8c62258f9973b7ef1f2
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/api-ref/source/v2/index.rst
  URL: https://developer.openstack.org/api-ref/image/v2/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1784374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1789172] [NEW] Can't update image metadata in 14.0.0.0rc2.dev44

2018-08-27 Thread ByungYeol Woo
Public bug reported:

I can't update image metadata using Horizon 14.0.0.0rc2.dev44,
but I could update the metadata using the openstack CLI.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1789172

Title:
  Can't update image metadata in 14.0.0.0rc2.dev44

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I can't update image metadata using Horizon 14.0.0.0rc2.dev44,
  but I could update the metadata using the openstack CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1789172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp