[Yahoo-eng-team] [Bug 1668141] Re: provide API for admin user to show neutron configure

2017-04-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1668141

Title:
  provide API for admin user to show neutron configure

Status in neutron:
  Expired

Bug description:
  problem:
  Physical networks are defined in neutron's configuration file, and the admin 
has no way to view this system configuration through the API.
  Some higher-level applications need this information. For example, Horizon 
lets the admin create a network for a target project, but it cannot populate 
the physical-network select box with the physical networks neutron supports, 
nor the network types neutron supports.

  expectation:
  Provide an API that returns the neutron server configuration as JSON.
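A hypothetical sketch of what such a JSON response could look like (every key and value below is invented for illustration; no such neutron API exists in this report):

```python
import json

# Hypothetical payload; keys and values are invented for illustration only.
config = {
    "network_types": ["flat", "vlan", "vxlan"],      # e.g. ml2 type drivers
    "physical_networks": ["physnet1", "physnet2"],   # from the config file
}
print(json.dumps(config, indent=2))
```

With a payload like this, Horizon could fill its select boxes directly from the API instead of requiring out-of-band knowledge of the server's config file.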

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1668141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687139] [NEW] dsid_missing_source of datasource OpenStack

2017-04-28 Thread bonzo
Public bug reported:

A new feature in cloud-init identified possible datasources for
this system as:
  ['Ec2', 'None']
However, the datasource used was: OpenStack

In the future, cloud-init will only attempt to use datasources that
are identified or specifically configured.
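A possible workaround (an assumption on my part, not confirmed in this report) is to pin cloud-init to the OpenStack datasource with a drop-in config file, which should silence this warning on future boots:

```
# /etc/cloud/cloud.cfg.d/99-force-openstack.cfg (file name is illustrative)
datasource_list: [ OpenStack ]
```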

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: dsid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1687139

Title:
  dsid_missing_source of datasource OpenStack

Status in cloud-init:
  New

Bug description:
  A new feature in cloud-init identified possible datasources for
  this system as:
    ['Ec2', 'None']
  However, the datasource used was: OpenStack

  In the future, cloud-init will only attempt to use datasources that
  are identified or specifically configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1687139/+subscriptions



[Yahoo-eng-team] [Bug 1684994] Re: POST v3/auth/tokens API is returning unexpected 500 error when ldap credentials are incorrect

2017-04-28 Thread Matthew Edmonds
I don't think this is totally invalid. It's right to return a 500, but I
think we could improve the error message that goes with that. I.e., add
code to raise LDAPServerConnectionError once the bug Breton opened in
comment 6 is addressed.

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684994

Title:
  POST v3/auth/tokens API is returning unexpected 500 error when ldap
  credentials are incorrect

Status in OpenStack Identity (keystone):
  New

Bug description:
  When keystone is configured with an LDAP server as the identity backend, and 
incorrect credentials are configured under the [ldap] section [1] of the domain 
conf file, a POST request to the /v3/auth/tokens API for users in LDAP returns 
an unexpected 500 error [0] with the stacktrace [2] shown below.
  Instead of an unexpected error, the user should be given a proper message 
about the invalid credentials configured.

  [0]
  {"error": {"message": "An unexpected error prevented the server from 
fulfilling your request.", "code": 500, "title": "Internal Server Error"}}

  [1]
  [ldap]
  url = ldap://9.9.9.9
  user = cn=root
  password = <>

  [2]Stacktrace: 
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi 
[req-7b62d1db-64bd-4961-819e-0815bc355636 
02b49a455f5c9d9561881683c0f09919c5ab38a6eeed6de5c4ae3523df2dc706 
36b96caa022742a1b74692b29bd044a7 - 3ae481350a504cbdaf35e18b8753d002 
3ae481350a504cbdaf35e18b8753d002] {'desc': 'Invalid credentials'}
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi result = 
method(req, **params)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 235, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
request, filters, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 230, 
in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi refs = 
self.identity_api.list_users(domain_scope=domain, hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 123, in 
wrapped
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 413, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 423, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1027, in 
list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi ref_list = 
self._handle_federated_attributes_in_hints(driver, hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1010, in 
_handle_federated_attributes_in_hints
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
driver.list_users(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 88, in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
self.user.get_all_filtered(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 353, in get_all_filtered
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi for user in 
self.get_all(query, hints)]
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 345, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/common.py", 
line 1872, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
super(EnabledEmuMixIn, self).get_all(ldap_filter, hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1685634] Re: Correct oauth create_request_token documentation

2017-04-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/459114
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c8ffdf0bf60ab2d2095f5d4cd301586271f274f1
Submitter: Jenkins
Branch:master

commit c8ffdf0bf60ab2d2095f5d4cd301586271f274f1
Author: Felipe Monteiro 
Date:   Sun Apr 23 16:41:46 2017 +0100

Correct oauth create_request_token documentation

Currently, the oauth documentation for the `create_request_token`
endpoint is incorrect. The parameter "requested_project_id" [0]
is actually spelled "Request-Project-Id" and is located in the
header, not the body, of the request object [1].

[0] 
https://developer.openstack.org/api-ref/identity/v3-ext/?expanded=create-request-token-detail
[1] 
https://github.com/openstack/keystone/blob/master/keystone/oauth1/controllers.py#L220

Change-Id: Ib249efffc1e7a14635ab5d767cb70caa8b8baf0f
Closes-Bug: #1685634


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1685634

Title:
  Correct oauth create_request_token documentation

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, the oauth documentation for the `create_request_token`
  endpoint is incorrect. The parameter "requested_project_id" [0]
  is actually spelled "Request-Project-Id" and is located in the
  header, not the body, of the request object [1].

  [0] 
https://developer.openstack.org/api-ref/identity/v3-ext/?expanded=create-request-token-detail
  [1] 
https://github.com/openstack/keystone/blob/e5edf3fc2823cdfc079efac0026e8f970c212677/keystone/oauth1/controllers.py#L220
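A minimal sketch of the corrected request shape per the commit above (the endpoint path and all values are placeholders; the header name is taken verbatim from the bug report):

```python
# Sketch only: after the doc fix, the project id travels in a request
# header, not in the request body. All values below are placeholders.
endpoint = "https://keystone.example.com/v3/OS-OAUTH1/request_token"
headers = {"Request-Project-Id": "PROJECT_ID"}  # header, per the fix
body = {}  # "requested_project_id" no longer belongs here

assert "requested_project_id" not in body
```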

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1685634/+subscriptions



[Yahoo-eng-team] [Bug 1687115] [NEW] LDAPServerConnectionError gives out too much info

2017-04-28 Thread Boris Bobrov
Public bug reported:

Exception LDAPServerConnectionError
(https://git.openstack.org/cgit/openstack/keystone/tree/keystone/exception.py?h=12.0.0.0b1#n597)
is now implemented as a subclass of Error. It gives out too much info
about setup (that LDAP is used) and it should not set its error code.

Instead, it should be implemented as subclass of UnexpectedError and
debug_message_format should be used, like in
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/exception.py?h=12.0.0.0b1#n491
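A rough sketch of the proposed shape (class bodies are simplified stand-ins, not keystone's real code; attribute names follow keystone's message_format convention):

```python
# Simplified stand-ins for keystone's exception classes: the API-facing
# message stays generic (HTTP 500), while the LDAP detail is reserved
# for operator debug logs.
class UnexpectedError(Exception):
    code = 500
    title = "Internal Server Error"
    message_format = ("An unexpected error prevented the server "
                      "from fulfilling your request.")
    debug_message_format = None

    def __init__(self, **kwargs):
        fmt = self.debug_message_format or self.message_format
        self.debug_message = fmt % kwargs       # for debug logs only
        super().__init__(self.message_format)   # what the user sees


class LDAPServerConnectionError(UnexpectedError):
    debug_message_format = ("Unable to establish a connection to "
                            "LDAP Server (%(url)s)")


err = LDAPServerConnectionError(url="ldap://9.9.9.9")
print(str(err))           # generic; no hint that LDAP is in use
print(err.debug_message)  # detail surfaces only in debug logging
```

This way the API response no longer reveals that LDAP is the backend, which is the information leak described above.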

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687115

Title:
  LDAPServerConnectionError gives out too much info

Status in OpenStack Identity (keystone):
  New

Bug description:
  Exception LDAPServerConnectionError
  
(https://git.openstack.org/cgit/openstack/keystone/tree/keystone/exception.py?h=12.0.0.0b1#n597)
  is now implemented as a subclass of Error. It gives out too much info
  about setup (that LDAP is used) and it should not set its error code.

  Instead, it should be implemented as subclass of UnexpectedError and
  debug_message_format should be used, like in
  
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/exception.py?h=12.0.0.0b1#n491

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687115/+subscriptions



[Yahoo-eng-team] [Bug 1680183] Re: neutron-keepalived-state-change fails with "AssertionError: do not call blocking functions from the mainloop"

2017-04-28 Thread Ihar Hrachyshka
We still hit the issue, though the trace is a bit different now:

http://logs.openstack.org/38/284738/69/check/gate-neutron-dsvm-
fullstack-ubuntu-xenial/2e022c5/logs/syslog.txt.gz

Apr 28 17:24:20 ubuntu-xenial-rax-ord-8648308 
neutron-keepalived-state-change[21615]: 2017-04-28 17:24:20.423 21615 CRITICAL 
neutron [-] AssertionError: do not call blocking functions from the mainloop

  2017-04-28 17:24:20.423 21615 ERROR neutron Traceback (most recent call 
last):

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/bin/neutron-keepalived-state-change",
 line 10, in 

  2017-04-28 17:24:20.423 21615 ERROR neutron sys.exit(main())

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/cmd/keepalived_state_change.py", line 19, in 
main

  2017-04-28 17:24:20.423 21615 ERROR neutron 
keepalived_state_change.main()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/l3/keepalived_state_change.py", line 156, 
in main

  2017-04-28 17:24:20.423 21615 ERROR neutron 
cfg.CONF.monitor_cidr).start()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/linux/daemon.py", line 253, in start

  2017-04-28 17:24:20.423 21615 ERROR neutron self.run()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/l3/keepalived_state_change.py", line 69, 
in run

  2017-04-28 17:24:20.423 21615 ERROR neutron for iterable in 
self.monitor:

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/neutron/agent/linux/async_process.py", line 261, in 
_iter_queue

  2017-04-28 17:24:20.423 21615 ERROR neutron yield 
queue.get(block=block)

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get

  2017-04-28 17:24:20.423 21615 ERROR neutron return waiter.wait()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait

  2017-04-28 17:24:20.423 21615 ERROR neutron return get_hub().switch()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch

  2017-04-28 17:24:20.423 21615 ERROR neutron return 
self.greenlet.switch()

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run

  2017-04-28 17:24:20.423 21615 ERROR neutron self.wait(sleep_time)

  2017-04-28 17:24:20.423 21615 ERROR neutron   File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait

  2017-04-28 17:24:20.423 21615 ERROR neutron presult = 
self.do_poll(seconds)
 

[Yahoo-eng-team] [Bug 1684994] Re: POST v3/auth/tokens API is returning unexpected 500 error when ldap credentials are incorrect

2017-04-28 Thread Boris Bobrov
We are now giving error code 500, and this is the correct code. 504 is
Gateway Timeout, which means that one server did not receive a timely
response from another server. Here there is a timely response, and the
response says that the server is misconfigured.

> but the error in the logs leaks information to user that keystone is
configured with LDAP as identity backend

Logs are an ops-only thing. Users don't see logs; only operators do.

Sorry, I still believe the current behavior is exactly what we want.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684994

Title:
  POST v3/auth/tokens API is returning unexpected 500 error when ldap
  credentials are incorrect

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When keystone is configured with an LDAP server as the identity backend, and 
incorrect credentials are configured under the [ldap] section [1] of the domain 
conf file, a POST request to the /v3/auth/tokens API for users in LDAP returns 
an unexpected 500 error [0] with the stacktrace [2] shown below.
  Instead of an unexpected error, the user should be given a proper message 
about the invalid credentials configured.

  [0]
  {"error": {"message": "An unexpected error prevented the server from 
fulfilling your request.", "code": 500, "title": "Internal Server Error"}}

  [1]
  [ldap]
  url = ldap://9.9.9.9
  user = cn=root
  password = <>

  [2]Stacktrace: 
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi 
[req-7b62d1db-64bd-4961-819e-0815bc355636 
02b49a455f5c9d9561881683c0f09919c5ab38a6eeed6de5c4ae3523df2dc706 
36b96caa022742a1b74692b29bd044a7 - 3ae481350a504cbdaf35e18b8753d002 
3ae481350a504cbdaf35e18b8753d002] {'desc': 'Invalid credentials'}
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi result = 
method(req, **params)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 235, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
request, filters, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 230, 
in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi refs = 
self.identity_api.list_users(domain_scope=domain, hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 123, in 
wrapped
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 413, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 423, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1027, in 
list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi ref_list = 
self._handle_federated_attributes_in_hints(driver, hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1010, in 
_handle_federated_attributes_in_hints
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
driver.list_users(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 88, in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
self.user.get_all_filtered(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 353, in get_all_filtered
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi for user in 
self.get_all(query, hints)]
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 345, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1684994] Re: POST v3/auth/tokens API is returning unexpected 500 error when ldap credentials are incorrect

2017-04-28 Thread Matthew Edmonds
That I would agree with.

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1684994

Title:
  POST v3/auth/tokens API is returning unexpected 500 error when ldap
  credentials are incorrect

Status in OpenStack Identity (keystone):
  New

Bug description:
  When keystone is configured with an LDAP server as the identity backend, and 
incorrect credentials are configured under the [ldap] section [1] of the domain 
conf file, a POST request to the /v3/auth/tokens API for users in LDAP returns 
an unexpected 500 error [0] with the stacktrace [2] shown below.
  Instead of an unexpected error, the user should be given a proper message 
about the invalid credentials configured.

  [0]
  {"error": {"message": "An unexpected error prevented the server from 
fulfilling your request.", "code": 500, "title": "Internal Server Error"}}

  [1]
  [ldap]
  url = ldap://9.9.9.9
  user = cn=root
  password = <>

  [2]Stacktrace: 
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi 
[req-7b62d1db-64bd-4961-819e-0815bc355636 
02b49a455f5c9d9561881683c0f09919c5ab38a6eeed6de5c4ae3523df2dc706 
36b96caa022742a1b74692b29bd044a7 - 3ae481350a504cbdaf35e18b8753d002 
3ae481350a504cbdaf35e18b8753d002] {'desc': 'Invalid credentials'}
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in 
__call__
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi result = 
method(req, **params)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 235, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
request, filters, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/controllers.py", line 230, 
in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi refs = 
self.identity_api.list_users(domain_scope=domain, hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 123, in 
wrapped
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 413, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 423, in 
wrapper
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1027, in 
list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi ref_list = 
self._handle_federated_attributes_in_hints(driver, hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 1010, in 
_handle_federated_attributes_in_hints
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
driver.list_users(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 88, in list_users
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
self.user.get_all_filtered(hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 353, in get_all_filtered
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi for user in 
self.get_all(query, hints)]
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/core.py", 
line 345, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi hints=hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/common.py", 
line 1872, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi return 
super(EnabledEmuMixIn, self).get_all(ldap_filter, hints)
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/ldap/common.py", 
line 1518, in get_all
  2017-04-20 09:09:08.304 12300 ERROR keystone.common.wsgi for x in 
self._ldap_get_all(hints, ldap_filter)]
  2017-04-20 09:09:08.304 12300 ERROR 

[Yahoo-eng-team] [Bug 1315201] Re: test_create_server TimeoutException failed while waiting for server to build in setup

2017-04-28 Thread Ken'ichi Ohmichi
The gate is in good shape now.

** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315201

Title:
  test_create_server TimeoutException failed while waiting for server to
  build in setup

Status in OpenStack Compute (nova):
  Expired
Status in tempest:
  Fix Released

Bug description:
  There are already several timeout-related bugs, but none really fits a
  timeout while building the server in setup for this test, and it's not
  really the same as bug 1254890 as far as where it fails in Tempest,
  though it could potentially be a similar issue under the covers in nova.

  http://logs.openstack.org/37/84037/8/check/check-grenade-dsvm-partial-
  ncpu/ab64155/console.html

  message:"Details\: Server" AND message:"failed to reach ACTIVE status
  and task state \"None\" within the required time" AND message:"Current
  status\: BUILD. Current task state\: spawning." AND tags:console

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsc1xcOiBTZXJ2ZXJcIiBBTkQgbWVzc2FnZTpcImZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzIGFuZCB0YXNrIHN0YXRlIFxcXCJOb25lXFxcIiB3aXRoaW4gdGhlIHJlcXVpcmVkIHRpbWVcIiBBTkQgbWVzc2FnZTpcIkN1cnJlbnQgc3RhdHVzXFw6IEJVSUxELiBDdXJyZW50IHRhc2sgc3RhdGVcXDogc3Bhd25pbmcuXCIgQU5EIHRhZ3M6Y29uc29sZSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5ODk4ODE3Njc0OX0=

  48 hits in 7 days, all failures, check and gate, several different
  jobs.  Since it's a timeout there isn't an error in the nova logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1315201/+subscriptions



[Yahoo-eng-team] [Bug 1687086] [NEW] nova fails to rescue an instance because ramdisk file doesn't exist

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/67/457467/4/gate/gate-tempest-dsvm-neutron-
dvr-ubuntu-
xenial/4d6be0a/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-04-20_16_18_59_065

2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager 
[req-26543eff-dd70-4526-bec6-fc977ea734dc 
tempest-ServerRescueNegativeTestJSON-295821689 
tempest-ServerRescueNegativeTestJSON-295821689] [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] Error trying to Rescue Instance
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] Traceback (most recent call last):
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3370, in rescue_instance
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rescue_image_meta, admin_password)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2636, in rescue
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self._create_domain(xml, 
post_xml_callback=gen_confdrive)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5002, in _create_domain
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] guest.launch(pause=pause)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 145, in launch
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self._encoded_xml, errors='ignore')
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] self.force_reraise()
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] six.reraise(self.type_, self.value, 
self.tb)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 140, in launch
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] return 
self._domain.createWithFlags(flags)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rv = execute(f, *args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] six.reraise(c, e, tb)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] rv = meth(*args, **kwargs)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1065, in 
createWithFlags
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2017-04-20 16:18:59.065 2166 ERROR nova.compute.manager [instance: 
6e63ccaa-f174-4371-a169-d5303db821eb] libvirtError: unable to stat: 
/opt/stack/data/nova/instances/6e63ccaa-f174-4371-a169-d5303db821eb/ramdisk.rescue:
 No such file or directory
2017-04-20 

[Yahoo-eng-team] [Bug 1436314] Re: Option to boot VM only from volume is not available

2017-04-28 Thread Ken'ichi Ohmichi
There has been no activity on the Tempest side for a long time, and we
could not get enough feedback on the Nova team's questions. It would be
nice to drop this bug report from the Tempest queue.

** Changed in: tempest
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436314

Title:
  Option to boot VM only from volume is not available

Status in OpenStack Compute (nova):
  Opinion
Status in tempest:
  Won't Fix

Bug description:
  Issue:
  When a service provider wants to use only the boot-from-volume option for 
booting a server, the integration tests fail. There is no option in Tempest to 
boot servers only from volume.

  Expected :

  a parameter in tempest.conf, boot_from_volume_only, that applies to
  all tests except the image tests.

  
  $ nova boot --flavor FLAVOR_ID [--image IMAGE_ID] / [ --boot-volume 
BOOTABLE_VOLUME]
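The requested behaviour can be illustrated with a small sketch (the helper and the boot_from_volume_only flag are hypothetical names for this illustration; Tempest's real configuration layer is more involved): a single option decides whether a server-create request carries an image reference or a block-device mapping that boots from a volume.

```python
def build_boot_request(flavor_id, image_id, boot_from_volume_only=False):
    """Build kwargs for a server-create call.

    When boot_from_volume_only is set, the image reference is replaced by
    a block device mapping that boots from a volume created from that
    image (a hypothetical shape mirroring `nova boot --boot-volume`).
    """
    request = {"flavorRef": flavor_id}
    if boot_from_volume_only:
        request["block_device_mapping_v2"] = [{
            "boot_index": 0,
            "uuid": image_id,          # source image for the volume
            "source_type": "image",
            "destination_type": "volume",
            "volume_size": 1,
            "delete_on_termination": True,
        }]
    else:
        request["imageRef"] = image_id
    return request
```

With such a flag in tempest.conf, every test except the image tests could route its server boots through the block-device path.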

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1436314/+subscriptions



[Yahoo-eng-team] [Bug 1014647] Re: Tempest has no test for soft reboot

2017-04-28 Thread Ken'ichi Ohmichi
"soft reboot" makes the gate unstable, and there was not activity for this bug 
in long-term.
So we need to drop this bug report from our queue.

** Changed in: tempest
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1014647

Title:
  Tempest has no test for soft reboot

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Won't Fix

Bug description:
  1. soft reboot requires support from the guest to operate. The current nova 
implementation tells the guest to reboot and
  then waits. If the soft reboot did not happen, it triggers a hard reboot but 
after a default wait of 2 minutes.

  Solution: Provide a new soft_reboot_image_ref option, defaulting to
  None, that is used for the soft reboot tests. If the value is None
  then the test is skipped.

  2. Because of (1), we should only use soft reboot when we are actually
  testing that feature.

  3. The current soft reboot test does not check that a soft reboot was
  done rather than hard. It should check for the server state of REBOOT.
  Same issue for the hard reboot test.
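The proposal above can be sketched roughly as follows (the option, the boot_server helper, and the server object's API are hypothetical names for this sketch, not actual Tempest code): skip the soft-reboot test unless a cooperating guest image is configured, and assert the transient REBOOT state rather than only the final ACTIVE one.

```python
import unittest

SOFT_REBOOT_IMAGE_REF = None  # would come from tempest.conf in the proposal


class RebootTest(unittest.TestCase):
    def setUp(self):
        if SOFT_REBOOT_IMAGE_REF is None:
            # Without a guest that honours the reboot request, a soft
            # reboot silently degrades to a hard reboot after the
            # 2-minute fallback, so the test proves nothing.
            self.skipTest("soft_reboot_image_ref not configured")

    def test_soft_reboot(self):
        server = boot_server(SOFT_REBOOT_IMAGE_REF)  # hypothetical helper
        server.reboot(reboot_type="SOFT")
        # Point (3) above: check the transient server state.
        self.assertEqual("REBOOT", server.status)
```

With the option unset, the test case is reported as skipped instead of passing vacuously.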

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1014647/+subscriptions



[Yahoo-eng-team] [Bug 1685881] Re: l3-agent-router-add doesn't error/warn about router already existing on agent

2017-04-28 Thread Miguel Lavalle
@Drew,

Thanks. Since this is not causing continuous operational issues, I am
going to mark it invalid. If you feel we should pursue it further,
please feel free to change it back.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685881

Title:
  l3-agent-router-add doesn't error/warn about router already existing
  on agent

Status in OpenStack neutron-api charm:
  New
Status in neutron:
  Invalid

Bug description:
  We had an incident on a network that ended up with random packet
  drops between nodes within the cloud, and outside of the cloud when
  crossing L3 routers.

  Steps to reproduce:
  juju set neutron-api min-agents-per-router=2
  juju set neutron-api max-agents-per-router=2
  juju set neutron-api l2-population=false
  juju set neutron-api enable-l3ha=true
  for i in $(neutron router-list -f value -c id); do
  neutron router-update $i --admin-state-up False
  neutron router-update $i --ha=true
  neutron router-update $i --admin-state-up True
  done
  juju set neutron-api max-agents-per-router=3
  for i in $(neutron router-list -f value -c id); do
neutron l3-agent-list-hosting-router $i
for j in $(neutron agent-list -f value -c id); do
  neutron l3-agent-router-add $j $i
done
  done
  sleep 120 #for settle
  for i in $(neutron router-list -f value -c id); do
neutron l3-agent-list-hosting-router $i
  done

  Potentially you may see two active l3-agents for a given router.  (We
  saw this corresponded to rabbitmq messaging failures concurrent with
  this activity).  Our environment had 9 active routers.

  You'll notice that there's no error that comes out of adding a router
  to an agent it's already running on.
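The missing feedback could be as simple as a duplicate check on the router-to-agent binding. A minimal sketch of the idea (made-up names, not the actual Neutron scheduler code):

```python
class RouterAlreadyHostedError(Exception):
    """Raised instead of silently accepting a duplicate binding."""


def add_router_to_agent(bindings, agent_id, router_id):
    """Bind a router to an l3-agent, refusing silent duplicates.

    bindings: dict mapping agent_id -> set of hosted router ids.
    """
    hosted = bindings.setdefault(agent_id, set())
    if router_id in hosted:
        raise RouterAlreadyHostedError(
            "router %s is already hosted by agent %s" % (router_id, agent_id))
    hosted.add(router_id)
```

An error (or at least a warning) here would have surfaced the duplicate l3-agent-router-add calls in the loop above.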

  After making these updates, we found that ssh and RDP sessions to the
  floating IPs associated with VMs across several different
  networks/routers were exhibiting random session drops as if the route
  were hosted in multiple locations and we were getting an asymmetric
  route issue.

  We had to revert to --ha=false and enable-l3ha=false before we could
  gather deeper info/SOS reports.  May be able to reproduce in lab at
  some point in the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1685881/+subscriptions



[Yahoo-eng-team] [Bug 1687074] [NEW] Sometimes ovsdb fails with "tcp:127.0.0.1:6640: error parsing stream"

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Example (Ocata): http://logs.openstack.org/67/460867/1/check/gate-
neutron-dsvm-functional-ubuntu-xenial/382d800/logs/dsvm-functional-
logs/neutron.tests.functional.agent.ovsdb.test_impl_idl.ImplIdlTestCase.test_post_commit_vswitchd_completed_no_failures.txt.gz

2017-04-28 07:59:01.430 11929 WARNING neutron.agent.ovsdb.native.vlog [-] 
tcp:127.0.0.1:6640: error parsing stream: line 0, column 1, byte 1: syntax 
error at beginning of input
2017-04-28 07:59:01.431 11929 DEBUG neutron.agent.ovsdb.impl_idl [-] Running 
txn command(idx=0): AddBridgeCommand(name=test-brc6de03bf, may_exist=True, 
datapath_type=None) do_commit neutron/agent/ovsdb/impl_idl.py:100
2017-04-28 07:59:01.433 11929 DEBUG neutron.agent.ovsdb.impl_idl [-] OVSDB 
transaction returned TRY_AGAIN, retrying do_commit 
neutron/agent/ovsdb/impl_idl.py:111
2017-04-28 07:59:01.433 11929 WARNING neutron.agent.ovsdb.native.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Protocol error)

If we look at logstash here:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22tcp%3A127.0.0.1%3A6640%3A%20error%20parsing%20stream%5C%22

We see some interesting data points, sometimes it actually logs what's
in the buffer, and I see instances of:

2017-04-27 19:02:51.755
[neutron.tests.functional.tests.common.exclusive_resources.test_port.TestExclusivePort.test_port]
3300 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 1355, byte 1355: invalid keyword 'id'

2017-04-27 14:22:02.294
[neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_get_events_native_]
3433 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'new'

2017-04-27 04:46:17.667
[neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_bad_address_allocation]
4136 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'ace'

2017-04-26 18:04:59.110
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_correct_protection_allowed_address_pairs]
3477 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'err'

2017-04-25 19:00:01.452
[neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_agent_mtu_set_on_interface_driver]
4251 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'set'

2017-04-25 16:34:11.355
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_fails_incorrect_mac_protection]
3332 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 5, byte 5: invalid keyword 'tatus'

2017-04-25 03:28:25.858
[neutron.tests.functional.agent.ovsdb.test_impl_idl.ImplIdlTestCase.test_post_commit_vswitchd_completed_no_failures]
4112 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 3, byte 3: invalid keyword 'set'

2017-04-24 21:59:39.094
[neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_protection_port_security_disabled]
3682 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: error
parsing stream: line 0, column 5, byte 5: invalid keyword 'rsion'

Terry says it doesn't resemble the protocol, but some random crap,
potentially from some random place in memory (SCARY!)
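The reported keywords look like suffixes of ordinary JSON keys ('rsion' from 'version', 'tatus' from 'status'), which fits the reading that the parser is seeing a valid message with its leading bytes missing. A rough illustration in plain Python (not the actual ovs stream parser, which reports the "invalid keyword" errors):

```python
import json

# A complete JSON message as ovsdb-server might send it.
message = '{"version": "7.14.0"}'

# A reader that loses the first few bytes of the stream sees a fragment
# that starts mid-keyword -- compare "invalid keyword 'rsion'" above.
fragment = message[4:]
print(fragment)  # rsion": "7.14.0"}

try:
    json.loads(fragment)
    parse_ok = True
except ValueError:
    parse_ok = False
print(parse_ok)  # False
```

So the garbage in the buffer need not come from random memory; dropped or mis-framed leading bytes of an otherwise valid message produce exactly these errors.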

** Affects: neutron
 Importance: High
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: Confirmed


** Tags: fullstack functional-tests gate-failure ovs

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

** Changed in: ovsdbapp
   Status: New => Confirmed

** Tags added: fullstack functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687074

Title:
  Sometimes ovsdb fails with "tcp:127.0.0.1:6640: error parsing stream"

Status in neutron:
  Confirmed
Status in ovsdbapp:
  Confirmed


[Yahoo-eng-team] [Bug 1687073] [NEW] Keystone Memory usage remains high

2017-04-28 Thread Prajeesh Murukan
Public bug reported:

I found something interesting while doing a quick load test of keystone
/ newton . When I started the load test the memory usage for keystone
processes (admin and public wsgi) went up – and it never came down even
hours after the test is stopped.  Also found few errors in the log
(given below ) .  E

Also, I found that many functions in resource/backends/sql.py do not close the 
sessions once opened.
Do we need to close the sessions explicitly? Is that the reason for the 
persistently high memory usage?
The error below is thrown during the test. I guess the error may be due to 
settings in keystone.conf; I am not sure it has anything to do with memory 
cleanup.
Attached is the script to execute the stress test. It will launch 40 threads 
and hit keystone at the same time.
--
Error-

2017-04-28 14:17:20.702 653 INFO keystone.common.wsgi 
[req-651d1776-9e5c-405c-82d2-3efe7dbcd5f3 - - - - -] POST 
http://10.10.10.2:5000/v3/auth/tokens
2017-04-28 14:17:20.878 691 INFO keystone.common.wsgi 
[req-8bd6baa6-976d-41b5-817a-554b3a7d6c54 - - - - -] GET 
http://192.168.204.2:35357/v3/
2017-04-28 14:17:20.898 691 INFO keystone.common.wsgi 
[req-da74eaeb-34d9-4190-8477-fd14a16fab3f b94369832d4d41cea555a9e98c216dd7 
f9f5aa29f7994730b0fc845aaba5ade5 - default default] GET 
http://192.168.204.2:35357/v3/projects
2017-04-28 14:17:20.915 692 INFO keystone.common.wsgi 
[req-38122a48-2c97-4b41-b3a0-b2a9062c68ac - - - - -] POST 
http://10.10.10.2:5000/v3/auth/tokens
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi 
[req-2208fddc-6801-4a9c-a6fd-22cfd310427d - - - - -] QueuePool limit of size 1 
overflow 10 reached, connection timed out, timeout 30
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi result = method(req, 
**params)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 397, in 
authenticate_for_token
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi auth_info = 
AuthInfo.create(auth=auth)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 137, in 
create
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi 
auth_info._validate_and_normalize_auth_data(scope_only)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 310, in 
_validate_and_normalize_auth_data
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi 
self._validate_and_normalize_scope_data()
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 252, in 
_validate_and_normalize_scope_data
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi project_ref = 
self._lookup_project(self.auth['scope']['project'])
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 215, in 
_lookup_project
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi domain_ref = 
self._lookup_domain(project_info['domain'])
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 189, in 
_lookup_domain
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi domain_ref = 
self.resource_api.get_domain(domain_id)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1220, in 
decorate
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi should_cache_fn)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 825, in 
get_or_create
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi async_creator) as 
value:
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/dogpile/lock.py", line 154, in __enter__
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi return self._enter()
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/dogpile/lock.py", line 94, in _enter
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi generated = 
self._enter_create(createdtime)
2017-04-28 14:17:21.001 653 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/dogpile/lock.py", line 145, in _enter_create
2017-04-28 
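If sessions are indeed left open, as the report suggests, the usual remedy is to scope each one in a context manager so the connection always goes back to the pool, even on error. A minimal stand-in sketch of the pattern (a toy pool, not Keystone's or SQLAlchemy's actual classes):

```python
import contextlib


class ToyPool(object):
    """Stand-in for a connection pool with a checkout counter."""

    def __init__(self):
        self.checked_out = 0

    def acquire(self):
        self.checked_out += 1
        return object()          # stand-in for a session/connection

    def release(self, _session):
        self.checked_out -= 1


@contextlib.contextmanager
def session_scope(pool):
    """Guarantee the session is returned to the pool, even on error."""
    session = pool.acquire()
    try:
        yield session
    finally:
        pool.release(session)


pool = ToyPool()
for _ in range(40):              # mimic the 40-thread stress loop
    with session_scope(pool):
        pass                     # run the query here
```

Without the `finally` release, each leaked session would pin a pool slot, which is consistent with hitting the "QueuePool limit of size 1 overflow 10" error under load.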

[Yahoo-eng-team] [Bug 1627106] Re: TimeoutException while executing tests adding bridge using OVSDB native

2017-04-28 Thread Ihar Hrachyshka
This still happens. At least once in Ocata functional job:
http://logs.openstack.org/67/460867/1/check/gate-neutron-dsvm-
functional-ubuntu-xenial/382d800/console.html

Also logstash shows 45 hits overall for "message:"exceeded timeout 10
seconds"", almost all of them in fullstack now.

** Changed in: neutron
   Status: Fix Released => Confirmed

** Changed in: neutron
Milestone: pike-1 => pike-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627106

Title:
  TimeoutException while executing tests adding bridge using OVSDB
  native

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/91/366291/12/check/gate-neutron-dsvm-
  functional-ubuntu-trusty/a23c816/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 62, in 
test_post_commit_vswitchd_completed_no_failures
  self._add_br_and_test()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 56, in 
_add_br_and_test
  self._add_br()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 52, in 
_add_br
  tr.add(ovsdb.add_br(self.brname))
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 72, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands 
[AddBridgeCommand(name=test-br6925d8e2, datapath_type=None, may_exist=True)] 
exceeded timeout 10 seconds

  
  I suspect this one may hit us because we finally made the timeout
work with Icd745514adc14730b9179fa7a6dd5c115f5e87a5.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627106/+subscriptions



[Yahoo-eng-team] [Bug 1687067] [NEW] problems with cpu and cpu-thread policy where flavor/image specify different settings

2017-04-28 Thread Chris Friesen
Public bug reported:

There are a number of issues related to CPU policy and CPU thread policy
where the flavor extra-spec and image properties do not match up.

The docs at https://docs.openstack.org/admin-guide/compute-cpu-
topologies.html say the following:

"Image metadata takes precedence over flavor extra specs. Thus,
configuring competing policies causes an exception. By setting a shared
policy through image metadata, administrators can prevent users
configuring CPU policies in flavors and impacting resource utilization."

For the CPU policy this is exactly backwards based on the code.  The
flavor is specified by the admin, and so it generally takes priority
over the image which is specified by the end user.  If the flavor
specifies "dedicated" then the result is dedicated regardless of what
the image specifies.  If the flavor specifies "shared" then the result
depends on the image--if it specifies "dedicated" then we will raise an
exception, otherwise we use "shared".  If the flavor doesn't specify a
CPU policy then the image can specify whatever policy it wants.
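The precedence described above can be written down as a small resolution function (a sketch of the observed behaviour, not Nova's code; ValueError stands in for Nova's real exception):

```python
def resolve_cpu_policy(flavor_policy, image_policy):
    """Resolve flavor vs. image cpu_policy per the behaviour in the code.

    Policies are 'dedicated', 'shared', or None (unset).
    """
    if flavor_policy == "dedicated":
        return "dedicated"                 # flavor always wins here
    if flavor_policy == "shared":
        if image_policy == "dedicated":
            raise ValueError("image cpu_policy forbidden by flavor")
        return "shared"
    # Flavor unset: the image may specify whatever policy it wants.
    return image_policy
```

Note how this is the opposite of the documented "image metadata takes precedence": the flavor, set by the admin, dominates whenever it is set.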

The issue around CPU threading policy is more complicated.

Back in Mitaka, if the flavor specified a CPU threading policy of either
None or "prefer" then we would use the threading policy specified by the
image (if it was set).  If the flavor specified a CPU threading policy
of "isolate" or "require" and the image specified a different CPU
threading policy then we raised
exception.ImageCPUThreadPolicyForbidden(), otherwise we used the CPU
threading policy specified by the flavor.  This behaviour is described
in the spec at https://specs.openstack.org/openstack/nova-
specs/specs/mitaka/implemented/virt-driver-cpu-thread-pinning.html

In git commit 24997343 (which went into Newton) Nikola Dipanov made a
code change that doesn't match the intent in the git commit message:

 if flavor_thread_policy in [None, fields.CPUThreadAllocationPolicy.PREFER]:
-cpu_thread_policy = image_thread_policy
+cpu_thread_policy = flavor_thread_policy or image_thread_policy

The effect of this is that if the flavor specifies a CPU threading
policy of "prefer" then we will use a policy of "prefer" regardless of
the policy from the image.  If the flavor specifies a CPU threading
policy of None then we will use the policy from the image.

This is a bug, because the original intent was to treat None and
"prefer" identically, since "prefer" was just an explicit way to specify
the default behaviour.
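The one-line change and its effect can be reproduced directly. This sketch covers only the disputed branch (policies as plain strings; the flavor 'isolate'/'require' conflict handling is omitted):

```python
def resolve_mitaka(flavor_policy, image_policy):
    # Original intent: None and 'prefer' are identical, both defer
    # to the image's thread policy.
    if flavor_policy in (None, "prefer"):
        return image_policy
    return flavor_policy


def resolve_newton(flavor_policy, image_policy):
    # After commit 24997343: 'prefer' is truthy, so it now wins
    # over the image instead of deferring to it.
    if flavor_policy in (None, "prefer"):
        return flavor_policy or image_policy
    return flavor_policy
```

The regression: with flavor 'prefer' and image 'isolate', Mitaka honoured the image's policy, while Newton silently ignores it.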

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687067

Title:
  problems with cpu and cpu-thread policy where flavor/image specify
  different settings

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1687065] [NEW] functional tests are filled with POLLIN messages from ovs even when it's not using ovs itself

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Example: http://logs.openstack.org/27/451527/5/check/gate-neutron-dsvm-
functional-ubuntu-xenial/da67f5f/logs/dsvm-functional-
logs/neutron.tests.functional.agent.linux.test_linuxbridge_arp_protect.LinuxBridgeARPSpoofTestCase.test_arp_protection_update.txt.gz

This test has nothing to do with ovs, but it is still trashed with
POLLIN messages, probably because some previous test case in the worker
initialized ovslib and got a logging thread spun up.

Ideally, we would not have the thread running in non-ovs scope, meaning
we would need some way to kill/disable it when not needed. Maybe a
fixture in ovsdbapp for that matter (or ovs lib itself) that would
restore the state to pre-init could help. Then we could use the fixture
in our base test classes.
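The fixture idea could look roughly like this (a generic sketch with a stand-in worker thread; the real fixture would stop ovs's vlog/poller thread to restore the pre-init state):

```python
import threading


class BackgroundThreadFixture(object):
    """Start a worker thread and guarantee it is stopped on cleanup."""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run)

    def _run(self):
        # Stand-in for the ovs logging/poller loop.
        while not self._stop.wait(0.01):
            pass

    def setUp(self):
        self._thread.start()

    def cleanUp(self):
        # Restore the pre-init state so later, non-ovs tests in the
        # same worker are not polluted by the thread's output.
        self._stop.set()
        self._thread.join()
```

A base test class would install such a fixture so the thread cannot outlive the test case that started it.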

** Affects: neutron
 Importance: Low
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: Confirmed


** Tags: functional-tests usability

** Changed in: ovsdbapp
   Status: New => Confirmed

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: functional-tests usability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687065

Title:
  functional tests are filled with POLLIN messages from ovs even when
  it's not using ovs itself

Status in neutron:
  Confirmed
Status in ovsdbapp:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687065/+subscriptions



[Yahoo-eng-team] [Bug 1687064] [NEW] ovs logs are trashed with healthcheck messages from ovslib

2017-04-28 Thread Ihar Hrachyshka
Public bug reported:

Those messages are all over the place:

2017-04-28 14:34:06.478 16259 DEBUG ovsdbapp.backend.ovs_idl.vlog [-]
[POLLIN] on fd 14 __log_wakeup /usr/local/lib/python2.7/dist-
packages/ovs/poller.py:246

We should probably suppress them; they don't seem to carry any value. If
there is value in knowing when something stopped working, maybe consider
erroring in this failure mode instead of logging in the happy path.
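One way to suppress these on the consumer side, without touching ovs itself, is a logging filter (a sketch; whether the logger name below is the one actually emitting in a given deployment is an assumption based on the log lines above):

```python
import logging


class DropPollinFilter(logging.Filter):
    """Drop the per-wakeup '[POLLIN] on fd ...' debug records."""

    def filter(self, record):
        return "[POLLIN]" not in record.getMessage()


log = logging.getLogger("ovsdbapp.backend.ovs_idl.vlog")
log.addFilter(DropPollinFilter())
log.setLevel(logging.DEBUG)

# Capture what survives the filter, for demonstration.
captured = []
handler = logging.Handler()
handler.emit = captured.append
log.addHandler(handler)

log.debug("[POLLIN] on fd 14 __log_wakeup")   # filtered out
log.debug("connection established")            # kept
```

The filter keeps real events (connection drops, errors) while discarding the per-wakeup noise.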

** Affects: neutron
 Importance: Low
 Status: Confirmed

** Affects: ovsdbapp
 Importance: Undecided
 Status: New


** Tags: ovs usability

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: ovs usability

** Also affects: ovsdbapp
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687064

Title:
  ovs logs are trashed with healthcheck messages from ovslib

Status in neutron:
  Confirmed
Status in ovsdbapp:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687064/+subscriptions



[Yahoo-eng-team] [Bug 1673637] Re: cloud-init - Hosts in softlayer receiving warning

2017-04-28 Thread Chad Smith
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1673637

Title:
  cloud-init - Hosts in softlayer receiving warning

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  New

Bug description:
  Newly provisioned Xenial hosts in softlayer bootstrapped via cloud-init are 
getting the following message.
  **
  # A new feature in cloud-init identified possible datasources for#
  # this system as:#
  #   ['Ec2', 'None']  #
  # However, the datasource used was: NoCloud  #
  ##
  # In the future, cloud-init will only attempt to use datasources that#
  # are identified or specifically configured. #
  # For more information see   #
  #   https://bugs.launchpad.net/bugs/1669675  #
  ##
  # If you are seeing this message, please file a bug against  #
  # cloud-init at  #
  #https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid  #
  # Make sure to include the cloud provider your instance is   #
  # running on.#
  ##
  # After you have filed a bug, you can disable this warning by launching  #
  # your instance with the cloud-config below, or putting that content #
  # into /etc/cloud/cloud.cfg.d/99-warnings.cfg#
  ##
  # #cloud-config  #
  # warnings:  #
  #   dsid_missing_source: off #
  **

  I'm running what I believe is the latest version of cloud-init:
  $ dpkg -s cloud-init
  Package: cloud-init
  Status: install ok installed
  Priority: extra
  Section: admin
  Installed-Size: 1417
  Maintainer: Scott Moser 
  Architecture: all
  Version: 0.7.9-48-g1c795b9-0ubuntu1~16.04.1

  I'm able to get rid of the message following the instructions
  provided, but posting the bug report as instructed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1673637/+subscriptions



[Yahoo-eng-team] [Bug 1636531] Re: unittests blkid command fails on slave s390x

2017-04-28 Thread Chad Smith
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1636531

Title:
  unittests blkid command fails on slave s390x

Status in cloud-init:
  Invalid
Status in cloud-init package in Ubuntu:
  New

Bug description:
  Running the unittests on our slave s390x system, the blkid command
  fails. Running it manually returns the following:

  jenkins@s1lp04:~$ blkid -tLABEL=CDROM -odevice
  jenkins@s1lp04:~$ echo $?
  2
  jenkins@s1lp04:~$ lsblk
  NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  dasda 94:00  20.6G  0 disk 
  |-dasda1  94:10  19.7G  0 part /
  `-dasda2  94:20 953.5M  0 part [SWAP]

  Full run output:
  https://jenkins.ubuntu.com/server/job/cloud-init-ci/nodes=s390x/53/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1636531/+subscriptions



[Yahoo-eng-team] [Bug 1644064] Re: sshd_config file permission changed to 644 if ssh_pwauth value is true or false

2017-04-28 Thread Chad Smith
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1644064

Title:
  sshd_config file permission changed to 644 if ssh_pwauth value is true
  or false

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  In my deployed image, the default permission of the sshd_config file is
  600. It is always changed to 644 after cloud-init runs. After debugging,
  it turned out to be caused by the cloud-config item:

  ssh_pwauth: true

  The related code, in cc_set_passwords.py, is:

  lines = [str(l) for l in new_lines]
  util.write_file(ssh_util.DEF_SSHD_CFG, "\n".join(lines))

  The write_file function uses the default mode 644 to write sshd_config,
  so the file's permissions are changed.

  It should be enhanced to read the old sshd_config permissions and write
  the new sshd_config with the same permissions, to avoid a security
  issue.
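
A minimal sketch of the suggested enhancement, using a hypothetical helper rather than cloud-init's actual util.write_file (which does accept a mode argument that could be fed the same way): stat the file first and re-use its permission bits when rewriting.

```python
import os
import stat

def write_preserving_mode(path, content, default_mode=0o600):
    """Rewrite a config file while keeping its existing permission bits.

    Falls back to a restrictive default mode when the file does not
    exist yet.  (Illustrative sketch only.)
    """
    try:
        mode = stat.S_IMODE(os.stat(path).st_mode)  # current permissions
    except FileNotFoundError:
        mode = default_mode
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, mode)
    try:
        os.write(fd, content.encode())
    finally:
        os.close(fd)
    os.chmod(path, mode)  # re-assert mode regardless of umask
```

With this, an sshd_config that starts out as 600 stays 600 after the rewrite.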

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1644064/+subscriptions



[Yahoo-eng-team] [Bug 1676908] Re: DigitalOcean network improvements

2017-04-28 Thread Chad Smith
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1676908

Title:
  DigitalOcean network improvements

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  This is a request to merge the improvements from the linked PR that
  improve the DigitalOcean datasource.

  The changes:
  - No longer bind the nameservers to a specific interface to bring it inline 
with the other DataSources like OpenStack and SmartOS.
  - Fix mis-binding the IPV4all address to a secondary interface by considering 
'eth0' or 'ens3' first
  - Consider all network definitions, not just 'public' or 'private'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1676908/+subscriptions



[Yahoo-eng-team] [Bug 1630664] Re: Intermittent failure in n-api connecting to neutron to list ports after TLS was enabled in CI

2017-04-28 Thread Ihar Hrachyshka
I am seeing that still happening with keystone token fetch. It just hit
this Ocata patch: https://review.openstack.org/#/c/460909/

In http://logs.openstack.org/09/460909/2/check/gate-tempest-dsvm-
neutron-linuxbridge-ubuntu-xenial/67904c9/logs/apache/tls-
proxy_error.txt.gz we see:

[Fri Apr 28 12:46:47.763965 2017] [proxy_http:error] [pid 30068:tid 
140271090042624] (20014)Internal error (specific information not available): 
[client 104.130.119.120:50002] [frontend 104.130.119.120:443] AH01102: error 
reading status line from remote server 104.130.119.120:80
[Fri Apr 28 12:46:47.764003 2017] [proxy:error] [pid 30068:tid 140271090042624] 
[client 104.130.119.120:50002] [frontend 104.130.119.120:443] AH00898: Error 
reading from remote server returned by /identity_admin/v3/auth/tokens

The request that triggered the failure doesn't seem to show up in
keystone log.

** Project changed: nova => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630664

Title:
  Intermittent failure in n-api connecting to neutron to list ports
  after TLS was enabled in CI

Status in devstack:
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/00/382000/2/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/07e5243/logs/screen-n-api.txt.gz?level=TRACE#_2016-10-05_14_35_04_333

  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack 
[req-c1bbc78f-89e4-4de2-956d-9b71f8ad1a87 
tempest-TestNetworkAdvancedServerOps-960076899 
tempest-TestNetworkAdvancedServerOps-960076899] Caught error: Unable to 
establish connection to 
https://127.0.0.1:9696/v2.0/ports.json?device_id=bf9a5908-ebdd-4f67-aae4-a0a3e0cf0d09
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack Traceback (most recent 
call last):
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 89, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
req.get_response(self.application)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 323, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack response = 
req.get_response(self._app)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1299, in send
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1263, in 
call_application
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack return 
resp(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 141, in 
__call__
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack response = 
self.app(environ, start_response)
  2016-10-05 14:35:04.333 18048 ERROR nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1531022] Re: libvirt driver doesn't cleanup the tap interface on vm re-schedule

2017-04-28 Thread Sridhar Venkat
** Also affects: nova-powervm
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531022

Title:
  libvirt driver doesn't cleanup the tap interface on vm re-schedule

Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  New

Bug description:
  When the libvirt driver is used with tap interfaces, it creates a tap
  interface on the host but does not clean it up on reschedule: the old
  interface is left intact and another interface with the same name is
  created on the new host.

  In _do_build_and_run_instance, when RescheduledException is raised, the
  manager checks whether the network ports need to be deallocated for a
  different host using deallocate_networks_on_reschedule(), which is
  hard-coded to return False. If this were changed to return True, or made
  configurable for specific Neutron mech drivers, it would not only clean
  up the tap interface properly but also let the Neutron mech drivers
  re-create new ports on the new host instead of moving and re-using the
  same ports, which fails.

  Tested on master and stable/liberty; it fails in both cases, so it may
  need backporting.
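
A rough sketch of the proposed change, with invented names (upstream has no such option today): expose the hard-coded False through a driver-level knob so deployments whose Neutron mech driver needs fresh ports on reschedule can opt in.

```python
# Illustrative only: `conf` stands in for an oslo.config-style object,
# and the option name below is hypothetical.
class LibvirtDriverSketch:
    def __init__(self, conf):
        self.conf = conf

    def deallocate_networks_on_reschedule(self, instance):
        # Upstream behaviour today: always False.
        # Proposed: honour a configurable knob, defaulting to the
        # current behaviour so existing deployments are unaffected.
        return getattr(self.conf, "deallocate_networks_on_reschedule",
                       False)
```

When the knob is enabled, _do_build_and_run_instance would deallocate the ports on RescheduledException, letting the driver tear down the tap interface and the mech driver create fresh ports on the new host.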

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531022/+subscriptions



[Yahoo-eng-team] [Bug 1686514] Re: Azure: cloud-init does not handle reformatting GPT partition ephemeral disks

2017-04-28 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686514

Title:
  Azure: cloud-init does not handle reformatting GPT partition ephemeral
  disks

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Some Azure instances such as L32 or G5 have very large ephemeral disks
  which are partitioned via GPT vs. smaller ephemeral disks that have
  dos disklabels.

  At first boot of an instance the ephemeral disk is prepared and
  formatted properly. But if the instance is deallocated and then
  reallocated (thus receiving a new ephemeral disk) then cloud-init does
  not handle reformatting GPT partition ephemeral disks properly.
  Therefore /mnt is never mounted again.

  Test cases:
   1. Deploy an L32(s) VM on Azure
   2. Log in and ensure that the ephemeral disk is formatted and mounted to /mnt
   3. Via the portal you can "Redeploy" the VM to a new Azure Host (or 
alternatively stop and deallocate the VM for some time, and then 
restart/reallocate the VM).

  Expected Results:
   - After reallocation we expect the ephemeral disk to be formatted and 
mounted to /mnt.

  Actual Results:
   - After reallocation /mnt is not mounted and there are errors in the 
cloud-init log.

  *This was tested on Ubuntu 16.04 - but may affect other releases.

  Note: This bug a regression from previous cloud-init releases. GPT
  support for Azure ephemeral disk handling was added to cloud-init via
  this bug: https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1422919.
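
A hedged sketch (names invented, not cloud-init's actual code) of the decision this bug asks for: a reprovisioned Azure ephemeral disk should be recognised as "fresh" whether its disklabel is dos or GPT.

```python
def ephemeral_needs_reformat(partition_table, fs_type):
    """Decide whether the ephemeral disk looks freshly reprovisioned.

    partition_table: disklabel type, e.g. "dos" or "gpt"
    fs_type: filesystem found on the ephemeral partition (Azure hands
             the disk back formatted as NTFS)
    """
    if partition_table not in ("dos", "gpt"):
        return False              # unknown layout: leave it alone
    return fs_type == "ntfs"      # stock Azure ephemeral filesystem
```

The regression described above amounts to the GPT branch of such a check being missing, so the large instance types never get /mnt re-formatted and re-mounted.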

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1686514/+subscriptions



[Yahoo-eng-team] [Bug 1687012] [NEW] flavor-delete notification should not try to lazy-load projects

2017-04-28 Thread Matt Riedemann
Public bug reported:

When we destroy a flavor from the api database we send a notification:

https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L608

However if flavor.projects isn't loaded we try to lazy-load it:

https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L617

Which is going to result in a FlavorNotFound error because we just
deleted the flavor from the API database:

https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L65

This doesn't blow everything up because we fall back to the main cell
database to get the flavor projects:

https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/db/sqlalchemy/api.py#L5194

Which just returns an empty list.

I noticed this when removing the main db fallback paths in this change
and had to workaround it:

https://review.openstack.org/#/c/460377/

But it's really a separate bug fix.
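
A minimal sketch of the fix implied here (class and method names only loosely mirror nova's objects): force the lazy load while the API-database row still exists, so the notification never has to load projects afterwards.

```python
class FlavorSketch:
    """Toy model of an object with a lazy-loaded `projects` field."""

    def __init__(self, db):
        self._db = db
        self._projects = None

    @property
    def projects(self):
        if self._projects is None:        # lazy load on first access
            self._projects = self._db.get_projects(self)
        return self._projects

    def destroy(self):
        self.projects                     # populate BEFORE the delete
        self._db.delete(self)
        return self._notify()             # safe: projects already cached

    def _notify(self):
        # stands in for the versioned flavor-delete notification payload
        return {"projects": self.projects}
```

Without the explicit load in destroy(), _notify() would hit the database after the row is gone, which is exactly the FlavorNotFound / main-cell fallback path described above.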

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: notifications

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Tags added: notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687012

Title:
  flavor-delete notification should not try to lazy-load projects

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  When we destroy a flavor from the api database we send a notification:

  
https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L608

  However if flavor.projects isn't loaded we try to lazy-load it:

  
https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L617

  Which is going to result in a FlavorNotFound error because we just
  deleted the flavor from the API database:

  
https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/objects/flavor.py#L65

  This doesn't blow everything up because we fall back to the main cell
  database to get the flavor projects:

  
https://github.com/openstack/nova/blob/5a363a0d72e7dd8d79d7e950effc1d8a5fdc801b/nova/db/sqlalchemy/api.py#L5194

  Which just returns an empty list.

  I noticed this when removing the main db fallback paths in this change
  and had to workaround it:

  https://review.openstack.org/#/c/460377/

  But it's really a separate bug fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687012/+subscriptions



[Yahoo-eng-team] [Bug 1687002] [NEW] Nova API failure ("could not find resource cirros" error)

2017-04-28 Thread DmitryKhoruzhenko
Public bug reported:

When trying to build an OpenStack environment
according to the guide
https://docs.openstack.org/mitaka/install-guide-rdo/launch-instance-provider.html
I got a "could not find resource cirros" message after running
"openstack server create --flavor m1.tiny --image cirros --nic
net-id=33989c28-af9d-46dc-a009-f5a0294c785b --security-group default
--key-name mykey provider-instance"
Additional info:
 openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 33989c28-af9d-46dc-a009-f5a0294c785b | provider | 16f53142-4244-41f0-831a-5e1fbe7cc400 |
+--------------------------------------+----------+--------------------------------------+
 openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| bd0ca95e-8b32-437b-8b91-99f7d4e671e2 | cirros | active |
+--------------------------------------+--------+--------+

nova-api.log:
 INFO nova.api.openstack.wsgi [req-e9347579-4377-4f45-b3f7-7434d7010914 
f0b83e16f08b4b789c83650d8ef7b19d 7fb5eb98d35b482a9ef993a1e670ccd2 - - -] HTTP 
exception thrown: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

 Exact version of OpenStack
 nova-manage --version
13.1.2

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api.log"
   
https://bugs.launchpad.net/bugs/1687002/+attachment/4869146/+files/nova-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1687002

Title:
  Nova API failure ("could not find resource cirros" error)

Status in OpenStack Compute (nova):
  New

Bug description:
  When trying to build an OpenStack environment
  according to the guide
  https://docs.openstack.org/mitaka/install-guide-rdo/launch-instance-provider.html
  I got a "could not find resource cirros" message after running
  "openstack server create --flavor m1.tiny --image cirros --nic
  net-id=33989c28-af9d-46dc-a009-f5a0294c785b --security-group default
  --key-name mykey provider-instance"
  Additional info:
   openstack network list
  +--------------------------------------+----------+--------------------------------------+
  | ID                                   | Name     | Subnets                              |
  +--------------------------------------+----------+--------------------------------------+
  | 33989c28-af9d-46dc-a009-f5a0294c785b | provider | 16f53142-4244-41f0-831a-5e1fbe7cc400 |
  +--------------------------------------+----------+--------------------------------------+
   openstack image list
  +--------------------------------------+--------+--------+
  | ID                                   | Name   | Status |
  +--------------------------------------+--------+--------+
  | bd0ca95e-8b32-437b-8b91-99f7d4e671e2 | cirros | active |
  +--------------------------------------+--------+--------+

  nova-api.log:
   INFO nova.api.openstack.wsgi [req-e9347579-4377-4f45-b3f7-7434d7010914 
f0b83e16f08b4b789c83650d8ef7b19d 7fb5eb98d35b482a9ef993a1e670ccd2 - - -] HTTP 
exception thrown: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

   Exact version of OpenStack
   nova-manage --version
  13.1.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1687002/+subscriptions



[Yahoo-eng-team] [Bug 1686999] [NEW] Instance with two attached volumes fails to start with error: Duplicate ID 'drive-ide0-0-0' for drive

2017-04-28 Thread Saverio Proto
Public bug reported:

  
I imported a Linux CentOS machine into OpenStack. The instance does not have
VirtIO support, so I had to import the boot disk as hda. Now I have this
instance with two volumes attached, but when I try to boot it, the following
XML is generated.


[libvirt disk XML stripped by the list archive; two <disk> elements were
shown, each partly "[..CUT...]", with serials
c3841ee3-3f9a-457e-b504-d35e367a1193 and
63e05c59-8de1-4908-a3dd-3f2261c82ea9]


The machine does not boot, and in the nova-compute.log I find a
stacktrace.

2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, 
in _dispatch
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 110, in wrapped
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher payload)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 89, in wrapped
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 359, in 
decorated_function
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance=instance)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 328, in 
decorated_function
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 409, in 
decorated_function
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 387, in 
decorated_function
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 375, in 
decorated_function
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2017-04-28 12:56:35.356 43378 ERROR oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1674374] Re: remove i18n methods because logs are not being translated

2017-04-28 Thread Ngo Quoc Cuong
** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1674374

Title:
  remove i18n methods because logs are not being translated

Status in Ironic:
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in ironic-lib:
  Fix Released
Status in Python client for Ironic Inspector:
  Fix Released
Status in python-ironicclient:
  Fix Released

Bug description:
  The i18n team has decided not to translate the logs because it seems
  like it not very useful; operators prefer to have them in English so
  that they can search for those strings on the internet.

  See http://lists.openstack.org/pipermail/openstack-
  dev/2017-March/thread.html#113365.

  Translation cleanup is being organized here.
  https://etherpad.openstack.org/p/ironic-translation-cleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1674374/+subscriptions



[Yahoo-eng-team] [Bug 1686921] [NEW] There are wrong unit tests about config option usage

2017-04-28 Thread ChangBo Guo(gcb)
Public bug reported:

We enforce config option type checks in oslo.config [1][2]. This causes some
Keystone unit tests to fail [3]. There are several types of failures, so this
bug is used to record and fix each type.
The failure details are listed below in order.

[1] https://review.openstack.org/328692
[2] https://review.openstack.org/#/c/455522/
[3] 
http://logs.openstack.org/periodic/periodic-keystone-py27-with-oslo-master/0868d74/testr_results.html.gz


1. keystone.tests.unit.test_v3_oauth1.MaliciousOAuth1Tests
Traceback (most recent call last):
  File "keystone/tests/unit/test_v3_oauth1.py", line 866, in 
test_expired_creating_keystone_token
self.config_fixture.config(group='oauth1', access_token_duration=-1)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/fixture.py",
 line 73, in config
self.conf.set_override(k, v, group, enforce_type=enforce_type)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/debtcollector/debtcollector/removals.py",
 line 261, in wrapper
return f(*args, **kwargs)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2314, in __inner
result = f(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2638, in set_override
opt_info['opt'], override, enforce_type)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2667, in _get_enforced_type_value
converted = self._convert_value(value, opt)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2945, in _convert_value
return opt.type(value)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/types.py",
 line 287, in __call__
self.min)
ValueError: Should be greater than or equal to 0

Traceback (most recent call last):
  File "keystone/tests/unit/test_v3_oauth1.py", line 842, in 
test_expired_authorizing_request_token
self.config_fixture.config(group='oauth1', request_token_duration=-1)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/fixture.py",
 line 73, in config
self.conf.set_override(k, v, group, enforce_type=enforce_type)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/debtcollector/debtcollector/removals.py",
 line 261, in wrapper
return f(*args, **kwargs)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2314, in __inner
result = f(self, *args, **kwargs)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2638, in set_override
opt_info['opt'], override, enforce_type)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2667, in _get_enforced_type_value
converted = self._convert_value(value, opt)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/cfg.py",
 line 2945, in _convert_value
return opt.type(value)
  File 
"/home/jenkins/workspace/periodic-keystone-py27-with-oslo-master/.tox/py27-oslo-master/src/oslo.config/oslo_config/types.py",
 line 287, in __call__
self.min)
ValueError: Should be greater than or equal to 0
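
A pure-Python sketch of the enforcement that now rejects these overrides (oslo.config's IntOpt with min=0 behaves analogously; the class below only illustrates the check). The usual test fix is either to use an in-range value or to assert that ValueError is raised.

```python
class BoundedIntOptSketch:
    """Toy model of an integer option with a lower bound."""

    def __init__(self, name, min=None, default=None):
        self.name, self.min, self.value = name, min, default

    def set_override(self, value):
        value = int(value)                # enforce the option type
        if self.min is not None and value < self.min:
            raise ValueError(
                "Should be greater than or equal to %d" % self.min)
        self.value = value
```

Under such enforcement, set_override(-1) on a min=0 duration option raises ValueError, matching the tracebacks above.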

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1686921

Title:
  There are wrong unit tests about config option usage

Status in OpenStack Identity (keystone):
  New

Bug description:
  We enforce config option type checks in oslo.config [1][2]. This causes
some Keystone unit tests to fail [3]. There are several types of failures, so
this bug is used to record and fix each type.
  The failure details are listed below in order.

  [1] https://review.openstack.org/328692
  [2] https://review.openstack.org/#/c/455522/
  [3] 
http://logs.openstack.org/periodic/periodic-keystone-py27-with-oslo-master/0868d74/testr_results.html.gz

  
  1. keystone.tests.unit.test_v3_oauth1.MaliciousOAuth1Tests
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_oauth1.py", line 866, in 
test_expired_creating_keystone_token
  self.config_fixture.config(group='oauth1', access_token_duration=-1)
File 

[Yahoo-eng-team] [Bug 1686917] [NEW] api-ref: unnecessary description in servers-admin-action.inc

2017-04-28 Thread Takashi NATSUME
Public bug reported:

https://developer.openstack.org/api-ref/compute/#servers-run-an-
administrative-action-servers-action

There is the following description, but the action to change the
administrative password for a server is not included in
servers-admin-action.inc, so it should be removed:

> You can change the administrative password for a server and inject
network information into a server.

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686917

Title:
  api-ref: unnecessary description in servers-admin-action.inc

Status in OpenStack Compute (nova):
  New

Bug description:
  https://developer.openstack.org/api-ref/compute/#servers-run-an-
  administrative-action-servers-action

  There is the following description, but the action to change the
administrative password for a server is not included in
servers-admin-action.inc, so it should be removed:

  > You can change the administrative password for a server and inject
  network information into a server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1686917/+subscriptions



[Yahoo-eng-team] [Bug 1686898] [NEW] List of available QoS rules should be changed

2017-04-28 Thread Slawek Kaplonski
Public bug reported:

Currently Neutron has an API call, "qos-available-rule-types", which returns
the subset of QoS rules supported by all loaded drivers (openvswitch,
linuxbridge, etc.).
After https://bugs.launchpad.net/neutron/+bug/1586056 was closed, this should
be done differently: in response to qos-available-rule-types, the Neutron API
should return the set of rules supported by ANY of the loaded drivers, not by
all of them.
This should be changed because a rule supported by at least one of the
drivers can now be used and applied to ports bound with that driver; Neutron
will not allow applying such a rule to other ports.
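
The proposed behaviour can be sketched as switching from a set intersection to a set union over the per-driver rule sets (driver objects below are stand-ins with a supported_rules attribute):

```python
def available_rule_types(drivers, require_all=False):
    """Rule types reported by qos-available-rule-types (sketch).

    require_all=True models the current behaviour (supported by ALL
    loaded drivers); require_all=False models the proposal (supported
    by ANY loaded driver).
    """
    rule_sets = [set(d.supported_rules) for d in drivers]
    if not rule_sets:
        return set()
    if require_all:
        return set.intersection(*rule_sets)
    return set.union(*rule_sets)
```

With openvswitch supporting {bandwidth_limit, dscp_marking} and linuxbridge only {bandwidth_limit}, the intersection hides dscp_marking, while the union advertises it for ports bound to openvswitch.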

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686898

Title:
  List of available QoS rules should be changed

Status in neutron:
  New

Bug description:
  Currently Neutron has an API call, "qos-available-rule-types", which returns
  the subset of QoS rules supported by all loaded drivers (openvswitch,
  linuxbridge, etc.).
  After https://bugs.launchpad.net/neutron/+bug/1586056 was closed, this
  should be done differently: in response to qos-available-rule-types, the
  Neutron API should return the set of rules supported by ANY of the loaded
  drivers, not by all of them.
  This should be changed because a rule supported by at least one of the
  drivers can now be used and applied to ports bound with that driver; Neutron
  will not allow such a rule to be applied to other ports.
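  The intersection-vs-union distinction the report describes can be sketched
  as follows. This is a minimal illustration only: the driver names and the
  rule sets in DRIVER_SUPPORTED_RULES are hypothetical stand-ins, not
  Neutron's actual driver registry or its real supported-rule lists.

```python
# Illustrative map of QoS rule types each loaded driver supports.
# (Hypothetical data; Neutron's real drivers advertise their own lists.)
DRIVER_SUPPORTED_RULES = {
    "openvswitch": {"bandwidth_limit", "dscp_marking", "minimum_bandwidth"},
    "linuxbridge": {"bandwidth_limit"},
}


def available_rule_types_old(drivers):
    """Old behaviour: only rule types that EVERY driver supports."""
    return set.intersection(*drivers.values())


def available_rule_types_new(drivers):
    """Proposed behaviour: rule types supported by ANY driver."""
    return set.union(*drivers.values())


print(sorted(available_rule_types_old(DRIVER_SUPPORTED_RULES)))
# -> ['bandwidth_limit']
print(sorted(available_rule_types_new(DRIVER_SUPPORTED_RULES)))
# -> ['bandwidth_limit', 'dscp_marking', 'minimum_bandwidth']
```

  Under the proposed behaviour a rule such as dscp_marking would be listed as
  available even though only one driver supports it; applying it to a port
  bound to a driver that lacks support would still be rejected.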

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674374] Re: remove i18n methods because logs are not being translated

2017-04-28 Thread Ngo Quoc Cuong
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Ngo Quoc Cuong (cuongnq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1674374

Title:
  remove i18n methods because logs are not being translated

Status in Glance:
  New
Status in Ironic:
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in ironic-lib:
  Fix Released
Status in Python client for Ironic Inspector:
  Fix Released
Status in python-ironicclient:
  Fix Released

Bug description:
  The i18n team has decided not to translate the logs because it does not
  seem very useful; operators prefer to have them in English so that they
  can search for those strings on the internet.

  See http://lists.openstack.org/pipermail/openstack-
  dev/2017-March/thread.html#113365.

  Translation cleanup is being organized here.
  https://etherpad.openstack.org/p/ironic-translation-cleanup
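  The cleanup this bug tracks amounts to dropping the oslo.i18n markers from
  LOG calls while leaving user-facing messages translatable. A minimal
  before/after sketch, with a hypothetical function and message (not taken
  from any real Glance or Ironic module):

```python
import logging

LOG = logging.getLogger(__name__)

# Before the cleanup, log calls wrapped their messages in oslo.i18n
# markers, e.g.:
#     LOG.error(_LE("Failed to inspect node %s"), node_id)
#
# After the cleanup, the marker is removed and the plain English string
# is logged directly:


def report_failure(node_id):
    message = "Failed to inspect node %s"
    LOG.error(message, node_id)          # lazy % formatting, no _LE()
    return message % node_id             # returned only to make this testable


# Note: only LOG.debug/info/warning/error/exception calls lose their
# markers; exception messages shown to API users keep their _() wrapper
# because those are still translated.
```

  The same pattern applies across all the projects listed in the bug status
  above; the etherpad linked in the description coordinates which modules
  have been converted.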

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1674374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp