[Yahoo-eng-team] [Bug 1316686] [NEW] ldap users unable to authenticate

2014-05-06 Thread Sarath Menon
Public bug reported:

We are running the latest keystone and see this in our logs for all ldap
users. Local authentication works; the problem appeared within the last
couple of weeks. Rolling back to a keystone build from April 22nd works
for us. There have been a few ldap-related commits since then; we are
trying to isolate the exact change that may be causing this.


2014-05-06 16:24:49,679 (keystone.common.ldap.core): DEBUG core unbind_s LDAP 
unbind
2014-05-06 16:24:49,679 (keystone.common.ldap.core): DEBUG core connect LDAP 
init: url=ldap://ldap.internal
2014-05-06 16:24:49,680 (keystone.common.ldap.core): DEBUG core connect LDAP 
init: use_tls=False tls_cacertfile=None tls_cacertdir=None tls_req_cert=2 
tls_avail=1
2014-05-06 16:24:49,680 (keystone.common.wsgi): ERROR wsgi __call__ 'str' 
object has no attribute 'iteritems'
Traceback (most recent call last):
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in __call__
    result = method(context, **params)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/token/controllers.py", line 98, in authenticate
    context, auth)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/token/controllers.py", line 279, in _authenticate_local
    user_id, tenant_id)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/token/controllers.py", line 358, in _get_project_roles_and_ref
    user_id, tenant_id)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/assignment/core.py", line 180, in get_roles_for_user_and_project
    user_role_list = _get_user_project_roles(user_id, project_ref)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/assignment/core.py", line 161, in _get_user_project_roles
    tenant_id=project_ref['id'])
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/identity/backends/isg_ldap_svcuser.py", line 164, in _get_metadata
    group_id=group_id)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/assignment/backends/ldap.py", line 123, in _get_metadata
    tenant_id)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/assignment/backends/ldap.py", line 93, in _get_roles_for_just_user_and_project
    (self.project._id_to_dn(tenant_id))
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/assignment/backends/ldap.py", line 550, in get_role_assignments
    self.ldap_filter)
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/keystone/common/ldap/core.py", line 946, in _ldap_get_list
    six.iteritems(query_params)])))
  File "/usr/local/share-keystone.venv/lib/python2.6/site-packages/six.py", line 498, in iteritems
    return iter(getattr(d, _iteritems)(**kw))
AttributeError: 'str' object has no attribute 'iteritems'
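
The AttributeError at the bottom means `_ldap_get_list` received its query
parameters as a plain string instead of a mapping, so `six.iteritems` tried to
iterate the items of a `str` and failed. A minimal sketch of the failure mode
(the helper name and filter values below are hypothetical, not Keystone's
actual code):

```python
# Hypothetical helper mirroring the pattern in keystone's _ldap_get_list:
# join attribute/value pairs from a mapping into an LDAP filter string.
# (On Python 2, six.iteritems(d) calls d.iteritems(), which is where the
# "'str' object has no attribute 'iteritems'" message comes from; on
# Python 3 the equivalent failure mentions 'items'.)
def build_ldap_filter(base_filter, query_params):
    clauses = ["(%s=%s)" % (k, v) for k, v in query_params.items()]
    return "(&%s%s)" % (base_filter, "".join(clauses))

# Correct call: query_params is a mapping of attribute -> value.
print(build_ldap_filter("(objectClass=groupOfNames)", {"member": "cn=alice"}))
# (&(objectClass=groupOfNames)(member=cn=alice))

# Buggy call: a caller hands over an already-formatted filter string, so
# asking for its items raises the same kind of AttributeError as above.
try:
    build_ldap_filter("(objectClass=groupOfNames)", "(member=cn=alice)")
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'items'
```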

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1316686

Title:
  ldap users unable to authenticate

Status in OpenStack Identity (Keystone):
  New

[Yahoo-eng-team] [Bug 1288926] [NEW] incorrect error code when rebooting a rebooting_hard guest

2014-03-06 Thread Sarath Menon
Public bug reported:

This is using the latest nova from trunk. In our deployment, a
hypervisor went down after the tenant had issued a hard reboot. When a
reboot is then attempted on a guest stuck in the HARD_REBOOT state, the
nova controller throws the error below in its logs and returns 'ERROR:
The server has either erred or is incapable of performing the requested
operation. (HTTP 500)' to the user:


2014-03-06 18:21:00,535 (routes.middleware): DEBUG middleware __call__ Matched 
POST /tenant1/servers/778032b2-469d-445e-abde-7b9b0b673324/action
2014-03-06 18:21:00,536 (routes.middleware): DEBUG middleware __call__ Route 
path: '/{project_id}/servers/:(id)/action', defaults: {'action': u'action', 
'controller': <nova.api.openstack.wsgi.Resource object at 0x5242c90>}
2014-03-06 18:21:00,536 (routes.middleware): DEBUG middleware __call__ Match 
dict: {'action': u'action', 'controller': <nova.api.openstack.wsgi.Resource 
object at 0x5242c90>, 'project_id': u'tenant1', 'id': 
u'778032b2-469d-445e-abde-7b9b0b673324'}
2014-03-06 18:21:00,537 (nova.api.openstack.wsgi): DEBUG wsgi _process_stack 
Action: 'action', body: {"reboot": {"type": "SOFT"}}
2014-03-06 18:21:00,538 (nova.api.openstack.wsgi): DEBUG wsgi _process_stack 
Calling method <bound method Controller._action_reboot of 
<nova.api.openstack.compute.contrib.keypairs.Controller object at 0x4c35a50>>
2014-03-06 18:21:00,747 (nova.api.openstack): ERROR __init__ _error Caught 
error: Unexpected task state: expecting [None, 'rebooting'] but the actual 
state is rebooting_hard
Traceback (most recent call last):
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/api/openstack/__init__.py", line 125, in __call__
    return req.get_response(self.application)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/request.py", line 1320, in send
    application, catch_exc_info=False)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/request.py", line 1284, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 598, in __call__
    return self.app(env, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/routes/middleware.py", line 131, in __call__
    response = self.app(environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
    return resp(environ, start_response)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 925, in __call__
    content_type, body, accept)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 987, in _process_stack
    action_result = self.dispatch(meth, request, action_args)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1074, in dispatch
    return method(req=request, **action_args)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/api/openstack/compute/servers.py", line 1145, in _action_reboot
    self.compute_api.reboot(context, instance, reboot_type)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/compute/api.py", line 199, in wrapped
    return func(self, context, target, *args, **kwargs)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/compute/api.py", line 189, in inner
    return function(self, context, instance, *args, **kwargs)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/compute/api.py", line 170, in inner
    return f(self, context, instance, *args, **kw)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/compute/api.py", line 2073, in reboot
    instance.save(expected_task_state=[None, task_states.REBOOTING])
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/objects/base.py", line 151, in wrapper
    return fn(self, ctxt, *args, **kwargs)
  File "/usr/local/gshare/csi-nova.venv/lib/python2.6/site-packages/nova/objects/instance.py", line 472, in save
    columns_to_join=_expected_cols(expected_attrs))
  File 

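The root of the 500 is the guard visible in the last complete frame:
`instance.save(expected_task_state=[None, task_states.REBOOTING])` rejects a
guest left in `rebooting_hard`, and the resulting exception is not translated
into a client-facing error code. A simplified sketch of that guard
(hypothetical names and a dict standing in for the instance object, not Nova's
actual code):

```python
# Simplified sketch of an expected-task-state check: save() only
# proceeds if the instance's current task state is one of the states the
# caller expects; otherwise it raises.
class UnexpectedTaskState(Exception):
    pass

def save(instance, expected_task_state=None):
    if expected_task_state is not None:
        if instance["task_state"] not in expected_task_state:
            raise UnexpectedTaskState(
                "Unexpected task state: expecting %r but the actual "
                "state is %s" % (expected_task_state,
                                 instance["task_state"]))
    instance["saved"] = True

# A soft reboot expects an idle or already-soft-rebooting guest, so a
# guest stuck in 'rebooting_hard' trips the guard.
instance = {"task_state": "rebooting_hard", "saved": False}
try:
    save(instance, expected_task_state=[None, "rebooting"])
except UnexpectedTaskState as exc:
    # The bug report's point: this surfaces as HTTP 500, whereas an
    # invalid state transition would more appropriately map to a client
    # error such as 409 Conflict.
    print(exc)
```
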
[Yahoo-eng-team] [Bug 1285999] [NEW] neutron's extension loader behaviour is not consistent

2014-02-27 Thread Sarath Menon
Public bug reported:

We saw this in our startup logs:

2014-02-27 16:37:07,702 (neutron.api.extensions): INFO extensions add_extension 
Loaded extension: nvp-qos
2014-02-27 16:37:07,702 (neutron.api.extensions): INFO extensions 
_load_all_extensions_from_path Loading extension file: nvp_networkgw.py
2014-02-27 16:37:07,703 (neutron.api.extensions): DEBUG extensions 
_check_extension Ext name: Neutron-NVP Network Gateway
2014-02-27 16:37:07,703 (neutron.api.extensions): DEBUG extensions 
_check_extension Ext alias: network-gateway
2014-02-27 16:37:07,704 (neutron.api.extensions): DEBUG extensions 
_check_extension Ext description: Connects Neutron networks with external 
networks at layer 2 (deprecated).
2014-02-27 16:37:07,704 (neutron.api.extensions): DEBUG extensions 
_check_extension Ext namespace: 
http://docs.openstack.org/ext/neutron/network-gateway/api/v1.0
2014-02-27 16:37:07,705 (neutron.api.extensions): DEBUG extensions 
_check_extension Ext updated: 2014-01-01T00:00:00-00:00
2014-02-27 16:37:07,705 (neutron.api.extensions): INFO extensions add_extension 
Loaded extension: network-gateway
2014-02-27 16:37:07,705 (neutron.api.extensions): WARNING extensions 
_load_all_extensions_from_path Extension file nvp_networkgw.py wasn't loaded 
due to Found duplicate extension: network-gateway

There are two load-balanced neutron servers; the other server did not
show this. While running tempest against them, tests failed because the
two servers advertised different names for their network-gateway
extension. After restarting neutron on the bad server, the behavior no
longer appeared.
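
The warning in the log comes from the loader keying extensions by alias:
within one process, whichever extension file the directory scan yields first
claims the alias, and later files with the same alias are skipped. Since
directory scan order is not guaranteed, two identically configured servers can
end up exposing different extensions. A simplified sketch of that pattern
(hypothetical code, not Neutron's actual loader):

```python
import warnings

class ExtensionManager:
    """Registers extensions keyed by alias; duplicates are skipped."""

    def __init__(self):
        self.extensions = {}  # alias -> extension descriptor

    def add_extension(self, ext):
        alias = ext["alias"]
        if alias in self.extensions:
            # Corresponds to the "wasn't loaded due to Found duplicate
            # extension" warning in the log above.
            warnings.warn("Found duplicate extension: %s" % alias)
            return False
        self.extensions[alias] = ext
        return True

mgr = ExtensionManager()
# First file scanned wins the alias...
mgr.add_extension({"alias": "network-gateway",
                   "name": "Neutron-NVP Network Gateway"})
# ...so a second file advertising the same alias is rejected, and load
# order alone decides which name each server ends up advertising.
mgr.add_extension({"alias": "network-gateway", "name": "Other Gateway"})
print(mgr.extensions["network-gateway"]["name"])
# Neutron-NVP Network Gateway
```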

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285999

Title:
  neutron's extension loader behaviour is not consistent

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1285999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp