[Yahoo-eng-team] [Bug 1394083] Re: ldap user_filter is not honored while authenticating

2017-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1394083

Title:
  ldap user_filter is not honored while authenticating

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  When full LDAP logging is enabled, we can see that the initial LDAP
  search query does not apply the user_filter while looking up the
  user's DN in LDAP.

  This causes authentication to fail if we have two users with the same
  name in the same LDAP tree but with different ids. We use a memberOf
  filter to limit which users are visible to Keystone.

  I traced the issue to the get_by_name method in
  keystone/common/ldap/core.py, which appears to filter only by user
  name, ignoring the filter set in the configuration.
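
  For illustration, a minimal sketch (hypothetical names, not the
  actual keystone code) of the behavior we would expect: the configured
  user_filter should be ANDed into the name lookup so both constraints
  apply.

    import ldap.filter  # python-ldap

    def build_name_query(name_attr, name, user_filter=None):
        # Escape the user-supplied value to avoid LDAP filter injection.
        name_match = '(%s=%s)' % (name_attr,
                                  ldap.filter.escape_filter_chars(name))
        # Expected behavior: combine the configured filter with the
        # name match instead of searching by name alone.
        if user_filter:
            return '(&%s%s)' % (user_filter, name_match)
        return name_match

    # build_name_query('uid', 'alice', '(memberOf=cn=cloud,dc=example,dc=com)')
    # -> '(&(memberOf=cn=cloud,dc=example,dc=com)(uid=alice))'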

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1394083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600393] Re: v2.0 catalog seen in v3 token

2017-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1600393

Title:
  v2.0 catalog seen in v3 token

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  During a Rally test of our deployment using Mitaka keystone we
  observed the following traceback in the logs. It seems that the v3
  catalog is returned as a list whereas the v2.0 catalog is a dict. But
  the format_catalog() function always expects a dict.
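
  For illustration only (simplified, hypothetical catalog shapes, not
  the real keystone structures), this is the failure mode: iterating a
  v3-style list with .items() as if it were a v2.0-style dict.

    # v2.0 catalogs are dicts keyed by region; v3 catalogs are lists
    # of service entries.
    v2_catalog = {'RegionOne': {'identity': {'publicURL': 'http://127.0.0.1:5000/v2.0'}}}
    v3_catalog = [{'type': 'identity', 'endpoints': [{'region': 'RegionOne'}]}]

    def format_catalog(catalog_ref):
        # Mirrors the failing loop, which assumes a dict keyed by region.
        for region, region_ref in catalog_ref.items():
            print(region, sorted(region_ref))

    format_catalog(v2_catalog)  # works
    try:
        format_catalog(v3_catalog)
    except AttributeError as exc:
        print(exc)  # 'list' object has no attribute 'items'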

  
  2016-07-06 03:00:55.171 18314 INFO eventlet.wsgi.server [req-5ebbe11b-5efb-4606-a46c-58f100a8a550 5716d29278b8438a95f718ea926e4e7a 954d6157b061441197b228ac7b4dd6ee - default default] 10.111.109.191,10.111.109.89 - - [06/Jul/2016 03:00:55] "DELETE /v2.0/tenants/37b1a3bad0e54dc2a9824ac51ba02a9f HTTP/1.1" 204 212 0.070017
  2016-07-06 03:00:55.779 18323 DEBUG keystone.middleware.auth [req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] There is either no auth token in the request or the certificate issuer is not trusted. No auth context will be set. _build_auth_context /usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
  2016-07-06 03:00:55.781 18323 INFO keystone.common.wsgi [req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] POST http://10.111.109.81:5000/v2.0/tokens
  2016-07-06 03:00:55.879 18323 INFO keystone.token.providers.fernet.utils [req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] Loaded 2 encryption keys (max_active_keys=3) from: /etc/keystone/fernet-keys/
  2016-07-06 03:00:55.882 18323 INFO eventlet.wsgi.server [req-9843ab92-0f8f-42b9-8b56-a5fc6d011569 - - - - -] 10.111.109.191,10.111.109.89 - - [06/Jul/2016 03:00:55] "POST /v2.0/tokens HTTP/1.1" 200 3585 0.102872
  2016-07-06 03:00:57.450 18323 DEBUG keystone.middleware.auth [req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] There is either no auth token in the request or the certificate issuer is not trusted. No auth context will be set. _build_auth_context /usr/lib/python2.7/dist-packages/keystone/middleware/auth.py:71
  2016-07-06 03:00:57.452 18323 INFO keystone.common.wsgi [req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] POST http://10.111.109.81:5000/v2.0/tokens
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi [req-57632939-e139-4dc7-a1f4-833ce4e84665 - - - - -] 'list' object has no attribute 'items'
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi Traceback (most recent call last):
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 249, in __call__
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     result = method(context, **params)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/oslo_log/versionutils.py", line 165, in wrapped
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     return func_or_cls(*args, **kwargs)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/controllers.py", line 144, in authenticate
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     auth_token_data, roles_ref=roles_ref, catalog_ref=catalog_ref)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in wrapped
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     __ret_val = __f(*args, **kwargs)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 360, in issue_v2_token
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     token_ref, roles_ref, catalog_ref)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/core.py", line 38, in issue_v2_token
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     *args, **kwargs)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 570, in issue_v2_token
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     token_ref, roles_ref, catalog_ref, trust_ref)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 163, in format_token
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     catalog_ref)
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", line 214, in format_catalog
  2016-07-06 03:00:57.565 18323 ERROR keystone.common.wsgi     for region, region_ref in

[Yahoo-eng-team] [Bug 1632924] Re: Lingering sql backend role assignments after deletion of ldap user.

2017-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1632924

Title:
  Lingering sql backend role assignments after deletion of ldap user.

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  Greetings all,

  
  There is currently an issue in an OpenStack Liberty environment where
  keystone is configured with the ldap driver for identity and the sql
  driver for role assignments. When an LDAP user is removed, the id for
  that user (actor_id) remains in the keystone.assignment table. We
  discovered this when attempting to list the users on a project where
  a former LDAP user had existed: the openstack client abruptly exits
  with an exception [1], because the user id can no longer be found (it
  was deleted from LDAP) while its role assignment remains in the
  keystone.assignment table. A similar bug was found [2], however that
  one deals with both the identity and assignment drivers using ldap,
  whereas in this case identity is ldap and assignment is sql.

  
  Environment details:
  OpenStack Version: 12.2.0 (Liberty)
  Keystone Version: 8.1.2
  identity driver: ldap
  assignment driver: sql
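
  For illustration, a hedged sketch (hypothetical helper, not part of
  keystone) of what "lingering" means here: any actor_id present in the
  assignment table but absent from the identity backend is an orphaned
  role assignment that keystone never cleaned up.

    def find_orphaned_assignments(assignment_actor_ids, ldap_user_ids):
        # Set difference: actor_ids with no matching LDAP user.
        return set(assignment_actor_ids) - set(ldap_user_ids)

    # e.g. with the row shown in [0] and the user already gone from LDAP:
    orphans = find_orphaned_assignments(
        {'50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47'},
        set())
    print(orphans)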


  
  [0]

  MariaDB [keystone]> select * from assignment where actor_id='50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47';
  +-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+
  | type        | actor_id                                                         | target_id                        | role_id                          | inherited |
  +-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+
  | UserProject | 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47 | 14b2bc91832e455491a9fd4a42c8b19c | 9fe2ff9ee4384b1894a90878d3e92bab |         0 |
  | UserProject | 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47 | 14b2bc91832e455491a9fd4a42c8b19c | bffeb621920e40feb18ce2c28b07d1a1 |         0 |
  +-------------+------------------------------------------------------------------+----------------------------------+----------------------------------+-----------+

  [1]

  Request returned failure status: 401
  Could not find resource 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand
      result = cmd.run(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 92, in run
      column_names, data = self.take_action(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 45, in wrapper
      return func(self, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/identity/v3/user.py", line 251, in take_action
      user = utils.find_resource(identity_client.users, user_id)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 141, in find_resource
      raise exceptions.CommandError(msg)
  CommandError: Could not find resource 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
  clean_up ListUser: Could not find resource 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py", line 112, in run
      ret_val = super(OpenStackShell, self).run(argv)
    File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 255, in run
      result = self.run_subcommand(remainder)
    File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 374, in run_subcommand
      result = cmd.run(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/cliff/display.py", line 92, in run
      column_names, data = self.take_action(parsed_args)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 45, in wrapper
      return func(self, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/identity/v3/user.py", line 251, in take_action
      user = utils.find_resource(identity_client.users, user_id)
    File "/usr/local/lib/python2.7/dist-packages/openstackclient/common/utils.py", line 141, in find_resource
      raise exceptions.CommandError(msg)
  CommandError: Could not find resource 50327bfee89ace875a8ffbe4040cdbc9ec712859f5c8c39a73b36003407f9a47

  END return 

[Yahoo-eng-team] [Bug 1629167] Re: HEAD request blocked because response Content-Length is more than 0

2017-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1629167

Title:
  HEAD request blocked because response Content-Length is more than 0

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  version: keystone 9.2.0
  api: curl -i -X HEAD http://**:35357/v2.0/tokens/**   -H "X-Auth-Token:**"

  result:

  HTTP/1.1 200 OK
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 5420
  X-Openstack-Request-Id: req-c0db94a5-9078-4181-947c-924dfca65a7a
  Date: Fri, 30 Sep 2016 03:31:51 GMT

  We found that the request blocks at this point: the client keeps
  waiting for a body that never arrives.

  I think that because the response body has been set to b'', the
  Content-Length is not reset to 0.
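
  A hedged reproduction sketch (host and token are placeholders, not
  the redacted values above): a HEAD response must carry no body, so a
  client that trusts the non-zero Content-Length will wait forever.

    import requests

    # Placeholder endpoint and token, for illustration only.
    resp = requests.head('http://keystone.example.com:35357/v2.0/tokens/TOKEN',
                         headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    print(resp.status_code, resp.headers.get('Content-Length'))
    # requests does not block here because it knows HEAD has no body,
    # but the header should still be 0 (or absent) for an empty body.
    assert resp.content == b''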

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1629167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640616] Re: Add healthcheck middleware to pipelines

2017-01-14 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1640616

Title:
  Add healthcheck middleware to pipelines

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  https://review.openstack.org/387731
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/keystone" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit eeac2cb6d1bdb7b5330cf24b1d98151876e9f672
  Author: Jesse Keating 
  Date:   Mon Oct 17 17:20:54 2016 -0700

  Add healthcheck middleware to pipelines
  
  This introduces the oslo health check middleware
  
http://docs.openstack.org/developer/oslo.middleware/healthcheck_plugins.html
  into the pipelines. This middleware is useful for load balancers and
  http servers, which can use it to validate that the keystone services are
  operational. This middleware is being used in other services such as
  glance and magnum. This patch provides it for keystone, in an effort to
  spread the usage across all major projects.
  
  This is one less item that operators will have to patch locally.
  
  DocImpact
  
  Change-Id: I19e4fc8f6c6a227068ba7191c1e9c453fc08f061
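
  For reference, a hedged sketch of what the keystone-paste.ini change
  could look like (the filter placement and the surrounding pipeline
  are illustrative; see the review above for the exact wiring):

    [filter:healthcheck]
    use = egg:oslo.middleware#healthcheck

    # Prepend the filter to each pipeline so requests to /healthcheck
    # are answered before authentication, e.g.:
    [pipeline:public_api]
    pipeline = healthcheck sizelimit url_normalize ... public_service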

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1640616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656075] Re: DiscoveryFailure when trying to get resource providers from the scheduler report client

2017-01-14 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656075

Title:
  DiscoveryFailure when trying to get resource providers from the
  scheduler report client

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  I noticed this in a TripleO job:

  http://logs.openstack.org/04/419604/1/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/5d95a8c/logs/nova/nova-compute.txt.gz#_2017-01-12_18_53_43_459

  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager [req-59098025-7c99-41b2-aaa9-0e5714770b3a - - - - -] Error updating resources for node centos-7-osic-cloud1-s3500-6631948.
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager Traceback (most recent call last):
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6537, in update_available_resource_for_node
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 540, in update_available_resource
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     self._update_available_resource(context, resources)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     return f(*args, **kwargs)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 564, in _update_available_resource
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     self._init_compute_node(context, resources)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 451, in _init_compute_node
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     self.scheduler_client.update_resource_stats(self.compute_node)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 60, in update_resource_stats
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     self.reportclient.update_resource_stats(compute_node)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 476, in update_resource_stats
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     compute_node.hypervisor_hostname)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 296, in _ensure_resource_provider
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     rp = self._get_resource_provider(uuid)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 53, in wrapper
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     return f(self, *a, **k)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 209, in _get_resource_provider
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     resp = self.get("/resource_providers/%s" % uuid)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 174, in get
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     endpoint_filter=self.ks_filter, raise_exc=False)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 710, in get
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     return self.request(url, 'GET', **kwargs)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager     return wrapped(*args, **kwargs)
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 467, in request
  2017-01-12 18:53:43.459 15495 ERROR nova.compute.manager

[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-01-14 Thread Alan Pevec
** Also affects: packstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New
Status in Packstack:
  New
Status in puppet-nova:
  New
Status in tripleo:
  Triaged

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting
  the following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup --transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: Sleeping for 5 seconds between tries
  Notice: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: Cell0 is already setup.
  Notice: /Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns: No hosts found to map to cell, exiting.

  The issue seems to be that "nova-manage cell_v2 simple_cell_setup" is
  run as part of the nova database initialization, before any compute
  nodes have been created, and it returns 1 in that case [1]. Note,
  however, that the previous steps (cell0 mapping and schema migration)
  run successfully.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need simple_cell_setup to return 0 when there are no
  compute nodes yet, or a separate command for that step.

  With the current behavior of nova-manage, the only workflow that
  works is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Add all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 simple_cell_setup".

  Am I right? Is there any better alternative?
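
  For concreteness, a hedged sketch of that working ordering (commands
  as they appear in this report; the transport URL is the one from the
  log above):

    # Step 2: migrate the main database schema.
    nova-manage db sync
    # Step 3: bring up nova-compute on all hosts so they self-register.
    # Step 4: map cell0 and the now-existing hosts in one shot.
    nova-manage cell_v2 simple_cell_setup \
        --transport-url='rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
    # Later host additions can be mapped with:
    nova-manage cell_v2 discover_hosts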

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656026] Re: Exceptions don't follow a punctuation convention

2017-01-14 Thread OpenStack Infra
Reviewed: https://review.openstack.org/420187
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=ee2747bac2e6eeb86e6340a2bb3509bc25f115bc
Submitter: Jenkins
Branch: master

commit ee2747bac2e6eeb86e6340a2bb3509bc25f115bc
Author: Gage Hugo 
Date:   Fri Jan 13 15:34:59 2017 -0600

Corrected punctuation on multiple exceptions

As mentioned in the bug report, keystone/exceptions.py has many
exceptions with messages that are not consistent with each other
and have various punctuational differences, such as some ending
with a period while others do not.

This change adds a '.' to the end of many exception messages and
makes small changes to other messages in order to keep all of the
exception messages consistent.

Change-Id: I21cac56ff70dbc2693c6090b887537a7c1f303e1
Closes-Bug: #1656026


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656026

Title:
  Exceptions don't follow a punctuation convention

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  If you happen to take a look through the keystone exception module
  [0], you'll notice that some of the exceptions use proper
  punctuation, while others do not. David Stanek mentioned this in a
  review [1], and we thought it was appropriate to track it as a
  low-hanging-fruit bug.

  We should decide what that convention should be for keystone, then
  apply it to all of our exceptions consistently.
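
  For illustration only (messages invented for this example, not the
  real keystone strings), this is the kind of inconsistency being
  fixed:

    class ValidationError(Exception):
        # Ends with a period.
        message = 'Expecting to find %(attribute)s in %(target)s.'

    class Unauthorized(Exception):
        # Missing the period; the fix normalizes messages like this one.
        message = 'The request you have made requires authentication'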

  [0] https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/exception.py#L105-L118
  [1] https://review.openstack.org/#/c/415895/8/keystone/exception.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656076] Re: The keystone server auth plugin methods could mismatch user_id in auth_context

2017-01-14 Thread OpenStack Infra
Reviewed: https://review.openstack.org/419693
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=0f3f08c3df0dd6c32e685dae6726e945b01ea8c7
Submitter: Jenkins
Branch: master

commit 0f3f08c3df0dd6c32e685dae6726e945b01ea8c7
Author: Morgan Fainberg 
Date:   Thu Jan 12 15:19:48 2017 -0800

Force use of AuthContext object in .authenticate()

Force the keystone.auth.controllers.Auth.authenticate method to
require the use of an AuthContext object instead of something
duck-typed (a dictionary). This ensures the security and integrity
of IDENTITY_KEYS: a plugin can no longer change those values by
circumventing the protections built into AuthContext simply because
a plain dict was passed instead. This is not pythonic; it is being
done for hardening purposes.

Change-Id: I013846af59587d17b15ca4cf546e6372231f576e
Closes-Bug: #1656076


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1656076

Title:
  The keystone server auth plugin methods could mismatch user_id in
  auth_context

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) mitaka series:
  Invalid
Status in OpenStack Identity (keystone) newton series:
  Invalid
Status in OpenStack Identity (keystone) ocata series:
  Fix Released

Bug description:
  The keystone server blindly overwrites the auth_context.user_id in
  each auth method that is run. This means that the last auth_method
  that is run for a given authentication request dictates the user_id.

  While this is not exploitable externally without misconfiguration of
  the external plugin methods and supporting services, this is a bad
  state that could relatively easily result in someone ending up
  authenticated with the wrong user_id.

  The simplest fix would be for the loop in the authentication
  controller that iterates over the methods to verify that the user_id
  does not change between the auth methods executed.

  
https://github.com/openstack/keystone/blob/f8ee249bf08cefd8468aa15c589dab48bd5c4cd8/keystone/auth/controllers.py#L550-L557
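
  A hedged sketch of that check (names illustrative, not the actual
  controller code), as described above:

    def authenticate(auth_context, method_handlers):
        # Run each configured auth method in turn, refusing to continue
        # if any method changes the already-established user_id.
        user_id = None
        for handler in method_handlers:
            handler.authenticate(auth_context)
            current = auth_context.get('user_id')
            if user_id is not None and current != user_id:
                raise RuntimeError('user_id changed between auth methods')
            user_id = current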

  This has been marked as public security for hardening purposes, likely
  a "Class D" https://security.openstack.org/vmt-process.html#incident-
  report-taxonomy

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1656076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp