[Yahoo-eng-team] [Bug 1507078] [NEW] arping for floating IPs fails on newer kernels

2015-10-16 Thread Brian Haley
Public bug reported:

The code that sends gratuitous ARPs was simplified in Liberty because we
started setting the sysctl net.ipv4.ip_nonlocal_bind to 1 in the root
namespace.  In newer kernels (3.19 or so) this sysctl became a
per-namespace attribute, so the arping call now fails because we only
enable non-local binds in the root namespace.

This is an example when run by hand:

$ sudo ip netns exec fip-311e3d4a-00ec-46cc-9928-dbc1a2fe3f9a \
      arping -A -I fg-bb6b6721-78 -c 3 -w 4.5 172.18.128.7
bind: Cannot assign requested address

Failing to get that ARP out can affect connectivity to the floating IP.

To support either kernel, the code should first try setting the sysctl
in the namespace and, if that fails, fall back to setting it in the
root namespace.
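
A minimal sketch of that fallback (helper and constant names are
illustrative, not the actual Neutron patch):

import subprocess

SYSCTL_NONLOCAL_BIND = 'net.ipv4.ip_nonlocal_bind=1'

def enable_nonlocal_bind(namespace):
    try:
        # Newer kernels (~3.19+) expose the sysctl per namespace.
        subprocess.check_call(['ip', 'netns', 'exec', namespace,
                               'sysctl', '-w', SYSCTL_NONLOCAL_BIND])
    except subprocess.CalledProcessError:
        # Older kernels only have the knob in the root namespace.
        subprocess.check_call(['sysctl', '-w', SYSCTL_NONLOCAL_BIND])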

This is a potential backport to stable/liberty.

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507078

Title:
  arping for floating IPs fails on newer kernels

Status in neutron:
  New

Bug description:
  The code that sends gratuitous ARPs was simplified in Liberty because
  we started setting the sysctl net.ipv4.ip_nonlocal_bind to 1 in the
  root namespace.  In newer kernels (3.19 or so) this sysctl became a
  per-namespace attribute, so the arping call now fails because we only
  enable non-local binds in the root namespace.

  This is an example when run by hand:

  $ sudo ip netns exec fip-311e3d4a-00ec-46cc-9928-dbc1a2fe3f9a \
        arping -A -I fg-bb6b6721-78 -c 3 -w 4.5 172.18.128.7
  bind: Cannot assign requested address

  Failing to get that ARP out can affect connectivity to the floating
  IP.

  To support either kernel, the code should first try setting the sysctl
  in the namespace and, if that fails, fall back to setting it in the
  root namespace.

  This is a potential backport to stable/liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507055] [NEW] LBaaS 2.0: Listener create with no tenant id test issue

2015-10-16 Thread Franklin Naval
Public bug reported:

Steps:
1.  Pull down and run the following test code in Tempest: 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py#L98
2.  Examine logs for actual request/response.

Result:
The test test_create_listener_missing_tenant_id creates a tenant_id with
special characters instead of leaving it empty.

Expected:
The test should pass an empty tenant_id into the listener create call.
This is happening because the class variables are being reused by the next
test.  Specifically, self.create_listener_kwargs should be reinitialized
before each test method runs (see the sketch below).
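
A sketch of that reset (the attribute names and default values are assumed
from the request body above, not copied from the real test class):

def setUp(self):
    super(ListenersTestJSON, self).setUp()
    # Rebuild the shared kwargs so a tenant_id mutated by one test does not
    # leak into the listener created by the next test.
    self.create_listener_kwargs = {
        'loadbalancer_id': self.load_balancer_id,
        'protocol': 'HTTP',
        'protocol_port': 8081,
    }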

Log Snippet:
2015-10-15 23:12:27.397 | 2015-10-15 23:12:27.370 | Captured pythonlogging:
2015-10-15 23:12:27.399 | 2015-10-15 23:12:27.371 | ~~~
2015-10-15 23:12:27.400 | 2015-10-15 23:12:27.373 | 2015-10-15 23:12:19,307 
3297 INFO [tempest_lib.common.rest_client] Request 
(ListenersTestJSON:test_create_listener_missing_tenant_id): 201 POST 
http://127.0.0.1:9696/v2.0/lbaas/listeners 0.274s
2015-10-15 23:12:27.401 | 2015-10-15 23:12:27.374 | 2015-10-15 23:12:19,307 
3297 DEBUG[tempest_lib.common.rest_client] Request - Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
2015-10-15 23:12:27.432 | 2015-10-15 23:12:27.375 | Body: 
{"listener": {"protocol": "HTTP", "protocol_port": 8081, "loadbalancer_id": 
"5109250c-e964-4c1b-80d4-c25996fba62f", "tenant_id": "&^%123"}}
2015-10-15 23:12:27.433 | 2015-10-15 23:12:27.376 | Response - Headers: 
{'content-type': 'application/json; charset=UTF-8', 'connection': 'close', 
'content-length': '358', 'x-openstack-request-id': 
'req-abb60691-16f6-4c70-9260-21e8aa79ddfe', 'date': 'Thu, 15 Oct 2015 23:12:19 
GMT', 'status': '201'}
2015-10-15 23:12:27.433 | 2015-10-15 23:12:27.378 | Body: 
{"listener": {"protocol_port": 8081, "protocol": "HTTP", "description": "", 
"default_tls_container_ref": null, "admin_state_up": true, "loadbalancers": 
[{"id": "5109250c-e964-4c1b-80d4-c25996fba62f"}], "tenant_id": "&^%123", 
"sni_container_refs": [], "connection_limit": -1, "default_pool_id": null, 
"id": "c9a3a3f0-6c70-4e6d-b53c-268468a9641b", "name": ""}}


Logs:
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas-2.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507055

Title:
  LBaaS 2.0: Listener create with no tenant id test issue

Status in neutron:
  New

Bug description:
  Steps:
  1.  Pull down and run the following test code in Tempest: 
  
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py#L98
  2.  Examine logs for actual request/response.

  Result:
  The test test_create_listener_missing_tenant_id creates a tenant_id with
  special characters instead of leaving it empty.

  Expected:
  The test should pass an empty tenant_id into the listener create call.
  This is happening because the class variables are being reused by the next
  test.  Specifically, self.create_listener_kwargs should be reinitialized
  before each test method runs.

  Log Snippet:
  2015-10-15 23:12:27.397 | 2015-10-15 23:12:27.370 | Captured pythonlogging:
  2015-10-15 23:12:27.399 | 2015-10-15 23:12:27.371 | ~~~
  2015-10-15 23:12:27.400 | 2015-10-15 23:12:27.373 | 2015-10-15 
23:12:19,307 3297 INFO [tempest_lib.common.rest_client] Request 
(ListenersTestJSON:test_create_listener_missing_tenant_id): 201 POST 
http://127.0.0.1:9696/v2.0/lbaas/listeners 0.274s
  2015-10-15 23:12:27.401 | 2015-10-15 23:12:27.374 | 2015-10-15 
23:12:19,307 3297 DEBUG[tempest_lib.common.rest_client] Request - Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': ''}
  2015-10-15 23:12:27.432 | 2015-10-15 23:12:27.375 | Body: 
{"listener": {"protocol": "HTTP", "protocol_port": 8081, "loadbalancer_id": 
"5109250c-e964-4c1b-80d4-c25996fba62f", "tenant_id": "&^%123"}}
  2015-10-15 23:12:27.433 | 2015-10-15 23:12:27.376 | Response - 
Headers: {'content-type': 'application/json; charset=UTF-8', 'connection': 
'close', 'content-length': '358', 'x-openstack-request-id': 
'req-abb60691-16f6-4c70-9260-21e8aa79ddfe', 'date': 'Thu, 15 Oct 2015 23:12:19 
GMT', 'status': '201'}
  2015-10-15 23:12:27.433 | 2015-10-15 23:12:27.378 | Body: 
{"listener": {"protocol_port": 8081, "protocol": "HTTP", "description": "", 
"default_tls_container_ref": null, "admin_state_up": true, "loadbalancers": 
[{"id": "5109250c-e964-4c1b-80d4-c25996fba62f"}], "tenant_id": "&^%123", 
"sni_container_refs": [], "connection_limit": -1, "default_pool_id

[Yahoo-eng-team] [Bug 1507050] [NEW] LBaaS 2.0: Operating Status Tempest Test Changes

2015-10-16 Thread Franklin Naval
Public bug reported:

SUMMARY:
A gate job for Neutron-LBaaS failed today (20151016).  It was identified that
the failure occurred due to the introduction of new operating statuses;
namely, "DEGRADED".

Per the following document, the valid values for operating_status are
('ONLINE', 'OFFLINE', 'DEGRADED', 'ERROR'):
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html


LOGS/STACKTRACE:
refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

Captured traceback:
2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most recent 
call last):
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' == 
load_balancer['operating_status']
2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError


RECOMMENDED ACTION:
1.  Modify the method _check_status_tree in
neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a valid
value.
2.  Add a wait-for-status poller to check that a "DEGRADED" operating_status
eventually transitions to "ONLINE".  A timeout exception should be raised if
that state is not reached within some number of seconds (see the sketch below).
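
A hedged sketch of such a poller (the helper name, client call and timeout
values are assumptions, not existing neutron-lbaas code):

import time

def wait_for_operating_status(client, load_balancer_id, timeout=300, interval=5):
    # Treat DEGRADED as a transient state and wait for it to become ONLINE.
    deadline = time.time() + timeout
    while time.time() < deadline:
        lb = client.get_load_balancer(load_balancer_id)
        status = lb['operating_status']
        if status == 'ONLINE':
            return
        if status not in ('DEGRADED', 'OFFLINE'):
            raise AssertionError('unexpected operating_status: %s' % status)
        time.sleep(interval)
    raise Exception('load balancer %s did not reach ONLINE within %s seconds'
                    % (load_balancer_id, timeout))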

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas-2.0

** Project changed: astara => f5openstackcommunitylbaas

** Project changed: f5openstackcommunitylbaas => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507050

Title:
  LBaaS 2.0: Operating Status Tempest Test Changes

Status in neutron:
  New

Bug description:
  SUMMARY:
  A gate job for Neutron-LBaaS failed today (20151016).  It was identified that
  the failure occurred due to the introduction of new operating statuses;
  namely, "DEGRADED".

  Per the following document, the valid values for operating_status are
  ('ONLINE', 'OFFLINE', 'DEGRADED', 'ERROR'):
  http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html

  
  LOGS/STACKTRACE:
  refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

  Captured traceback:
  2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most 
recent call last):
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
  2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' 
== load_balancer['operating_status']
  2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError

  
  RECOMMENDED ACTION:
  1.  Modify the method _check_status_tree in
  neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a valid
  value.
  2.  Add a wait-for-status poller to check that a "DEGRADED" operating_status
  eventually transitions to "ONLINE".  A timeout exception should be raised if
  that state is not reached within some number of seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507050] [NEW] LBaaS 2.0: Operating Status Tempest Test Changes

2015-10-16 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

SUMMARY:
A gate job for Neutron-LBaaS failed today (20151016).  It was identified that
the failure occurred due to the introduction of new operating statuses;
namely, "DEGRADED".

Per the following document, the valid values for operating_status are
('ONLINE', 'OFFLINE', 'DEGRADED', 'ERROR'):
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/lbaas-api-and-objmodel-improvement.html


LOGS/STACKTRACE:
refer: 
http://logs.openstack.org/75/230875/12/gate/gate-neutron-lbaasv2-dsvm-listener/18155a8/console.html#_2015-10-15_23_12_27_433

Captured traceback:
2015-10-15 23:12:27.507 | 2015-10-15 23:12:27.462 | ~~~
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.463 | Traceback (most recent 
call last):
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.464 |   File 
"neutron_lbaas/tests/tempest/v2/api/test_listeners_admin.py", line 113, in 
test_create_listener_missing_tenant_id
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.465 | 
listener_ids=[self.listener_id, admin_listener_id])
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.466 |   File 
"neutron_lbaas/tests/tempest/v2/api/base.py", line 288, in _check_status_tree
2015-10-15 23:12:27.508 | 2015-10-15 23:12:27.467 | assert 'ONLINE' == 
load_balancer['operating_status']
2015-10-15 23:12:27.509 | 2015-10-15 23:12:27.469 | AssertionError


RECOMMENDED ACTION:
1.  Modify the method _check_status_tree in
neutron_lbaas/tests/tempest/v2/api/base.py to accept "DEGRADED" as a valid
value.
2.  Add a wait-for-status poller to check that a "DEGRADED" operating_status
eventually transitions to "ONLINE".  A timeout exception should be raised if
that state is not reached within some number of seconds.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas-2.0
-- 
LBaaS 2.0: Operating Status Tempest Test Changes
https://bugs.launchpad.net/bugs/1507050
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461266] Re: Failed logon does not state where user is from (REMOTE_IP)

2015-10-16 Thread Lin Hua Cheng
The fix has to be made to the horizon logger.

The application of the feature is not limited to login; for example, a
user trying to access resources that they don't have privileges on.

Closing on DOA and moving to Horizon.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: django-openstack-auth
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461266

Title:
  Failed logon does not state where user is from (REMOTE_IP)

Status in django-openstack-auth:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user logs on to horizon the status of their logon is logged to
  the apache error.log file.  However this log data does not provide
  anything useful for the configuration of monitoring or security
  controls because it does not provide the REMOTE_IP.

  Since some configurations use ha_proxy and some don't, the logging will
  need to be able to determine whether the user is accessing via a proxy
  or not.  There are several issues with this, as pointed out in this
  article: http://esd.io/blog/flask-apps-heroku-real-ip-spoofing.html.
  I would recommend using a function similar to what is in that post;
  however, to get things working I have used the following code to get
  the log to display the end-user IP address:

  /usr/lib/python2.7/dist-packages/openstack_auth/forms.py

  27a28,34
  > def get_client_ip(request):
  >     x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
  >     if x_forwarded_for:
  >         ip = x_forwarded_for
  >     else:
  >         ip = request.META.get('REMOTE_ADDR')
  >     return ip
  94,95c101,102
  <         msg = 'Login successful for user "%(username)s".' % \
  <             {'username': username}
  ---
  >         msg = '%(remote_ip)s - Login successful for user "%(username)s".' % \
  >             {'username': username, 'remote_ip': get_client_ip(self.request)}
  98,99c105,106
  <         msg = 'Login failed for user "%(username)s".' % \
  <             {'username': username}
  ---
  >         msg = '%(remote_ip)s - Login failed for user "%(username)s".' % \
  >             {'username': username, 'remote_ip': get_client_ip(self.request)}

  It's definitely not the best answer, and in fact it may not even be fully
  functional :), but something is needed to be able to monitor invalid
  attempts; unless something in django can be used to add logic (beyond
  locking accounts) that sends a user to a sink hole based on the number
  of exceptions per session.  But that's beyond the scope of this request :)
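
  A hedged sketch of the approach the article above recommends (the
  trusted_proxies count is an assumption about the deployment; this is not
  existing openstack_auth code):

  def get_client_ip(request, trusted_proxies=1):
      # X-Forwarded-For is a comma-separated chain; with N trusted proxies
      # in front of Django, the Nth address from the end is the real client
      # and anything earlier in the list may be spoofed by the client.
      forwarded = request.META.get('HTTP_X_FORWARDED_FOR', '')
      addrs = [a.strip() for a in forwarded.split(',') if a.strip()]
      if len(addrs) >= trusted_proxies:
          return addrs[-trusted_proxies]
      return request.META.get('REMOTE_ADDR')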

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1461266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507046] [NEW] Launch Instance: Selected images don't reappear when changing sources

2015-10-16 Thread Coleman
Public bug reported:

On Select Source, while 'Image' is picked for 'Select Boot Source', add
one from Avail --> Allocated (I only have one image there). Now switch
'Select Boot Source' to 'Volume'. Now switch back to 'Image' for the
source. It is no longer there.

** Affects: horizon
 Importance: Undecided
 Status: New

** Tags removed: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507046

Title:
  Launch Instance: Selected images don't reappear when changing sources

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On Select Source, while 'Image' is picked for 'Select Boot Source',
  add one from Avail --> Allocated (I only have one image there). Now
  switch 'Select Boot Source' to 'Volume'. Now switch back to 'Image'
  for the source. It is no longer there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507043] [NEW] workflow.js should be named as workflow.module.js

2015-10-16 Thread Shaoquan Chen
Public bug reported:

According to Horizon's Angular code file naming convention, a JS file
that defines a new Angular module should be named xxx.module.js so
that it can be sorted correctly in the JS file list.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1507043

Title:
  workflow.js should be named as workflow.module.js

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  According to Horizon's Angular code file naming convention, a JS file
  that defines a new Angular module should be named xxx.module.js so
  that it can be sorted correctly in the JS file list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507031] Re: Add and then delete a user, results in unexpected error on the Openstack UI

2015-10-16 Thread Steve Martinelli
If this is a keystone bug, which by the sound of it isn't likely, we
would need some logs.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1507031

Title:
  Add and then delete a user, results in unexpected error on the
  Openstack UI

Status in OpenStack Dashboard (Horizon):
  New
Status in Keystone:
  Incomplete

Bug description:
  Here is how to reproduce it:

  1. Install an Ubuntu Openstack on a VM.

  2. Login to the horizon for that VM.

  3. Add a new user role using the following CLI command:
  keystone user-role-add --user <USER> --tenant <TENANT> --role <MEMBER_ROLE>

  4. Remove the user role using the following command:
  keystone user-role-remove --user <USER> --tenant <TENANT> --role <SAME_MEMBER_ROLE>

  5. Refresh horizon; the UI redirects to an error page with the
  following error message:
  "Something went wrong! An unexpected error has occurred. Try refreshing the 
page. If that doesn't help, contact your local administrator."

  
  A screenshot of the UI is attached.

  Please note that refreshing the page does not resolve the issue.
  However, either clearing the browser's cookies/history for that
  session or opening the horizon on an "Incognito" mode may resolve the
  issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1507031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507031] [NEW] Add and then delete a user, results in unexpected error on the Openstack UI

2015-10-16 Thread Nergal Issaie
Public bug reported:

Here is how to reproduce it:

1. Install an Ubuntu Openstack on a VM.

2. Login to the horizon for that VM.

3. Add a new user role using the following CLI command:
keystone user-role-add --user <USER> --tenant <TENANT> --role <MEMBER_ROLE>

4. Remove the user role using the following command:
keystone user-role-remove --user <USER> --tenant <TENANT> --role <SAME_MEMBER_ROLE>

5. Refresh horizon; the UI redirects to an error page with the
following error message:
"Something went wrong! An unexpected error has occurred. Try refreshing the 
page. If that doesn't help, contact your local administrator."


A screenshot of the UI is attached.

Please note that refreshing the page does not resolve the issue.
However, either clearing the browser's cookies/history for that session
or opening the horizon on an "Incognito" mode may resolve the issue.

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "Openstack horizon bug.png"
   
https://bugs.launchpad.net/bugs/1507031/+attachment/4497268/+files/Openstack%20horizon%20bug.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1507031

Title:
  Add and then delete a user, results in unexpected error on the
  Openstack UI

Status in Keystone:
  New

Bug description:
  Here is how to reproduce it:

  1. Install an Ubuntu Openstack on a VM.

  2. Login to the horizon for that VM.

  3. Add a new user role using the following CLI command:
  keystone user-role-add --user <USER> --tenant <TENANT> --role <MEMBER_ROLE>

  4. Remove the user role using the following command:
  keystone user-role-remove --user <USER> --tenant <TENANT> --role <SAME_MEMBER_ROLE>

  5. Refresh horizon; the UI redirects to an error page with the
  following error message:
  "Something went wrong! An unexpected error has occurred. Try refreshing the 
page. If that doesn't help, contact your local administrator."

  
  A screenshot of the UI is attached.

  Please note that refreshing the page does not resolve the issue.
  However, either clearing the browser's cookies/history for that
  session or opening the horizon on an "Incognito" mode may resolve the
  issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1507031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221579] Re: Disabling a tenant/project with ldap is silently ignored

2015-10-16 Thread Henrique Truta
Marked as invalid, as it was solved in
https://bugs.launchpad.net/keystone/+bug/1241134

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1221579

Title:
  Disabling a tenant/project with ldap is silently ignored

Status in Keystone:
  Invalid

Bug description:
  stack@devstack:~$ grep '^driver.*identity' /etc/keystone/keystone.conf
  driver = keystone.identity.backends.ldap.Identity
  stack@devstack:~$ keystone tenant-update --enabled false demo
  stack@devstack:~$ keystone tenant-get demo
  +-+--+
  |   Property  |  Value   |
  +-+--+
  | description |   foo|
  |   enabled   |   True   |
  |  id | d9de3a4e7dc440b78f7c5009ce77cd89 |
  | name|   demo   |
  +-+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1221579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507016] [NEW] openstack nova dont boot with iso and attached volume with block-device

2015-10-16 Thread reiko
Public bug reported:

Hello, it's not resolved in OpenStack Kilo on Ubuntu 14.04.3 LTS:

Package: python-novaclient
Version: 1:2.22.0-0ubuntu1~cloud0
Priority: optional
Section: python

*** /usr/lib/python2.7/dist-packages/novaclient/v2/servers.py  2015-10-16 12:45:11.464930514 -0300
--- /usr/lib/python2.7/dist-packages/novaclient/v2/servers.py.old  2015-10-16 12:43:01.084767772 -0300
*** class ServerManager(base.BootingManagerW
*** 526,540 ****
            body['server']['block_device_mapping'] = \
                self._parse_block_device_mapping(block_device_mapping)
        elif block_device_mapping_v2:
!           # Following logic can't be removed because it will leaves.
!           # a valid boot with both --image and --block-device
!           # failed , see bug 1433609 for more info
!           if image:
!               bdm_dict = {'uuid': image.id, 'source_type': 'image',
!                           'destination_type': 'local', 'boot_index': 0,
!                           'delete_on_termination': True}
!               block_device_mapping_v2.insert(0, bdm_dict)
!           #body['server']['block_device_mapping_v2'] = block_device_mapping_v2

        if nics is not None:
            # NOTE(tr3buchet): nics can be an empty list
--- 526,532 ----
            body['server']['block_device_mapping'] = \
                self._parse_block_device_mapping(block_device_mapping)
        elif block_device_mapping_v2:
!           body['server']['block_device_mapping_v2'] = block_device_mapping_v2

        if nics is not None:
            # NOTE(tr3buchet): nics can be an empty list

When booting from an image together with --block-device, the volume is
not attached.  On the Debian version it works without the patch.

For example:

nova boot --image ubuntu-rescue-remix-12-04.iso --flavor 10 \
--availability-zone nova --block-device \
source=volume,id=2d734ca2-6cb7-4e42-b060-94298ab6c2b8,dest=volume,size=10,shutdown=preserve \
--nic net-id=be9c8e43-6bea-4904-b807-9ddbace19ec7 ejemplo_disco_recover

This boots the ISO image to install onto the volume.

On Ubuntu 14.04.3 LTS it does not work.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- Hello it's not resolved:
+ Hello it's not resolved in Openstack Kilo Unbutu 14.04.3 LTS:
  
- Package: python-novaclient   
+ Package: python-novaclient
  Version: 1:2.22.0-0ubuntu1~cloud0
  Priority: optional
  Section: python
- 
  
  *** /usr/lib/python2.7/dist-packages/novaclient/v2/servers.py 2015-10-16 
12:45:11.464930514 -0300
  --- /usr/lib/python2.7/dist-packages/novaclient/v2/servers.py.old 
2015-10-16 12:43:01.084767772 -0300
  *** class ServerManager(base.BootingManagerW
  *** 526,540 
-   body['server']['block_device_mapping'] = \
-   self._parse_block_device_mapping(block_device_mapping)
-   elif block_device_mapping_v2:
+   body['server']['block_device_mapping'] = \
+   self._parse_block_device_mapping(block_device_mapping)
+   elif block_device_mapping_v2:
  ! # Following logic can't be removed because it will leaves.
  ! # a valid boot with both --image and --block-device
- ! # failed , see bug 1433609 for more info 
+ ! # failed , see bug 1433609 for more info
  ! if image:
  ! bdm_dict = {'uuid': image.id, 'source_type': 'image',
  !'destination_type': 'local', 'boot_index': 0,
  !'delete_on_termination': True}
  ! block_device_mapping_v2.insert(0, bdm_dict)
  ! #body['server']['block_device_mapping_v2'] = 
block_device_mapping_v2
-   
-   if nics is not None:
-   # NOTE(tr3buchet): nics can be an empty list
+ 
+   if nics is not None:
+   # NOTE(tr3buchet): nics can be an empty list
  --- 526,532 
-   body['server']['block_device_mapping'] = \
-   self._parse_block_device_mapping(block_device_mapping)
-   elif block_device_mapping_v2:
+   body['server']['block_device_mapping'] = \
+   self._parse_block_device_mapping(block_device_mapping)
+   elif block_device_mapping_v2:
  ! body['server']['block_device_mapping_v2'] = 
block_device_mapping_v2
-   
-   if nics is not None:
-   # NOTE(tr3buchet): nics can be an empty list
+ 
+   if nics is not None:
+   # NOTE(tr3buchet): nics can be an empty list
  
  When boot by image and block-device dont attach volume. In debian
  version it work without patch:
  
  for example:
  
  nova boot --image ubuntu-rescue-remix-12-04.iso --flavor 10
  --availability-zone nova --block-device
  
source=volume,id=2d734ca2-6cb7-4e42-b060-94298ab6c2b8,dest=volume,size=10,shutdown=preserve
  --nic net-id=be9c8e43-6bea-4904-b807-9ddbace19ec7  ejemplo_disco_recover
  
  bo

[Yahoo-eng-team] [Bug 1473489] Re: Identity API v3 does not accept more than one query parameter

2015-10-16 Thread David Stanek
Since this can't be reproduced and we haven't heard back from the
reporter I'm marking this as Won't Fix. If you feel this is incorrect
please reopen with an explanation.

** Changed in: keystone
   Status: Incomplete => Won't Fix

** Changed in: keystone
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1473489

Title:
  Identity API v3 does not accept more than one query parameter

Status in Keystone:
  Invalid

Bug description:
  When GET /v3/users?name="blah"&enabled="true" is called, the API only
  honors the "name" query and omits the "enabled" query. This is also
  reproducible across many different queries, including /v3/credentials.

  This looks like a repeat of
  https://bugs.launchpad.net/keystone/+bug/1424745

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1473489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507005] Re: Glance reports "400 Bad Request" if URL is "ftp://"

2015-10-16 Thread Ian Cordasco
Glance allows you to use an HTTP store, but FTP is not HTTP; it is a
different protocol entirely. As far as I know, Glance has never had
support for copying from an FTP server. This would need an entirely
different code path for the copy-from case.

** Changed in: glance
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1507005

Title:
  Glance reports "400 Bad Request" if URL is "ftp://"

Status in Glance:
  Opinion

Bug description:
  Guys,

  Glance doesn't import an image from FTP, look:

  ---
  glance image-create --location \
    ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
    --is-public true --disk-format qcow2 --container-format bare \
    --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"

  400 Bad Request
  External sources are not supported:
  'ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2'
  (HTTP 400)
  ---

  I tried it with "--copy-from" instead of "--location":

  ---
  glance image-create --copy-from \
    ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
    --is-public true --disk-format qcow2 --container-format bare \
    --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"
  ---

  But, it doesn't work either...

  However, if I replace "ftp://" with "http://", then it works.

  Nevertheless, my private images are hosted ONLY under FTP, so I
  really need to use "ftp://"...

  I think that this is a bug on Glance.

  NOTE: You can test it by running:

  ---
  wget ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2
  ---

  So, the remote URL is fine...

  Thanks!
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1507005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497461] Re: Fernet tokens fail for some users with LDAP identity backend

2015-10-16 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Also affects: keystone/liberty
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
   Status: New => Triaged

** Changed in: keystone/kilo
   Importance: Undecided => High

** Changed in: keystone/liberty
   Importance: Undecided => High

** Changed in: keystone/liberty
   Status: New => In Progress

** Changed in: keystone/liberty
 Assignee: (unassigned) => Eric Brown (ericwb)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1497461

Title:
  Fernet tokens fail for some users with LDAP identity backend

Status in Keystone:
  Fix Committed
Status in Keystone kilo series:
  Triaged
Status in Keystone liberty series:
  In Progress

Bug description:
  The following bug fixed most situations when using Fernet + the LDAP
  identity backend:
  https://bugs.launchpad.net/keystone/+bug/1459382

  However, some users have trouble, resulting in a UserNotFound exception in 
the logs with a UUID.  Here's the error:
  2015-09-18 20:04:47.313 12979 WARNING keystone.common.wsgi [-] Could not find 
user: 457269632042726f776e203732363230

  So the issue is this.  The user DN query + filter will return my user as:
     CN=Eric Brown 72620,OU=PAO_Users,OU=PaloAlto_California_USA,OU=NALA,OU=SITES,OU=Engineering,DC=vmware,DC=com

  Therefore, I have to use CN as the user id attribute, so my user id
  is "Eric Brown 72620".  The fernet token_formatters.py attempts to
  convert this user id into a UUID, and in my case that is successful:
  it results in a UUID of 457269632042726f776e203732363230.  Of course,
  a user id of 457269632042726f776e203732363230 doesn't exist in LDAP,
  so as a result I get a UserNotFound.  So I don't understand why
  convert_uuid_bytes_to_hex is ever used at all with the LDAP backend.

  For other users, the token_formatters.convert_uuid_bytes_to_hex()
  raises a ValueError and everything works.  Here's an example that
  illustrates the behavior

  >>> import uuid
  >>> uuid_obj = uuid.UUID(bytes='Eric Brown 72620')
  >>> uuid_obj.hex
  '457269632042726f776e203732363230'

  >>> import uuid
  >>> uuid_obj = uuid.UUID(bytes='Your Mama')
  Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.7/uuid.py", line 144, in __init__
  raise ValueError('bytes is not a 16-char string')
  ValueError: bytes is not a 16-char string



  Here's the complete traceback (after adding some additional debug):

  2015-09-18 20:04:47.312 12979 WARNING keystone.common.wsgi [-] EWB Traceback 
(most recent call last):
File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 449, 
in __call__
  response = self.process_request(request)
File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 
238, in process_request
  auth_context = self._build_auth_context(request)
File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 
218, in _build_auth_context
  token_data=self.token_provider_api.validate_token(token_id))
File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 
198, in validate_token
  token = self._validate_token(unique_id)
File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1013, 
in decorate
  should_cache_fn)
File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 640, 
in get_or_create
  async_creator) as value:
File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, 
in __enter__
  return self._enter()
File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, 
in _enter
  generated = self._enter_create(createdtime)
File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, 
in _enter_create
  created = self.creator()
File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 612, 
in gen_value
  created_value = creator()
File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1009, 
in creator
  return fn(*arg, **kw)
File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 
261, in _validate_token
  return self.driver.validate_v3_token(token_id)
File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/core.py", 
line 258, in validate_v3_token
  audit_info=audit_ids)
File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", 
line 441, in get_token_data
  self._populate_user(token_data, user_id, trust)
File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", 
line 275, in _populate_user
  user_ref = self.identity_api.get_user(user_id)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
342, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/p

[Yahoo-eng-team] [Bug 1507005] [NEW] Glance reports "400 Bad Request" if URL is "ftp://"

2015-10-16 Thread Thiago Martins
Public bug reported:

Guys,

Glance doesn't import an image from FTP, look:

---
glance image-create --location \
  ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
  --is-public true --disk-format qcow2 --container-format bare \
  --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"

400 Bad Request
External sources are not supported:
'ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2'
(HTTP 400)
---

I tried it with "--copy-from" instead of "--location":

---
glance image-create --copy-from \
  ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
  --is-public true --disk-format qcow2 --container-format bare \
  --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"
---

But, it doesn't work either...

However, if I replace "ftp://" with "http://", then it works.

Nevertheless, my private images are hosted ONLY under FTP, so I really
need to use "ftp://"...

I think that this is a bug on Glance.

NOTE: You can test it by running:

---
wget ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2
---

So, the remote URL is fine...

Thanks!
Thiago

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1507005

Title:
  Glance reports "400 Bad Request" if URL is "ftp://"

Status in Glance:
  New

Bug description:
  Guys,

  Glance doesn't import an image from FTP, look:

  ---
  glance image-create --location \
    ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
    --is-public true --disk-format qcow2 --container-format bare \
    --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"

  400 Bad Request
  External sources are not supported:
  'ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2'
  (HTTP 400)
  ---

  I tried it with "--copy-from" instead of "--location":

  ---
  glance image-create --copy-from \
    ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 \
    --is-public true --disk-format qcow2 --container-format bare \
    --name "Debian 8.2.0 - Jessie - 64-bit - Cloud Based Image"
  ---

  But, it doesn't work either...

  However, if I replace "ftp://" with "http://", then it works.

  Nevertheless, my private images are hosted ONLY under FTP, so I
  really need to use "ftp://"...

  I think that this is a bug on Glance.

  NOTE: You can test it by running:

  ---
  wget ftp://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2
  ---

  So, the remote URL is fine...

  Thanks!
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1507005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506986] [NEW] documentation needs to be clarified about differences between subtree_as_ids and subtree_as_list

2015-10-16 Thread Raildo Mascena de Sousa Filho
Public bug reported:

The current documentation in the Identity API v3 only explains what the API
returns for subtree_as_ids and subtree_as_list.
The same documentation needs to be added for parents_as_ids and parents_as_list.
We need to explain the difference in the API response between these two
operations and what the expected use for each is.

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- documentation needs to be clarified about differences between subtree_as_ids 
and subtree_as_list  and the same for parents_as_ids and  subtree_aslist
+ documentation needs to be clarified about differences between subtree_as_ids 
and subtree_as_list

** Description changed:

  The current documentation in the idendity API V3 just explain what is the API 
returns about subtree_as_ids and subtree_as_list.
- We need to explain what is the difference in the APi response between this 
two operations and what is the excepted use for it.
+ The same documentation needs to be add for parents_as_ids and parents_as_list
+ We need to explain what is the difference in the API response between this 
two operations and what is the excepted use for it.

** Changed in: keystone
 Assignee: (unassigned) => Raildo Mascena de Sousa Filho (raildo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1506986

Title:
  documentation needs to be clarified about differences between
  subtree_as_ids and subtree_as_list

Status in Keystone:
  New

Bug description:
  The current documentation in the Identity API v3 only explains what the API
  returns for subtree_as_ids and subtree_as_list.
  The same documentation needs to be added for parents_as_ids and parents_as_list.
  We need to explain the difference in the API response between these two
  operations and what the expected use for each is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1506986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2015-10-16 Thread Nikolay Makhotkin
** Changed in: python-mistralclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  Fix Committed
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Committed
Status in Mistral:
  Fix Released
Status in murano:
  Fix Committed
Status in OpenStack Compute (nova):
  Won't Fix
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Committed
Status in Sahara:
  Fix Released
Status in zaqar:
  Fix Committed

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
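
  A minimal illustration of why the argument order matters (standalone
  example, not taken from any of the affected projects):

  import unittest

  class ExampleTest(unittest.TestCase):
      def test_argument_order(self):
          observed = 6 * 7
          expected = 42
          # assertEqual(expected, observed): if this ever fails, the message
          # reads "42 != <observed>", i.e. the known-good value comes first.
          self.assertEqual(expected, observed)

  if __name__ == '__main__':
      unittest.main()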

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499555] Re: You can crash keystone or make the DB very slow by assigning many roles

2015-10-16 Thread Jeremy Stanley
Since this report concerns a possible security risk, an incomplete
security advisory task has been added while the core security reviewers
for the affected project or projects confirm the bug and discuss the
scope of any vulnerability along with potential solutions.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1499555

Title:
  You can crash keystone or make the DB very slow by assigning many
  roles

Status in Keystone:
  Triaged
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  This is applicable for UUID and PKI tokens.

  The token table has an extra column where we store role information.  It
  is a blob with a 64K limit.  Basically, we can do the following to fill
  the blob (a client-side sketch follows below):

     Say user is U, and project is P
     for i = 1 to 1000 (or any large number):
         role x = create role i with some large name
         assign role x to user U and project P
     create a project-scoped token for user U
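
  A hedged sketch of that loop using python-keystoneclient v3 (the endpoint,
  credentials and the U/P lookups are placeholders, not taken from the report):

  from keystoneauth1.identity import v3
  from keystoneauth1 import session
  from keystoneclient.v3 import client

  auth = v3.Password(auth_url='http://controller:5000/v3',
                     username='admin', password='secret', project_name='admin',
                     user_domain_id='default', project_domain_id='default')
  keystone = client.Client(session=session.Session(auth=auth))

  user_u = keystone.users.find(name='U')
  project_p = keystone.projects.find(name='P')
  for i in range(1000):
      # Long role names make each assignment consume more of the 64K blob.
      role = keystone.roles.create(name='long-role-name-%04d-%s' % (i, 'x' * 200))
      keystone.roles.grant(role, user=user_u, project=project_p)
  # A project-scoped token for user U now has to carry all of these roles.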

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1499555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490497] Re: pep8-incompliant filenames missing in gate console logs

2015-10-16 Thread David Stanek
Marking as invalid because there isn't anything that can be fixed in
Python. Maybe this is a flake8 issue?

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490497

Title:
  pep8-incompliant filenames missing in gate console logs

Status in hacking:
  Incomplete
Status in Keystone:
  Invalid

Bug description:
  Jenkins reported gate-keystone-pep8 failure on patch set 12 @ 
https://review.openstack.org/#/c/209524/  .
  But the console logs didn't contain the filenames that are incompliant with 
pep8.
  
http://logs.openstack.org/24/209524/12/check/gate-keystone-pep8/b2b7500/console.html
  
  ...
  2015-08-30 22:34:11.101 | pep8 runtests: PYTHONHASHSEED='3894393079'
  2015-08-30 22:34:11.102 | pep8 runtests: commands[0] | flake8
  2015-08-30 22:34:11.102 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-30 22:34:16.619 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-30 22:34:16.620 | ___ summary 

  2015-08-30 22:34:16.620 | ERROR:   pep8: commands failed
  ...
  

  Typically, it contains the filenames as well.
  Eg. Console logs pf patchset 1 contains the filenames.
  
http://logs.openstack.org/24/209524/1/check/gate-keystone-pep8/19f2885/console.html
  
  ...
  2015-08-05 14:45:15.247 | pep8 runtests: PYTHONHASHSEED='1879982710'
  2015-08-05 14:45:15.247 | pep8 runtests: commands[0] | flake8
  2015-08-05 14:45:15.247 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-05 14:45:20.518 | ./keystone/assignment/backends/ldap.py:37:5: E301 
expected 1 blank line, found 0
  2015-08-05 14:45:20.518 | @versionutils.deprecated(
  2015-08-05 14:45:20.518 | ^
  ...
  2015-08-05 14:45:20.872 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-05 14:45:20.872 | ___ summary 

  2015-08-05 14:45:20.873 | ERROR:   pep8: commands failed
  ...
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/hacking/+bug/1490497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506958] [NEW] TypeError: object.__new__(thread.lock) is not safe, use thread.lock.__new__()

2015-10-16 Thread Dimitri John Ledkov
Public bug reported:

When using /usr/bin/nova-api, running "$ openstack availability zone
list" works fine.

If using the wsgi scripts and running nova-api via e.g. uwsgi, the same
client command fails as follows:

2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
[req-184fd1f3-ae97-49d0-85dd-05ef08800238 0e56b818bc9c4eaea4b8d6a2f5da6227 
906359c0c71749ceb27e46612e0419ce - - -] Unexpected exception in API method
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/availability_zone.py",
 line 115, in detail
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions return 
self._describe_availability_zones_verbose(context)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/availability_zone.py",
 line 61, in _describe_availability_zones_verbose
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions ctxt = 
context.elevated()
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/context.py", line 198, in elevated
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions context = 
copy.deepcopy(self)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 334, in _reconstruct
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions state = 
deepcopy(state, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 163, in deepcopy
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
copier(x, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
y[deepcopy(key, memo)] = deepcopy(value, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 334, in _reconstruct
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions state = 
deepcopy(state, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 163, in deepcopy
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
copier(x, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
y[deepcopy(key, memo)] = deepcopy(value, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 329, in _reconstruct
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
callable(*args)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy_reg.py", line 93, in __newobj__
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions return 
cls.__new__(cls, *args)
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions TypeError: 
object.__new__(thread.lock) is not safe, use thread.lock.__new__()
2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions
2015-10-16 16:58:20.721 18938 INFO nova.api.openstack.wsgi 
[req-184fd1f3-ae97-49d0-85dd-05ef08800238 0e56b818bc9c4eaea4b8d6a2f5da6227 
906359c0c71749ceb27e46612e0419ce - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.



Looks like a déjà vu of
https://bugs.launchpad.net/python-novaclient/+bug/1123561 but I am not certain.

This is with the Liberty final release (or so I believe at the moment).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this 

[Yahoo-eng-team] [Bug 1506948] [NEW] Release request of networking-cisco on stable/kilo: 2015.1.1

2015-10-16 Thread Brian Demers
Public bug reported:


Branch:   stable/kilo
>From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
New Tag:  2015.1.1

This release contains the following changes:

d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6Set default branch for 
stable/kilo
f08fb31f20c2d8cc1e6b71784cdfd9604895e16dML2 cisco_nexus MD: 
VLAN not created on switch
d400749e43e9d5a1fc92683b40159afce81edc95Create knob to prevent 
caching ssh connection
0050ea7f1fb3c22214d7ca49cfe641da86123e2cBubble up exceptions 
when Nexus replay enabled
54fca8a047810304c69990dce03052e45f21cc23Quick retry connect to 
resolve stale ncclient handle
0c496e1d7425984bf9686b11b5c0c9c8ece23bf3Update requirements.txt 
for ML2 Nexus
393254fcfbe3165e4253801bc3be03e15201c36dUpdate requirements.txt
75fd522b36f7b67dc4152e461f4e5dfa26b4ff31Remove duplicate 
entrypoints in setup.cfg
178f40f2a43192687188661d5fcedf394321e191Cisco UCSM driver 
updates to handle duplicate creations
11f5f29af3e5c4a2ed4b42471e32db49180693dfClean up of UCS Manager 
connection handle management.
ad010718f978763e399f0bf9a0976ba51d3334ebFix Cisco CSR1KV script 
issues
a8c4bd753ba254b062612c1bcd85000656ebfa44Replace retry count 
with replay failure stats
db1bd250b95abfc267c8a75891ba56105cbeed8cAdd scripts to enable 
CSR FWaaS service
f39c6a55613a274d6d0e67409533edefbca6f9a7Fix N1kv trunk driver: 
same mac assigned to ports created
a118483327f7a217dfedfe69da3ef91f9ec6a169Update netorking-cisco 
files for due to neutrons port dictionary subnet being replaced with
b60296644660303fb2341ca6495611621fc486e7ML2 cisco_nexus MD: 
Config hangs when replay enabled
76f7be8758145c61e960ed37e5c93262252f56ffMove UCSM from 
extension drivers to mech drivers
ffabc773febb9a8df7853588ae27a4fe3bc4069bML2 cisco_nexus MD: 
Multiprocess replay issue
77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57cML2 cisco_nexus MD: If 
configured, close ssh sessions
825cf6d1239600917f8fa545cc3745517d363838Part II-Detect switch 
failure earlier-Port Create
9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8Retry count not reset 
on successful replay
6afe5d8a6d11db4bc2db29e6a84dc709672b1d69ML2 Nexus decomposition 
not complete for Nexus
ac84fcb861bd594a5a3773c32e06b3e58a729308Delete fails after 
switch reset (replay off)
97720feb4ef4d75fa190a23ac10038d29582b001Call to get nexus type 
for Nexus 9372PX fails
87fb3d6f75f9b0ae574df17b494421126a636199Detect switch failure 
earlier during port create
b38e47a37977634df14846ba38aa38d7239a1adcEnable the CSR1kv 
devstack plugin for Kilo
365cd0f94e579a4c885e6ea9c94f5df241fb2288Sanitize policy profile 
table on neutron restart
4a6a4040a71096b31ca5c283fd0df15fb87aeb38Cisco Nexus1000V: Retry 
mechanism for VSM REST calls
7bcec734cbc658f4cd0792c625aff1a3edc73208Moved N1kv section from 
neutron tree to stackforge
4970a3e279995faf9aff402c96d4b16796a00ef5N1Kv: Force syncing BDs 
on neutron restart
f078a701931986a2755d340d5f4a7cc2ab095bb3s/stackforge/openstack/g
151f6f6836491b77e0e788089e0cf9edbe9b7e00Update .gitreview file 
for project rename
876c25fbf7e3aa7f8a44dd88560a030e609648d5Bump minor version 
number to enable development
a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9Sync notification to 
VSM & major sync refactoring


NOTE: this is a Kilo release, so I'm not sure if we should follow the 
post-versioning step from: 
http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html#sub-project-release-process

** Affects: networking-cisco
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 2015.1.1

Status in networking-cisco:
  New
Status in neutron:
  New

Bug description:
  
  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  2015.1.1

  This release contains the following changes:

d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6Set default branch for 
stable/kilo
f08fb31f20c2d8cc1e6b71784cdfd9604895e16dML2 cisco_nexus MD: 
VLAN not created on switch
d400749e43e9d5a1fc92683b40159afce81edc95Create knob to prevent 
caching ssh connection
0050ea7f1fb3c22214d7ca49cfe641da86123e2cBubb

[Yahoo-eng-team] [Bug 1506942] [NEW] JS unit test statement coverage is low

2015-10-16 Thread Matt Borland
Public bug reported:

The JS unit test statement coverage in the ./horizon project has a
threshold of 68%, which is far too low.

There are a few places where coverage is particularly bad that should be
given more tests.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506942

Title:
  JS unit test statement coverage is low

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The JS unit test statement coverage in the ./horizon project has a
  threshold of 68%, which is far too low.

  There are a few places where coverage is particularly bad that should
  be given more tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506935] [NEW] Release Liberty

2015-10-16 Thread Salvatore Orlando
Public bug reported:

The vmware-nsx subproject owners kindly ask the neutron release team to
push a tag for the Liberty release.

Thanks in advance.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: vmware-nsx
 Importance: Undecided
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: release-subproject

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: release-subproject

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506935

Title:
  Release Liberty

Status in neutron:
  New
Status in vmware-nsx:
  New

Bug description:
  The vmware-nsx subproject owners kindly ask the neutron release team
  to push a tag for the Liberty release.

  Thanks in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506934] [NEW] The exception type is wrong and makes the except block not work

2015-10-16 Thread Hong Hui Xiao
Public bug reported:

With many HA routers, I restarted the l3-agent and found this error in
the log:

2015-10-14 22:24:19.640 31246 INFO eventlet.wsgi.server [-] Traceback (most 
recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 442, in 
handle_one_response
result = self.application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 59, in 
__call__
self.enqueue(router_id, state)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 65, in 
enqueue
self.agent.enqueue_state_change(router_id, state)
  File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 119, in 
enqueue_state_change
ri = self.router_info[router_id]
KeyError: 'aec00e20-ebe0-4979-858b-cb411dcd1bb6'

Checking the code, I find that:

def enqueue_state_change(self, router_id, state):
LOG.info(_LI('Router %(router_id)s transitioned to %(state)s'),
 {'router_id': router_id,
  'state': state})

try:
ri = self.router_info[router_id]
except AttributeError:
LOG.info(_LI('Router %s is not managed by this agent. It was '
 'possibly deleted concurrently.'), router_id)
return

KeyError should be expected here according to the context.
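
For clarity, a minimal sketch of the suggested fix (only the except clause
changes, everything else stays as quoted above):

    try:
        ri = self.router_info[router_id]
    except KeyError:
        # the dict lookup raises KeyError, not AttributeError, so this
        # branch now actually runs when the router is unknown
        LOG.info(_LI('Router %s is not managed by this agent. It was '
                     'possibly deleted concurrently.'), router_id)
        return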

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506934

Title:
  The exception type is wrong and makes the except block not work

Status in neutron:
  New

Bug description:
  With many HA routers, I restarted the l3-agent and found this error in
  the log:

  2015-10-14 22:24:19.640 31246 INFO eventlet.wsgi.server [-] Traceback (most 
recent call last):
File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 442, in 
handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 59, in 
__call__
  self.enqueue(router_id, state)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 65, in 
enqueue
  self.agent.enqueue_state_change(router_id, state)
File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 119, 
in enqueue_state_change
  ri = self.router_info[router_id]
  KeyError: 'aec00e20-ebe0-4979-858b-cb411dcd1bb6'

  Checking the code, I find that:

  def enqueue_state_change(self, router_id, state):
  LOG.info(_LI('Router %(router_id)s transitioned to %(state)s'),
   {'router_id': router_id,
'state': state})

  try:
  ri = self.router_info[router_id]
  except AttributeError:
  LOG.info(_LI('Router %s is not managed by this agent. It was '
   'possibly deleted concurrently.'), router_id)
  return

  KeyError should be expected here according to the context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506926] [NEW] Containers page generates ajax request for every container on page

2015-10-16 Thread Paul Karikh
Public bug reported:

The project/containers/ page generates an AJAX query for every single container 
when the page is loaded.
If there are a lot of containers on the page, this behaviour makes the page 
unresponsive and very slow.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506926

Title:
  Containers page generates ajax request for every container on page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The project/containers/ page generates an AJAX query for every single 
container when the page is loaded.
  If there are a lot of containers on the page, this behaviour makes the page 
unresponsive and very slow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506910] [NEW] Nova DB deadlock

2015-10-16 Thread Andrea Frittoli
Public bug reported:

I hit the following deadlock in a dsvm job:

http://paste.openstack.org/show/476503/

The full log is here:
http://logs.openstack.org/00/234200/5/experimental/gate-tempest-dsvm-neutron-full-test-accounts/4dccd24/logs/screen-n-api.txt.gz#_2015-10-16_13_23_36_379

The exception is:
2015-10-16 13:23:36.379 27391 ERROR nova.api.openstack.extensions DBDeadlock: 
(pymysql.err.InternalError) (1213, u'Deadlock found when trying to get lock; 
try restarting transaction') [SQL: u'INSERT INTO instance_extra (created_at, 
updated_at, deleted_at, deleted, instance_uuid, numa_topology, pci_requests, 
flavor, vcpu_model, migration_context) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, 
%s, %s)'] [parameters: (datetime.datetime(2015, 10, 16, 13, 23, 36, 360147), 
None, None, 0, 'b4091b06-48bf-4cc1-9348-54574f1c8537', None, '[]', '{"new": 
null, "old": null, "cur": {"nova_object.version": "1.1", "nova_object.name": 
"Flavor", "nova_object.data": {"disabled": false, "root_gb": 0, "name": 
"m1.nano", "flavorid": "42", "deleted": false, "created_at": 
"2015-10-16T13:21:28Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 64, 
"vcpus": 1, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": 
true, "deleted_at": null, "vcpu_weight": 0, "id": 6}, "nova_object.namespace": 
"nova"}}', No
 ne, None)]

I have no details on how to reproduce - it's a random failure on a test
that otherwise normally passes.
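
If this keeps recurring, one common mitigation - shown here only as a hedged
sketch, not as the fix Nova will necessarily take - is to wrap the offending DB
API call with oslo.db's deadlock retry decorator; the function name below is a
hypothetical placeholder:

    from oslo_db import api as oslo_db_api

    # Retry the transaction when MySQL reports a deadlock instead of
    # letting DBDeadlock bubble up to the API layer.
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def _instance_extra_create(context, values):
        # ... perform the INSERT INTO instance_extra here ...
        pass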

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506910

Title:
  Nova DB deadlock

Status in OpenStack Compute (nova):
  New

Bug description:
  I hit the following deadlock in a dsvm job:

  http://paste.openstack.org/show/476503/

  The full log is here:
  
http://logs.openstack.org/00/234200/5/experimental/gate-tempest-dsvm-neutron-full-test-accounts/4dccd24/logs/screen-n-api.txt.gz#_2015-10-16_13_23_36_379

  The exception is:
  2015-10-16 13:23:36.379 27391 ERROR nova.api.openstack.extensions DBDeadlock: 
(pymysql.err.InternalError) (1213, u'Deadlock found when trying to get lock; 
try restarting transaction') [SQL: u'INSERT INTO instance_extra (created_at, 
updated_at, deleted_at, deleted, instance_uuid, numa_topology, pci_requests, 
flavor, vcpu_model, migration_context) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, 
%s, %s)'] [parameters: (datetime.datetime(2015, 10, 16, 13, 23, 36, 360147), 
None, None, 0, 'b4091b06-48bf-4cc1-9348-54574f1c8537', None, '[]', '{"new": 
null, "old": null, "cur": {"nova_object.version": "1.1", "nova_object.name": 
"Flavor", "nova_object.data": {"disabled": false, "root_gb": 0, "name": 
"m1.nano", "flavorid": "42", "deleted": false, "created_at": 
"2015-10-16T13:21:28Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 64, 
"vcpus": 1, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": 
true, "deleted_at": null, "vcpu_weight": 0, "id": 6}, "nova_object.namespace": 
"nova"}}', 
 None, None)]

  I have no details on how to reproduce - it's a random failure on a
  test that otherwise normally passes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506891] [NEW] JS Unit Test Branch Coverage is Low

2015-10-16 Thread Matt Borland
Public bug reported:

The JS unit test branch coverage is within 0.25% of its threshold,
meaning slight variations may cause patches to trigger coverage
failures.

There are a few places where coverage is particularly bad that should be
given more tests.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506891

Title:
  JS Unit Test Branch Coverage is Low

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The JS unit test branch coverage is within 0.25% of its threshold,
  meaning slight variations may cause patches to trigger coverage
  failures.

  There are a few places where coverage is particularly bad that should
  be given more tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480334] Re: can't use "$" in password for ldap authentication

2015-10-16 Thread Boris Bobrov
I'm marking this as invalid for keystone since it affects all components
that use oslo_config.
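
For reference, assuming the root cause is oslo.config's value interpolation
(where '$' starts a substitution), the usual workaround is to escape each
literal dollar sign as '$$' in the config file - a sketch only, to be verified
against the oslo.config version in use:

    [ldap]
    # literal "Pa$$w0rd": every "$" is doubled so oslo.config does not
    # try to interpolate another option
    password = Pa$$$$w0rd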

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480334

Title:
  can't use "$" in password for ldap authentication

Status in Keystone:
  Invalid
Status in oslo.config:
  Won't Fix

Bug description:
  keystone can't connect to ldap server if "$" used in password.

  keystone.tld.conf

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  [assignment]
  driver = keystone.assignment.backends.sql.Assignment

  [ldap]
  url=ldap://172.16.56.46:389
  user=admin...@keystone.tld
  password=Pa$$w0rd
  suffix=dc=keystone,dc=tld
  query_scope = sub

  user_tree_dn=dc=keystone,dc=tld
  user_objectclass=person
  user_id_attribute=cn
  #user_name_attribute=userPrincipalName
  user_name_attribute=cn

  
  use_pool = true
  pool_size = 10
  pool_retry_max = 3
  pool_retry_delay = 0.1
  pool_connection_timeout = -1
  pool_connection_lifetime = 600

  
  use_auth_pool = true
  auth_pool_size = 100
  auth_pool_connection_lifetime = 60

  debug_level = 4095

  
  Debug from log:
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP init: url=ldap://172.16.56.46:389
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP init: use_tls=False 
tls_cacertfile=None tls_cacertdir=None tls_req_cert=2 tls_avail=1
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP bind: 
who=CN=admin_ad,CN=Users,DC=keystone,DC=tld
  <15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
  <14>Jul 31 14:00:04 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 
14:00:04] "OPTIONS / HTTP/1.0" 300 919 0.143915
  <15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
  <14>Jul 31 14:00:05 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 
14:00:05] "OPTIONS / HTTP/1.0" 300 921 0.155419
  <11>Jul 31 14:00:05 node-1 keystone-all {'info': '80090308: LdapErr: 
DSID-0C0903C5, comment: AcceptSecurityContext error, data 52e, v2580', 'desc': 
'Invalid credentials'}

  while I can connect to the server with ldapsearch

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489260] Re: trust details unavailable for admin token

2015-10-16 Thread David Stanek
Closing based on Steve's comments. Please reopen if you don't think this
is reasonable.

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489260

Title:
  trust details unavailable for admin token

Status in Keystone:
  Won't Fix

Bug description:
  When authenticated via admin token, trusts details are not available.

  Trusts can be listed:
  ---
  # openstack trust list -f csv
  "ID","Expires At","Impersonation","Project ID","Trustee User ID","Trustor 
User ID"
  
"259d57b4998c484892ae3bdd7a84f147","2101-01-01T01:01:01.00Z",False,"a41030cd0872497893c0f00a29996961","64eea97a9ea54981a41cc7e40944a181","6bb8aef337134b948dcbc0bd6ac34633"
  ---

  But details cannot be shown:
  ---
  # openstack trust show 259d57b4998c484892ae3bdd7a84f147
  ERROR: openstack No trust with a name or ID of 
'259d57b4998c484892ae3bdd7a84f147' exists.
  ---

  From the debug mode we can see the rejected authorization to perform the 
requested action:
  http://paste.openstack.org/raw/427927/

  I discussed the issue with jamielennox who confirmed that the trust details 
are visible only by the trustor/trustee:
  
https://github.com/openstack/keystone/blob/master/keystone/trust/controllers.py#L75

  
  But I believe (and jamielennox) that the admin token should have access to it 
too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503755] Re: Admin with project-scoped token unable to grant, check, list, revoke roles for domain group/user

2015-10-16 Thread Boris Bobrov
Given Dolph's comment, I'm marking this bug as invalid. Feel free to
reopen if you still think there is a bug.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1503755

Title:
  Admin with project-scoped token unable to grant, check, list, revoke
  roles for domain group/user

Status in Keystone:
  Invalid

Bug description:
  Prerequisites:
  1)Create group and user in some domain
  2)Create some test role
  3)Grant test role to domain group and to domain user

  Steps to reproduce:
  1)Get project-scoped token for admin user (using API: 
http://address:port/v3/auth/tokens) with header "Content-Type: 
application/json" and body
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {"
    "name": "admin",
    "domain": { "id": "default" },
    "password": "adminpwd"
  }
    }
  },
  "scope": {
    "project": {
  "name": "project_name",
  "domain": { "id": "default" }
    }
  }
    }
  }

  2)Using token from step 1 (from header "X-Subject-Token") check role
  for domain group/user (HEAD type of request, API:
  http://address:port/v3/domains/{domain_id}/groups/{group_id}/roles/{role_id}
  and ​API:
  http://address:port/v3/domains/{domain_id}/users/{user_id}/roles/{role_id})
  with headers "Content-Type: application/json" and "X-Auth-Token:
  token_from_step_1"

  Expected result:
  Admin with a project-scoped token should be able to check roles for a domain group/user

  Actual result:
  Admin with a project-scoped token can't check roles for a domain group/user - there is a 
403 HTTP code (Forbidden) and "No response received" in the body of the response

  3)Using token from step 1 (from header "X-Subject-Token") list roles
  for domain group/user (HEAD type of request, API:
  http://address:port/v3/domains/{domain_id}/groups/{group_id}/roles and
  ​API:
  http://address:port/v3/domains/{domain_id}/users/{user_id}/roles) with
  headers "Content-Type: application/json" and "X-Auth-Token:
  token_from_step_1"

  Expected result:
  Admin with a project-scoped token should be able to list roles for a domain group/user

  Actual result:
  Admin with a project-scoped token can't list roles for a domain group/user - there is a 
403 HTTP code (Forbidden) and the following body of response:
  {
    "error": {
  "message": "You are not authorized to perform the requested action: 
identity:list_grants (Disable debug mode to suppress these details.)",
  "code": 403,
  "title": "Forbidden"
    }
  }

  But an admin with a domain-scoped token can check and list roles for a domain
  group/user, and can also check and list roles for a project group/user.

  The same for grant and revoke roles for/from domain group/user.

  In policy.json are following:
  "admin_on_project_filter" : "rule:cloud_admin or (rule:admin_required
  and (project_id:%(scope.project.id)s or
  domain_id:%(target.project.domain_id)s))",
  "create_grant": "rule:cloud_admin or rule:domain_admin_for_grants or 
rule:project_admin_for_grants",
  "check_grant": "rule:cloud_admin or rule:domain_admin_for_grants or 
rule:project_admin_for_grants",
  "list_grants": "rule:cloud_admin or rule:domain_admin_for_grants or 
rule:project_admin_for_grants",
  "revoke_grant": "rule:cloud_admin or rule:domain_admin_for_grants or 
rule:project_admin_for_grants",

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1503755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506862] [NEW] VethFixture: smart veth delete

2015-10-16 Thread Cedric Brandily
Public bug reported:

VethFixture cleanup tries to delete veths from their namespaces even if
they don't exist (without crashing the cleanup); this leaves an extra
trace when an error is raised, which can be confusing:

Command: ['sudo', '-n', 'ip', 'netns', 'exec', 
'test-ddc6fc89-5159-44e2-ba15-099a10adf5bc', 'ip', 'link', 'del', 
'test-veth114005']
Exit code: 1
Stdin: 
Stdout: 
Stderr: Cannot open network namespace 
"test-ddc6fc89-5159-44e2-ba15-099a10adf5bc": No such file or directory

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: In Progress


** Tags: functional-tests low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506862

Title:
  VethFixture: smart veth delete

Status in neutron:
  In Progress

Bug description:
  VethFixture cleanup tries to delete veths from their namespaces even
  if they don't exist (without crashing the cleanup); this leaves an
  extra trace when an error is raised, which can be confusing:

  Command: ['sudo', '-n', 'ip', 'netns', 'exec', 
'test-ddc6fc89-5159-44e2-ba15-099a10adf5bc', 'ip', 'link', 'del', 
'test-veth114005']
  Exit code: 1
  Stdin: 
  Stdout: 
  Stderr: Cannot open network namespace 
"test-ddc6fc89-5159-44e2-ba15-099a10adf5bc": No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-10-16 Thread hardik
** Changed in: mistral
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  Fix Committed
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Committed
Status in Mistral:
  Fix Released
Status in murano:
  Fix Committed
Status in OpenStack Compute (nova):
  Won't Fix
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  In Progress
Status in python-mistralclient:
  In Progress
Status in Python client library for Zaqar:
  Fix Committed
Status in Sahara:
  Fix Released
Status in zaqar:
  Fix Committed

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
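
  For illustration, the convention these projects follow is
  assertEqual(expected, observed), so the failure message labels the values
  correctly - a minimal sketch:

      # argument order is wrong: on failure the message reports the values
      # the wrong way around
      self.assertEqual(response.status_code, 200)

      # correct order for these projects: expected value first
      self.assertEqual(200, response.status_code)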

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504666] Re: RemoteError not properly caught during live migraion

2015-10-16 Thread Lauren Taylor
There is no error. Made a mistake in debugging. Canceling this bug.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504666

Title:
  RemoteError not properly caught during live migraion

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  API fails during live migration with a 500 internal server error.

  
https://127.0.0.1:8774/v2/8c87f173ba7c47cbb4f57eebe85479c1/servers/d53b954a-7323-4d88-a5fc-14c0672a704e/action
  {
  "os-migrateLive": {
  "host": "8231E2D_109EFCT",
  "block_migration": false,
  "disk_over_commit": false
  }
  }

  The correct error should be 400 BadRequest, as the error raised should
  be RemoteError, not MigrationError.

  Nova-api logs:

  MigrationError(u'Migration error: Remote error:error message)
  [u'Traceback (most recent call last):\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply\nexecutor_callback))\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch\nexecutor_callback)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 129, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in 
wrapped\npayload)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in 
wrapped\nreturn f(self, context, *args, **kw)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 352, in 
decorated_function\nLOG.warning(msg, e, instance=instance)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 325, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
 
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 89, in 
wrapped\npayload)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 72, in 
wrapped\nreturn f(self, context, *args, **kw)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 402, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 380, in 
decorated_function\nkwargs[\'instance\'], e, sys.exc_info())\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 368, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5023, 
in check_can_live_migrate_destination\nblock_migration, 
disk_over_commit)\n', u'  
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'   
  raise exception.MigrationError(reason=six.text_type(ex))\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504686] Re: Keystone errors on token requests for users in recreated tenants when using memcache

2015-10-16 Thread Boris Bobrov
I agree with the above. You are supposed to put all the servers you have
to [cache]memcache_servers, comma-separated

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1504686

Title:
  Keystone errors on token requests for users in recreated tenants when
  using memcache

Status in Keystone:
  Invalid

Bug description:
  With memcache set up for resource caching, when a tenant is created,
  deleted, and recreated with the same name, users within that project
  get intermittent errors when requesting tokens.

  You can recreate this by having memcache with resource caching
  enabled.  Then create a tenant, delete it, and then recreate it making
  sure the name is the same as the first one.  Then create a user in
  this tenant and continually request tokens.  It will gradually start
  generating tokens while also failing until the cache is cleaned out.

  I believe the intermittent errors we experienced were due to our
  environment having a memcache on each keystone node and having the
  keystone nodes behind a load balancer.

  As I ran this scenario, I was seeing more failures in the beginning
  and then it gradually started having more successes until a little
  after the cache expiration_time where I was seeing all successes.

  We investigated and when this error was originally hit it threw 404 or
  401s.  The 404s were complaining about not being able to find a
  certain project, but when I tried to recreate I was receiving all
  401s.

  The 404 errors led me to believe that this was due to memcache not
  marking cache entries as deleted.  Since, when running our tests we
  used the name of the project and it would auto resolve the id.  So the
  entry for the project name in the cache was conflicting with the entry
  in the database, but once the cache is expired it isn't an issue.

  So it seems that reusing names of projects causes problems with the
  resolution of the project id when memcache is enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1504686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483937] Re: version conflict encountered while running with stable/kilo branch

2015-10-16 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483937

Title:
  version conflict encountered while running with stable/kilo branch

Status in Glance:
  Invalid
Status in Glance kilo series:
  New

Bug description:
  I hit the following error while running unit test under glance stable/kilo 
branch.
  This is the command line I used: ./run_tests.sh -f -V

  This is the error information: 
  error: python-keystoneclient 1.3.2 is installed but 
python-keystoneclient>=1.6.0 is required by set(['python-cinderclient'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506842] [NEW] Glanceclient + SSL - Show warnings in console

2015-10-16 Thread Alexey Galkin
Public bug reported:

If we use glanceclient with SSL, the console displays a few "extra" warnings
like this:

/usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/lib/python2.7/dist-packages/urllib3/connection.py:251: SecurityWarning: 
Certificate has no `subjectAltName`, falling back to check for a `commonName` 
for now. This feature is being removed by major browsers and deprecated by RFC 
2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning

or that:

/usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:770: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)

Affected: python-glanceclient (and CLI).

Steps to reproduce:

1. Deploy openstack with enabling services in HTTPS mode (using TLS).
2. Try to use this command: glance image-list

Actual result: Displays a list of images with some warnings.

root@node-1:~# glance image-list
/usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/lib/python2.7/dist-packages/urllib3/connection.py:251: SecurityWarning: 
Certificate has no `subjectAltName`, falling back to check for a `commonName` 
for now. This feature is being removed by major browsers and deprecated by RFC 
2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
+--++
| ID   | Name   |
+--++
| 43c99677-94b4-4356-b3ee-cd3690f26fdc | TestVM |
+--++

Expected result: Displays a list of images without any warnings.

root@node-1:~# glance image-list

+--++
| ID   | Name   |
+--++
| 43c99677-94b4-4356-b3ee-cd3690f26fdc | TestVM |
+--++
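
As a client-side stop-gap - a sketch only, assuming the noise comes from
urllib3 rather than from glance itself - the warnings can be silenced
explicitly before invoking the client (the real fix is installing
pyOpenSSL/ndg-httpsclient and using certificates with a subjectAltName):

    import urllib3

    # suppress the two warning classes quoted above; this hides the
    # symptom only, it does not make the TLS setup any more secure
    urllib3.disable_warnings(urllib3.exceptions.InsecurePlatformWarning)
    urllib3.disable_warnings(urllib3.exceptions.SecurityWarning)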

** Affects: glance
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: New

** Affects: python-glanceclient
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: New


** Tags: cli

** Tags added: cli

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1506842

Title:
  Glanceclient + SSL - Show warnings in console

Status in Glance:
  New
Status in python-glanceclient:
  New

Bug description:
  If we use glanceclient with SSL, the console displays a few "extra" warnings
  like this:

  /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
  /usr/lib/python2.7/dist-packages/urllib3/connection.py:251: SecurityWarning: 
Certificate has no `subjectAltName`, falling back to check for a `commonName` 
for now. This feature is being removed by major browsers and deprecated by RFC 
2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
SecurityWarning

  or that:

  /usr/lib/python2.7/dist-packages/urllib3/util/ssl_.py:90: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
  /usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:770: 
InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification i

[Yahoo-eng-team] [Bug 1483937] Re: version conflict encountered while running with stable/kilo branch

2015-10-16 Thread Erno Kuvaja
Now the same is failing with:
error: pbr 0.11.0 is installed but pbr<2.0,>=1.6 is required by 
set(['python-cinderclient'])

** Changed in: glance
   Status: New => Triaged

** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483937

Title:
  version conflict encountered while running with stable/kilo branch

Status in Glance:
  Invalid

Bug description:
  I hit the following error while running unit test under glance stable/kilo 
branch.
  This is the command line I used: ./run_tests.sh -f -V

  This is the error information: 
  error: python-keystoneclient 1.3.2 is installed but 
python-keystoneclient>=1.6.0 is required by set(['python-cinderclient'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486373] Re: Glance Sample configurations should be sync'd with generator outputs

2015-10-16 Thread Erno Kuvaja
I'm really sorry,

I didn't check well enough and opened a new one for this when I did the
work for the Liberty release.

Commits:
fa30891cf659360207b71d9345666478d4554582
and
b7fb5bf0f89f657ead98024ee0168f2c2fa7a776

close this bug.

** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1486373

Title:
  Glance Sample configurations should be sync'd with generator outputs

Status in Glance:
  Fix Released

Bug description:
  Glance sample configurations should be sync'd.

  There are lots of changes in the configurations that have not been sync'd to
  the example files.

  Including (was):

  The show_multiple_locations configuration option should be included in
  the sample glance-api.conf that is shipped with glance packages.
  Otherwise it is not clear in which config section this option belongs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1486373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498233] Re: No error thrown while importing an image which does not have read permission

2015-10-16 Thread Erno Kuvaja
Even though this is an annoyance for the user, I must agree with Kairat. I don't
think this warrants an API change to v1.

** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1498233

Title:
  No error thrown while importing an image which does not have read
  permission

Status in Glance:
  Won't Fix

Bug description:
  Started devstack from master branch on git.

  I tried to create an image on horizon by  choosing "Image location" option 
for Image source and passed an URL to ova file.
  "Copy data" option was selected. There were no errors thrown after submitting 
the request.
  Also, after refreshing the page, I could not find the new image as well.

  Found the following exception on g-api.log. It turned out to be wrong
  file permission for the ova being imported. It did not have read
  permission (set to 600).

  2015-09-21 17:23:34.191 18326 DEBUG glance.common.client 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Constructed URL: 
http://10.161.71.96:9191/images/detail?sort_key=created_at&is_public=None&limit=21&sort_dir=desc
 _construct_url /opt/stack/glance/glance/common/client.py:402
  2015-09-21 17:23:34.216 18327 DEBUG glance.registry.client.v1.client [-] 
Registry request PUT /images/d13562fb-ffd7-40e9-9910-bb99fe751332 HTTP 200 
request id req-dfa0c604-8cd9-4f3f-9837-4c03192bdb9a do_request 
/opt/stack/glance/glance/registry/client/v1/client.py:128
  2015-09-21 17:23:34.219 18326 DEBUG glance.registry.client.v1.client 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] Registry request GET /images/detail 
HTTP 200 request id req-6edbf424-2bc9-472c-8cef-e9d12762a55e do_request 
/opt/stack/glance/glance/registry/client/v1/client.py:128
  2015-09-21 17:23:34.221 18326 INFO eventlet.wsgi.server 
[req-6edbf424-2bc9-472c-8cef-e9d12762a55e 86ba66edc4b24e639c37e4ce992d9384 
3d5d5d98dde249f08298210cb2e45866 - - -] 10.161.71.96 - - [21/Sep/2015 17:23:34] 
"GET 
/v1/images/detail?sort_key=created_at&sort_dir=desc&limit=21&is_public=None 
HTTP/1.1" 200 805 0.035794
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images [-] Copy from 
external source 'vsphere' failed for image: d13562fb-ffd7-40e9-9910-bb99fe751332
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images Traceback (most 
recent call last):
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/opt/stack/glance/glance/api/v1/images.py", line 619, in _upload
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images dest=store)
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/opt/stack/glance/glance/api/v1/images.py", line 471, in _get_from_store
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images image_data, 
image_size = src_store.get(loc, context=context)
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/capabilities.py", line 
226, in op_checker
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images return 
store_op_fun(store, *args, **kwargs)
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/http.py", line 
130, in get
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images conn, resp, 
content_length = self._query(location, 'GET')
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images   File 
"/usr/local/lib/python2.7/dist-packages/glance_store/_drivers/http.py", line 
196, in _query
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images raise 
exceptions.BadStoreUri(message=reason)
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images BadStoreUri: HTTP 
URL /gjayavelu/ovf/dsl-4-4-10.ova returned a 403 status code.
  2015-09-21 17:23:34.217 18327 ERROR glance.api.v1.images
  2015-09-21 17:23:34.230 18327 INFO glance.api.v1.images [-] Uploaded data of 
image d13562fb-ffd7-40e9-9910-bb99fe751332 from request payload successfully.

  It would be good to catch this exception and throw an error.

  Attached g-api.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1498233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499390] Re: Need to add a more useful tests for 'Unicode support shell client '

2015-10-16 Thread Erno Kuvaja
** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
   Status: In Progress => Fix Committed

** Changed in: python-glanceclient
Milestone: None => 1.2.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1499390

Title:
  Need to add a more useful tests for 'Unicode support shell client '

Status in python-glanceclient:
  Fix Committed

Bug description:
  We need more useful tests for 'Unicode support in the shell client', based
  on https://review.openstack.org/#/c/206037/ .

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1499390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506825] [NEW] Inside multi (Keystone) endpoint environment Horizon logs into incorrect region

2015-10-16 Thread Timur Sufiev
Public bug reported:

A. Consider a Horizon setup which knows about 2 Keystone endpoints
(setting AVAILABLE_REGIONS, I'm refraining from using it because it'll
change in future, see bug 1494251). And each of these Keystone endpoints
has 2 service regions within it, but these service regions are different,
for example RegionOne and RegionTwo in Keystone1 and RegionNorth and
RegionSouth in Keystone2. Currently the last service region selected is
stored in a cookie, which means that if the user first selects RegionSouth in
Keystone2, then signs out and logs into Keystone1 where he is by default
placed into RegionOne (effectively saving this new region in the cookie),
then, when he returns back to Keystone2, his RegionSouth choice is lost.

B. Another specific setup with a multi-endpoint Keystone is when, within
Keystone1, Region1 is Keystone1's own cloud and Region2 is the
resources of Keystone2's own cloud, and for Keystone2 the situation is the
same - Region1 is the foreign resources, Region2 is the local ones. In that
case most deployers would like to default to Region1 when logging into
the Keystone1 endpoint and default to Region2 when logging into the Keystone2
endpoint.

The proposed solution is to 
* make the default selection of a service region based on the endpoint the user 
is logging into (fixes B)
* save the last service region in a per-endpoint cookie (fixes A); a rough 
sketch follows
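
A rough illustration of the per-endpoint cookie idea (hypothetical names, not
Horizon's actual API):

    import hashlib

    def region_cookie_name(keystone_endpoint):
        # one cookie per Keystone endpoint, so the region chosen against one
        # endpoint never overwrites the choice made against another
        suffix = hashlib.sha1(
            keystone_endpoint.encode('utf-8')).hexdigest()[:8]
        return 'services_region_%s' % suffix

    def remember_region(response, keystone_endpoint, region):
        # 'response' is a Django HttpResponse
        response.set_cookie(region_cookie_name(keystone_endpoint), region)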

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1506825

Title:
  Inside multi (Keystone) endpoint environment Horizon logs into
  incorrect region

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A. Consider a Horizon setup which knows about 2 Keystone endpoints
  (setting AVAILABLE_REGIONS, I'm refraining from using it because it'll
  change in future, see bug 1494251). And each of these Keystone
  endpoints has 2 service regions within it, but these service regions are
  different, for example RegionOne and RegionTwo in Keystone1 and
  RegionNorth and RegionSouth in Keystone2. Currently the last service
  region selected is stored in a cookie, which means that if the user first
  selects RegionSouth in Keystone2, then signs out and logs into
  Keystone1 where he is by default placed into RegionOne (effectively
  saving this new region in the cookie), then, when he returns back to
  Keystone2, his RegionSouth choice is lost.

  B. Another specific setup with a multi-endpoint Keystone is when, within
  Keystone1, Region1 is Keystone1's own cloud and Region2 is the
  resources of Keystone2's own cloud, and for Keystone2 the situation is
  the same - Region1 is the foreign resources, Region2 is the local ones. In
  that case most deployers would like to default to Region1 when logging
  into the Keystone1 endpoint and default to Region2 when logging into the
  Keystone2 endpoint.

  The proposed solution is to 
  * make the default selection of a service region based on the endpoint the 
user is logging into (fixes B)
  * save the last service region in a per-endpoint cookie (fixes A)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1506825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506819] [NEW] TestHAL3Agent.test_ha_router failure

2015-10-16 Thread YAMAMOTO Takashi
Public bug reported:

It seems the neutron server is crashing with a None dereference.

http://logs.openstack.org/87/219187/9/check/gate-neutron-dsvm-
fullstack/9d63c76/logs/TestHAL3Agent.test_ha_router/neutron-server--
2015-10-15--22-49-11-883412.log.txt.gz

2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource 
[req-18955e90-cc49-4f25-addd-b04e4ef5f7f6 - - - - -] index failed
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/extensions/l3agentscheduler.py", line 104, in 
index
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource request.context, 
kwargs['router_id'])
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_hascheduler_db.py", line 82, in 
list_l3_agents_hosting_router
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource return 
self._get_agents_dict_for_router(bindings)
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_hascheduler_db.py", line 66, in 
_get_agents_dict_for_router
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource l3_agent_dict = 
self._make_agent_dict(agent)
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/agents_db.py", line 219, in _make_agent_dict
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource res = dict((k, 
agent[k]) for k in attr
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/agents_db.py", line 220, in 
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource if k not in 
['alive', 'configurations'])
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource TypeError: 
'NoneType' object has no attribute '__getitem__'
2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource
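
From the traceback, _get_agents_dict_for_router() receives a binding whose
agent is None. A defensive sketch (hypothetical - the real fix may instead
prevent the stale binding from being returned at all):

    def _get_agents_dict_for_router(self, agents_and_states):
        agents = []
        for agent, ha_state in agents_and_states:
            if agent is None:
                # skip bindings whose agent row has already disappeared
                # instead of crashing in _make_agent_dict()
                continue
            agent_dict = self._make_agent_dict(agent)
            agent_dict['ha_state'] = ha_state
            agents.append(agent_dict)
        return {'agents': agents}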

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506819

Title:
  TestHAL3Agent.test_ha_router failure

Status in neutron:
  New

Bug description:
  It seems the neutron server is crashing with a None dereference.

  http://logs.openstack.org/87/219187/9/check/gate-neutron-dsvm-
  fullstack/9d63c76/logs/TestHAL3Agent.test_ha_router/neutron-server--
  2015-10-15--22-49-11-883412.log.txt.gz

  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource 
[req-18955e90-cc49-4f25-addd-b04e4ef5f7f6 - - - - -] index failed
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/extensions/l3agentscheduler.py", line 104, in 
index
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource 
request.context, kwargs['router_id'])
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_hascheduler_db.py", line 82, in 
list_l3_agents_hosting_router
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource return 
self._get_agents_dict_for_router(bindings)
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_hascheduler_db.py", line 66, in 
_get_agents_dict_for_router
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource l3_agent_dict 
= self._make_agent_dict(agent)
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/agents_db.py", line 219, in _make_agent_dict
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource res = dict((k, 
agent[k]) for k in attr
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/agents_db.py", line 220, in 
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource if k not in 
['alive', 'configurations'])
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource TypeError: 
'NoneType' object has no attribute '__getitem__'
  2015-10-15 22:49:21.721 9550 ERROR neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1506818] [NEW] RetryRequest need a arg in _lock_subnetpool method

2015-10-16 Thread yalei wang
Public bug reported:

As defined in oslo.db, RetryRequest requires an argument (the exception that
triggered the retry), but the _lock_subnetpool method raises it without one.

https://github.com/openstack/oslo.db/blob/master/oslo_db/exception.py#L206
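
A minimal sketch of the expected usage is below; this is not the actual
_lock_subnetpool code, and the SubnetPoolInUse exception is an illustrative
stand-in.

from oslo_db import exception as db_exc

class SubnetPoolInUse(Exception):
    """Illustrative inner exception; the real code would use its own."""

def lock_or_retry(locked):
    if not locked:
        # RetryRequest.__init__ expects the exception that triggered the
        # retry, so wrap the original error instead of raising it bare
        # (raise db_exc.RetryRequest() fails with a TypeError).
        raise db_exc.RetryRequest(SubnetPoolInUse('row changed concurrently'))

try:
    lock_or_retry(locked=False)
except db_exc.RetryRequest as req:
    print('will retry; inner exception: %s' % req.inner_exc)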

** Affects: neutron
 Importance: Undecided
 Assignee: yalei wang (yalei-wang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yalei wang (yalei-wang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506818

Title:
  RetryRequest need a arg in _lock_subnetpool method

Status in neutron:
  New

Bug description:
  As defined in oslo.db, RetryRequest requires an argument (the exception
  that triggered the retry), but the _lock_subnetpool method raises it
  without one.

  https://github.com/openstack/oslo.db/blob/master/oslo_db/exception.py#L206

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506794] [NEW] VPNaaS: Active VPN connection goes down after controller shutdown/start

2015-10-16 Thread Elena Ezhova
Public bug reported:

Ubuntu 14.04 + OpenSwan 1:2.6.38-1

Environment with 3 controllers and 2 computes

Steps to reproduce:
1. Create a VPN connection between tenant1 and tenant2 and check that it is
active
2. Find a controller where one of the routers participating in the VPN
connection is scheduled (tenant1's router, for example)
3. Shut down this controller, wait some time and check that tenant1's router
is rescheduled successfully and the VPN connection is restored
4. Start the controller that was shut down and wait until it has completely
booted
5. Reschedule tenant1's router back to its original controller (the one that
was shut down and restarted), wait some time and check that tenant1's router
is rescheduled successfully and the VPN connection is restored

Actual result: tenant1's router is rescheduled and VMs can ping external
hosts, but the VPN connection goes to the DOWN state on tenant1's side, with
the following error in vpn-agent.log on the controller where tenant1's router
was rescheduled back in step 5: http://paste.openstack.org/show/476459/

Analysis:
Pluto processes run in the qrouter namespace (or the snat namespace in the
DVR case). When a controller is shut down, all namespaces get deleted (as
they are stored in tmpfs), but the pluto .pid and .ctl files remain, as they
are stored in /opt/stack/data/neutron/ipsec//var/run/.

Then, when the router is rescheduled back to the original controller, the vpn
agent attempts to start a pluto process, and pluto fails when it finds that a
.pid file already exists. Such behavior is determined by the flags used to
open this file [1],[2] and is most probably a defense against accidentally
overwriting the .pid file.
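
For illustration, a tiny Python sketch of the exclusive-create behaviour
described above (the actual pluto code referenced in [1],[2] is C; the file
name and flags here are stand-ins, not pluto's real implementation):

import errno
import os
import tempfile

pid_path = os.path.join(tempfile.mkdtemp(), 'pluto.pid')
flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL   # create only if absent

os.close(os.open(pid_path, flags, 0o644))      # first start: succeeds
try:
    os.open(pid_path, flags, 0o644)            # stale file present: fails
except OSError as e:
    assert e.errno == errno.EEXIST
    print('refusing to overwrite existing pid file: %s' % pid_path)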

As this is not a pluto bug, the solution might be to add a workaround to
VPNaaS that cleans up the .ctl and .pid files on start-up. Essentially, the
same approach was already used for the LibreSwan driver [3], so we just need
some refactoring to share it between the OpenSwan and LibreSwan drivers (a
minimal clean-up sketch follows the P.S. below).

[1] 
https://github.com/xelerance/Openswan/blob/master/programs/pluto/plutomain.c#L258-L259
[2] 
https://github.com/libreswan/libreswan/blob/master/programs/pluto/plutomain.c#L231-L232
[3] 
https://github.com/openstack/neutron-vpnaas/commit/00b633d284f0f21aa380fa47a270c612ebef0795

P.S.
Another way to reproduce this failure is to replace steps 3-5 with:
3. Send kill -9 to the pluto process on that controller
4. Remove tenant1's router from the agent running on that controller and then
schedule it back.
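
Below is a minimal sketch of the clean-up workaround suggested in the
Analysis above; the directory layout and function name are assumptions, not
the actual neutron-vpnaas change.

import os

def cleanup_stale_pluto_files(config_dir):
    """Best-effort removal of leftover pluto .pid/.ctl files under
    config_dir/var/run so a new pluto instance can start cleanly."""
    run_dir = os.path.join(config_dir, 'var', 'run')
    if not os.path.isdir(run_dir):
        return
    for name in os.listdir(run_dir):
        if name.endswith(('.pid', '.ctl')):
            try:
                os.remove(os.path.join(run_dir, name))
            except OSError:
                # A missing or busy file should not block the driver from
                # (re)starting pluto for the rescheduled router.
                pass

# Example: call this before spawning pluto for a rescheduled router; the
# path below is illustrative only.
cleanup_stale_pluto_files('/opt/stack/data/neutron/ipsec/router-id')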

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506794

Title:
  VPNaaS: Active VPN connection goes down after controller
  shutdown/start

Status in neutron:
  New

Bug description:
  Ubuntu 14.04 + OpenSwan 1:2.6.38-1

  Environment with 3 controllers and 2 computes

  Steps to reproduce:
  1. Create a VPN connection between tenant1 and tenant2 and check that it is
active
  2. Find a controller where one of the routers participating in the VPN
connection is scheduled (tenant1's router, for example)
  3. Shut down this controller, wait some time and check that tenant1's router
is rescheduled successfully and the VPN connection is restored
  4. Start the controller that was shut down and wait until it has completely
booted
  5. Reschedule tenant1's router back to its original controller (the one that
was shut down and restarted), wait some time and check that tenant1's router
is rescheduled successfully and the VPN connection is restored

  Actual result: tenant1's router is rescheduled and VMs can ping external
  hosts, but the VPN connection goes to the DOWN state on tenant1's side,
  with the following error in vpn-agent.log on the controller where
  tenant1's router was rescheduled back in step 5:
  http://paste.openstack.org/show/476459/

  Analysis:
  Pluto processes run in the qrouter namespace (or the snat namespace in the
DVR case). When a controller is shut down, all namespaces get deleted (as they
are stored in tmpfs), but the pluto .pid and .ctl files remain, as they are
stored in /opt/stack/data/neutron/ipsec//var/run/.

  Then, when the router is rescheduled back to the original controller, the
  vpn agent attempts to start a pluto process, and pluto fails when it finds
  that a .pid file already exists. Such behavior is determined by the flags
  used to open this file [1],[2] and is most probably a defense against
  accidentally overwriting the .pid file.

  As this is not a pluto bug, the solution might be to add a workaround to
VPNaaS that cleans up the .ctl and .pid files on start-up.
  Essentially, the same approach was already used for the LibreSwan driver
[3], so we just need some refactoring to share it between the OpenSwan and
LibreSwan drivers.

  [1] 
https://github.com/xelerance/Openswan/blob/master/programs/pluto/plutomain.c#L258-L259
  [2] 
https://github.com/libreswan/libreswan/blob/master/programs/pluto/plutomain.c#L231-L232

[Yahoo-eng-team] [Bug 1506786] [NEW] Incorrect name of 'tag' and 'tag-any' filters

2015-10-16 Thread Sergey Nikitin
Public bug reported:

According to the spec http://specs.openstack.org/openstack/nova-
specs/specs/mitaka/approved/tag-instances.html these filters should be
named 'tags' and 'tags-any'.
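
A hedged illustration of how the filter names requested by the spec would
appear on a list-servers request; the endpoint and tag values are
placeholders, and the all-vs-any semantics in the comments follow the spec.

from urllib.parse import urlencode

BASE = 'http://nova.example.test/v2.1/servers/detail'

# 'tags'     -> servers that have ALL of the listed tags
# 'tags-any' -> servers that have ANY of the listed tags
print(BASE + '?' + urlencode({'tags': 'red,blue'}))
print(BASE + '?' + urlencode({'tags-any': 'red,blue'}))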

** Affects: nova
 Importance: Low
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506786

Title:
  Incorrect name of 'tag' and 'tag-any' filters

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  According to the spec http://specs.openstack.org/openstack/nova-
  specs/specs/mitaka/approved/tag-instances.html these filters should be
  named 'tags' and 'tags-any'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505476] Re: when live-migrate failed, remove_volume_connection function accept incorrect arguments order in kilo

2015-10-16 Thread jingtao liang
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505476

Title:
  when live-migrate failed,remove_volume_connection function  accept
  incorrect arguments order  in kilo

Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack Version : kilo 2015.1.0

  Reproduce steps:

  Please see the code in openstack/nova/nova/compute/manager.py:

  def _rollback_live_migration(self, context, instance, dest,
                               block_migration, migrate_data=None):

  ..
  for bdm in bdms:
      if bdm.is_volume:
          self.compute_rpcapi.remove_volume_connection(
              context, instance, bdm.volume_id, dest)
  ..
   
  Actual result:

  def remove_volume_connection(self, context, volume_id, instance):
  ..
  ..

  Expected result:

  def remove_volume_connection(self, context, instance, volume_id):

  
  Please check this bug, thanks.
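
  A hedged toy illustration of the argument-order mismatch described above,
  using stand-in functions rather than the real nova RPC API:

  def remove_volume_connection(context, volume_id, instance):
      # Callee expects (context, volume_id, instance) ...
      return {'volume_id': volume_id, 'instance': instance}

  # ... but the caller passes (context, instance, volume_id), so the two
  # values are silently swapped instead of raising an error:
  result = remove_volume_connection('ctx', 'instance-uuid', 'volume-uuid')
  print(result)  # {'volume_id': 'instance-uuid', 'instance': 'volume-uuid'}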

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506760] [NEW] fwaas tempest tests should be moved to fwaas repo

2015-10-16 Thread YAMAMOTO Takashi
Public bug reported:

The fwaas tempest tests should be moved to the fwaas repo.

discussion: http://lists.openstack.org/pipermail/openstack-
dev/2015-October/077107.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506760

Title:
  fwaas tempest tests should be moved to fwaas repo

Status in neutron:
  New

Bug description:
  The fwaas tempest tests should be moved to the fwaas repo.

  discussion: http://lists.openstack.org/pipermail/openstack-
  dev/2015-October/077107.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp