[Yahoo-eng-team] [Bug 1398656] Re: ceilometer import oslo.concurrency failed issue

2014-12-05 Thread ZhiQiang Fan
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Changed in: nova
   Status: New => In Progress

** Changed in: keystone
   Status: New => In Progress

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Changed in: cinder
   Status: New => In Progress

** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Changed in: designate
   Status: New => In Progress

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
   Status: New => In Progress

** Changed in: ironic
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398656

Title:
  ceilometer import oslo.concurrency failed issue

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  During a ceilometer review, Jenkins failed with the following
  message:

  2014-12-03 01:28:04.969 | pep8 runtests: PYTHONHASHSEED='0'
  2014-12-03 01:28:04.969 | pep8 runtests: commands[0] | flake8
  2014-12-03 01:28:04.970 |   /home/jenkins/workspace/gate-ceilometer-pep8$ 
/home/jenkins/workspace/gate-ceilometer-pep8/.tox/pep8/bin/flake8 
  2014-12-03 01:28:21.508 | ./ceilometer/utils.py:30:1: H302  import only 
modules.'from oslo.concurrency import processutils' does not import a module
  2014-12-03 01:28:21.508 | from oslo.concurrency import processutils
  2014-12-03 01:28:21.508 | ^
  2014-12-03 01:28:21.508 | ./ceilometer/ipmi/platform/ipmitool.py:19:1: H302  
import only modules.'from oslo.concurrency import processutils' does not import 
a module
  2014-12-03 01:28:21.508 | from oslo.concurrency import processutils
  2014-12-03 01:28:21.508 | ^
  2014-12-03 01:28:21.696 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-ceilometer-pep8/.tox/pep8/bin/flake8'
  2014-12-03 01:28:21.697 | pep8 runtests: commands[1] | flake8 
--filename=ceilometer-* bin

  
  It seems that, since the module now lives at
  https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/processutils.py,
  the import should change to:

  from oslo_concurrency import processutils
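
  As a minimal sketch of the fix (assuming it lands in ceilometer/utils.py and
  ceilometer/ipmi/platform/ipmitool.py, the two files flagged above), only the
  import line changes; the processutils API itself stays the same:

      # Old import, flagged by the H302 hacking check ("import only modules"):
      # from oslo.concurrency import processutils

      # New import against the relocated oslo_concurrency package:
      from oslo_concurrency import processutils

      # Usage is unchanged, e.g.:
      out, err = processutils.execute('echo', 'hello')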

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1398656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377374] Re: test_launch_instance_exception_on_flavors fails

2014-12-05 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1377374

Title:
  test_launch_instance_exception_on_flavors fails

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When building Horizon Juno RC1, I have the below Python stack dump. If
  you need more details on how to reproduce this in Debian Sid, please
  let me know, and I'll explain, though the package is currently in
  Debian Experimental, so just doing dpkg-buildpackage using it should
  be enough.

  ==
  FAIL: test_launch_instance_exception_on_flavors 
(openstack_dashboard.dashboards.project.databases.tests.DatabaseTests)
  --
  Traceback (most recent call last):
File 
"/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/test/helpers.py",
 line 80, in instance_stub_out
  return fn(self, *args, **kwargs)
File 
"/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~rc1/openstack_dashboard/dashboards/project/databases/tests.py",
 line 159, in test_launch_instance_exception_
  self.client.get(LAUNCH_URL)
  AssertionError: Http302 not raised

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1377374/+subscriptions



[Yahoo-eng-team] [Bug 1399857] [NEW] endpoint_policy has typo in delete

2014-12-05 Thread Adam Young
Public bug reported:

When the endpoint_policy extension is activated and a policy file is then
deleted, the following error is returned:

keystoneclient.openstack.common.apiclient.exceptions.InternalServerError:
An unexpected error prevented the server from fulfilling your request:
'EndpointPolicy' object has no attribute 'delete_association_by_polcy'
(Disable debug mode to suppress these details.) (HTTP 500)


There is a typo in the controller that can be fixed by this change.

diff --git a/keystone/contrib/endpoint_policy/controllers.py 
b/keystone/contrib/endpoint_policy/controllers.py
index c1533f7..569fe9b 100644
--- a/keystone/contrib/endpoint_policy/controllers.py
+++ b/keystone/contrib/endpoint_policy/controllers.py
@@ -46,7 +46,7 @@ class EndpointPolicyV3Controller(controller.V3Controller):
             payload['resource_info'])
 
     def _on_policy_delete(self, service, resource_type, operation, payload):
-        self.endpoint_policy_api.delete_association_by_polcy(
+        self.endpoint_policy_api.delete_association_by_policy(
             payload['resource_info'])

** Affects: keystone
 Importance: Undecided
 Assignee: Adam Young (ayoung)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1399857

Title:
  endpoint_policy has typo in delete

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When the endpoint_policy extension is activated and a policy file is then
  deleted, the following error is returned:

  keystoneclient.openstack.common.apiclient.exceptions.InternalServerError:
  An unexpected error prevented the server from fulfilling your request:
  'EndpointPolicy' object has no attribute 'delete_association_by_polcy'
  (Disable debug mode to suppress these details.) (HTTP 500)

  
  There is a typo in the controller that can be fixed by this change.

  diff --git a/keystone/contrib/endpoint_policy/controllers.py 
b/keystone/contrib/endpoint_policy/controllers.py
  index c1533f7..569fe9b 100644
  --- a/keystone/contrib/endpoint_policy/controllers.py
  +++ b/keystone/contrib/endpoint_policy/controllers.py
  @@ -46,7 +46,7 @@ class EndpointPolicyV3Controller(controller.V3Controller):
               payload['resource_info'])
   
       def _on_policy_delete(self, service, resource_type, operation, payload):
  -        self.endpoint_policy_api.delete_association_by_polcy(
  +        self.endpoint_policy_api.delete_association_by_policy(
               payload['resource_info'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1399857/+subscriptions



[Yahoo-eng-team] [Bug 1399851] [NEW] Glance should return 206 when fulfilling a request using the Range header

2014-12-05 Thread Ian Cordasco
Public bug reported:

When downloading a specific image from glance, a user can specify a
Range to download. If the server successfully fulfills that request it
should return a 206 (Partial Content) response described in RFC 7233
[1]. Currently, the v2 controller [2]  does not properly set the 206
response code for a partial response.

[1]: https://tools.ietf.org/html/rfc7233#section-4 
[2]: 
https://github.com/openstack/glance/blob/038cf5403e5b7b3620884db6d7534541b5515eac/glance/api/v2/image_data.py#L196..L228
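
As a hedged illustration (not the actual Glance controller code), a webob-based
handler could set the 206 status along these lines:

    import webob
    import webob.exc

    def download(request, image_iter, image_size):
        # Hypothetical handler; names and structure are illustrative only.
        response = webob.Response()
        if request.range is not None:
            content_range = request.range.content_range(image_size)
            if content_range is None:
                # The requested range cannot be satisfied for this image size.
                raise webob.exc.HTTPRequestRangeNotSatisfiable()
            response.status_int = 206  # Partial Content, per RFC 7233
            response.content_range = content_range
        else:
            response.status_int = 200
        response.app_iter = image_iter
        return response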

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1399851

Title:
  Glance should return 206 when fulfilling a request using the Range
  header

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When downloading a specific image from glance, a user can specify a
  Range to download. If the server successfully fulfills that request it
  should return a 206 (Partial Content) response described in RFC 7233
  [1]. Currently, the v2 controller [2]  does not properly set the 206
  response code for a partial response.

  [1]: https://tools.ietf.org/html/rfc7233#section-4 
  [2]: 
https://github.com/openstack/glance/blob/038cf5403e5b7b3620884db6d7534541b5515eac/glance/api/v2/image_data.py#L196..L228

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1399851/+subscriptions



[Yahoo-eng-team] [Bug 1398128] Re: ironic tempest tests periodically failing: No valid host was found

2014-12-05 Thread Adam Gandelman
OK, so we did some more digging here.  Devananda caught that the host's
SSH credentials used to access the local libvirt are created after the nodes
are enrolled.  Ironic can't validate the power state of the nodes until it can
connect to libvirt, and Nova won't take a node's resources into account until
its power state has been validated, causing a delay before nodes become
schedulable.

** Changed in: devstack
   Status: Fix Released => Confirmed

** Changed in: ironic
   Status: New => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398128

Title:
  ironic tempest tests periodically failing: No valid host was found

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This was noticed on the stable/juno ironic sideways grenade jobs, but
  is also confirmed to be happening on the check-tempest-dsvm-ironic-
  parallel-nv job, which runs a similarly configured tempest run against
  Ironic:

  http://logs.openstack.org/84/137684/1/check/check-grenade-dsvm-ironic-
  sideways/6d118bc/

  A number of the early compute tests will fail to spawn an instance,
  getting a scheduling error on the client side:

  BuildErrorException: Server %(server_id)s failed to build and is in ERROR 
status
  Details: Server eb81ee40-ceba-484d-b665-92ec3bf4fedd failed to build and is 
in ERROR status
  Details: {u'message': u'No valid host was found. ', u'created': 
u'2014-11-27T17:44:05Z', u'code': 500}

  Looking through the nova logs, the request never even makes it to the
  nova-scheduler.  The last error is reported in conductor:

  2014-11-27 17:44:01.005 WARNING nova.scheduler.driver [req-a3c046e5
  -66db-4bca-a6f8-2263763e49a6 SecurityGroupsTestJSON-2119055496
  SecurityGroupsTestJSON-1381566740] [instance: 9008811a-f400-42ae-
  98d5-caf828fa34dc] NoValidHost exception with message: 'No valid host
  was found.'

  Looking at the time stamps of the requests, the first instance is
  requested at 17:44:00

  2014-11-27 17:44:00.944 24730 DEBUG tempest.common.rest_client [req-
  a3c046e5-66db-4bca-a6f8-2263763e49a6 None] Request
  (SecurityGroupsTestJSON:test_server_security_groups): 202 POST
  http://127.0.0.1:8774/v2/adf4838f0d15462da4601a5d853eafbf/servers
  0.515s

  However, on the nova-compute side, the resource tracker has not been
  updated to include the enlisted Ironic nodes until much later.  The
  first time the tracker contains any of the Ironic resources is at
  17:44:06:

  2014-11-27 17:44:06.224 21645 AUDIT nova.compute.resource_tracker [-]
  Total physical ram (MB): 512, total allocated virtual ram (MB): 0

  So there's a race between the resource tracker's initial inclusion of
  available resources and Tempest running the first set of tests that
  require an instance.   This can be worked around in a couple of ways:

  * Adjust the periodic task interval on nova-compute to update much more
frequently, though this will just narrow the window.
  * Have tempest run an admin 'nova hypervisor-stats' call on the client side
and wait for resources before running any instances (in the baremetal case
only; see the sketch below).
  * Adjust devstack's nova cpu deployment to spin until hypervisor-stats
reflect the ironic node parameters.
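
  A rough sketch of the second workaround (all names, credentials and
  thresholds here are placeholders, not an actual tempest/devstack change):
  poll the admin hypervisor statistics until the enrolled Ironic nodes show up
  as capacity before booting any instances.

      import time

      from novaclient import client as nova_client

      def wait_for_baremetal_resources(nova, min_vcpus=1, timeout=300,
                                       interval=10):
          # Block until the resource tracker reports usable capacity.
          deadline = time.time() + timeout
          while time.time() < deadline:
              stats = nova.hypervisor_stats.statistics()
              if stats.vcpus >= min_vcpus and stats.memory_mb > 0:
                  return stats
              time.sleep(interval)
          raise RuntimeError('timed out waiting for Ironic node resources')

      nova = nova_client.Client('2', 'admin', 'password', 'admin',
                                'http://127.0.0.1:5000/v2.0')
      wait_for_baremetal_resources(nova)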

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1398128/+subscriptions



[Yahoo-eng-team] [Bug 1399840] [NEW] Make template headers more DRY

2014-12-05 Thread Lin Hua Cheng
Public bug reported:


The bug is for tracking follow-up work related to the discussion in  
https://review.openstack.org/#/c/136056/

Some work:
Revisit how the page title is injected in the HTML template.
Figure out how the page header template can be more DRY, maybe move it from the 
template into the views class?

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399840

Title:
  Make template headers more DRY

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  The bug is for tracking follow-up work related to the discussion in  
https://review.openstack.org/#/c/136056/

  Some work:
  Revisit how the page title is injected in the HTML template.
  Figure out how the page header template can be more DRY, maybe move it from 
the template into the views class?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399840/+subscriptions



[Yahoo-eng-team] [Bug 1399830] [NEW] Power sync periodic task does $node_count API calls for Ironic driver

2014-12-05 Thread Jim Rollenhagen
Public bug reported:

The power sync periodic task calls driver.get_info() for each instance
in the database. This is typically fine; however in the Ironic driver,
get_info() is an API call. We should bring this down to one API call.
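
A hedged sketch of one way to do that (illustrative only, not the actual Nova
Ironic driver code): list all nodes once per periodic task and answer the
per-instance lookups from that cache.

    from ironicclient import client as ironic_client

    # Placeholder credentials/endpoint for illustration.
    ironic = ironic_client.get_client(1, os_auth_token='TOKEN',
                                      ironic_url='http://127.0.0.1:6385')

    class CachedNodeStates(object):
        def __init__(self, client):
            self.client = client
            self._nodes_by_instance = {}

        def refresh(self):
            # One API call for the whole periodic task.
            nodes = self.client.node.list(detail=True)
            self._nodes_by_instance = dict(
                (node.instance_uuid, node)
                for node in nodes if node.instance_uuid)

        def get_power_state(self, instance_uuid):
            node = self._nodes_by_instance.get(instance_uuid)
            return getattr(node, 'power_state', None)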

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399830

Title:
  Power sync periodic task does $node_count API calls for Ironic driver

Status in OpenStack Compute (Nova):
  New

Bug description:
  The power sync periodic task calls driver.get_info() for each instance
  in the database. This is typically fine; however in the Ironic driver,
  get_info() is an API call. We should bring this down to one API call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399830/+subscriptions



[Yahoo-eng-team] [Bug 1376615] Re: Libvirt volumes are copied in case of block live migration

2014-12-05 Thread melanie witt
Based on the comments in the proposed patch:

https://review.openstack.org/125616

addressing the root cause of this issue requires an enhancement in libvirt
before anything can be done about it in nova. So, marking this as
Opinion/Wishlist, as an enhancement that can be addressed once libvirt is
enhanced.

** Changed in: nova
   Importance: Low => Wishlist

** Changed in: nova
   Status: New => Opinion

** Tags added: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376615

Title:
  Libvirt volumes are copied in case of block live migration

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When doing a block live migration with volumes attached, the libvirt
  drive-mirror operation tries to sync all the disks attached to the
  domain, even if they are external volumes. It should only do so for
  the ephemeral storage, so we need to be able to pass that information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376615/+subscriptions



[Yahoo-eng-team] [Bug 1399817] [NEW] 403 Forbidden - Timestamp failed validation in EC2 unit tests

2014-12-05 Thread Johannes Erdfelt
Public bug reported:

EC2 unit tests can sometimes fail:

Traceback (most recent call last):
  File "/home/johannes/openstack/nova/nova/tests/unit/api/ec2/test_api.py", 
line 287, in test_xmlns_version_matches_request_version
self.ec2.get_all_instances()
  File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 586, in get_all_instances
max_results=max_results)
  File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 682, in get_all_reservations
[('item', Reservation)], verb='POST')
  File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/connection.py",
 line 1182, in get_list
raise self.ResponseError(response.status, response.reason, body)
EC2ResponseError: EC2ResponseError: 403 Forbidden

 
  403 Forbidden
 
 
  403 Forbidden
  Timestamp failed validation.


 


It can happen in one of a number of EC2 unit tests. My latest test run
failed in these tests:

nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_authorize_revoke_security_group_cidr
nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_create_delete_security_group
nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_xmlns_version_matches_request_version

I've seen it in other EC2 related test cases too. Usually running again
will produce a failure in a different test case or none.

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399817

Title:
  403 Forbidden - Timestamp failed validation in EC2 unit tests

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  EC2 unit tests can sometimes fail:

  Traceback (most recent call last):
File "/home/johannes/openstack/nova/nova/tests/unit/api/ec2/test_api.py", 
line 287, in test_xmlns_version_matches_request_version
  self.ec2.get_all_instances()
File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 586, in get_all_instances
  max_results=max_results)
File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/ec2/connection.py",
 line 682, in get_all_reservations
  [('item', Reservation)], verb='POST')
File 
"/home/johannes/virtualenvs/migrations/local/lib/python2.7/site-packages/boto/connection.py",
 line 1182, in get_list
  raise self.ResponseError(response.status, response.reason, body)
  EC2ResponseError: EC2ResponseError: 403 Forbidden
  
   
403 Forbidden
   
   
403 Forbidden
Timestamp failed validation.


   
  

  It can happen in one of a number of EC2 unit tests. My latest test run
  failed in these tests:

  
nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_authorize_revoke_security_group_cidr
  
nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_create_delete_security_group
  
nova.tests.unit.api.ec2.test_api.ApiEc2TestCase.test_xmlns_version_matches_request_version

  I've seen it in other EC2 related test cases too. Usually running
  again will produce a failure in a different test case or none.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399817/+subscriptions



[Yahoo-eng-team] [Bug 1399815] [NEW] server group policy not honored for targeted migrations

2014-12-05 Thread Jennifer Mulsow
Public bug reported:

This was observed in the Juno release.

Because targeted live and cold migrations do not go through the
scheduler for policy-based decision making, a VM could be migrated to a
host that would violate the policy of the server-group.

If a VM belongs to a server group, the group policy will need to be checked in
the compute manager at the time of migration (a minimal check is sketched
below) to ensure that:
1. VMs in a server group with an affinity rule can't be migrated.
2. VMs in a server group with an anti-affinity rule don't move to a host that
would violate the rule.
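
A minimal sketch of such a check (a hypothetical helper, not actual Nova code),
assuming the compute manager already knows the group policy and the hosts of
the other group members:

    def validate_migration_destination(group_policy, member_hosts, destination):
        # group_policy: 'affinity' or 'anti-affinity'
        # member_hosts: hosts currently running other members of the group
        # destination:  host requested for the targeted migration
        if group_policy == 'affinity' and member_hosts:
            # Members must stay together, so only the group's host is allowed.
            if destination not in member_hosts:
                raise ValueError('affinity policy forbids migrating away '
                                 'from the group host')
        elif group_policy == 'anti-affinity':
            # No two members may share a host.
            if destination in member_hosts:
                raise ValueError('anti-affinity policy forbids destination '
                                 'host %s' % destination)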

** Affects: nova
 Importance: Undecided
 Assignee: Jennifer Mulsow (jmulsow)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jennifer Mulsow (jmulsow)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399815

Title:
  server group policy not honored for targeted migrations

Status in OpenStack Compute (Nova):
  New

Bug description:
  This was observed in the Juno release.

  Because targeted live and cold migrations do not go through the
  scheduler for policy-based decision making, a VM could be migrated to
  a host that would violate the policy of the server-group.

  If a VM belongs to a server group, the group policy will need to be checked 
in the compute manager at the time of migration to ensure that:
  1. VMs in a server group with affinity rule can't be migrated.
  2. VMs in a server group with anti-affinity rule don't move to a host that 
would violate the rule.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399815/+subscriptions



[Yahoo-eng-team] [Bug 1398128] Re: ironic tempest tests periodically failing: No valid host was found

2014-12-05 Thread Adam Gandelman
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398128

Title:
  ironic tempest tests periodically failing: No valid host was found

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  This was noticed on the stable/juno ironic sideways grenade jobs, but
  is also confirmed to be happening on the check-tempest-dsvm-ironic-
  parallel-nv job, which runs a similarly configured tempest run against
  Ironic:

  http://logs.openstack.org/84/137684/1/check/check-grenade-dsvm-ironic-
  sideways/6d118bc/

  A number of the early compute tests will fail to spawn an instance,
  getting a scheduling error on the client side:

  BuildErrorException: Server %(server_id)s failed to build and is in ERROR 
status
  Details: Server eb81ee40-ceba-484d-b665-92ec3bf4fedd failed to build and is 
in ERROR status
  Details: {u'message': u'No valid host was found. ', u'created': 
u'2014-11-27T17:44:05Z', u'code': 500}

  Looking through the nova logs, the request never even makes it to the
  nova-scheduler.  The last error is reported in conductor:

  2014-11-27 17:44:01.005 WARNING nova.scheduler.driver [req-a3c046e5
  -66db-4bca-a6f8-2263763e49a6 SecurityGroupsTestJSON-2119055496
  SecurityGroupsTestJSON-1381566740] [instance: 9008811a-f400-42ae-
  98d5-caf828fa34dc] NoValidHost exception with message: 'No valid host
  was found.'

  Looking at the time stamps of the requests, the first instance is
  requested at 17:44:00

  2014-11-27 17:44:00.944 24730 DEBUG tempest.common.rest_client [req-
  a3c046e5-66db-4bca-a6f8-2263763e49a6 None] Request
  (SecurityGroupsTestJSON:test_server_security_groups): 202 POST
  http://127.0.0.1:8774/v2/adf4838f0d15462da4601a5d853eafbf/servers
  0.515s

  However, on the nova-compute side, the resource tracker has not been
  updated to include the enlisted Ironic nodes until much later.  The
  first time the tracker contains any of the Ironic resources is at
  17:44:06:

  2014-11-27 17:44:06.224 21645 AUDIT nova.compute.resource_tracker [-]
  Total physical ram (MB): 512, total allocated virtual ram (MB): 0

  So there's a race between the resource tracker's initial inclusion of
  available resources and Tempest running the first set of tests that
  require an instance.   This can be worked around in a couple of ways:

  * Adjust the periodic task interval on nova-compute to update much more
frequently, though this will just narrow the window.
  * Have tempest run an admin 'nova hypervisor-stats' call on the client side
and wait for resources before running any instances (in the baremetal case
only).
  * Adjust devstack's nova cpu deployment to spin until hypervisor-stats
reflect the ironic node parameters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1398128/+subscriptions



[Yahoo-eng-team] [Bug 1399804] [NEW] fixture module from oslo-incubator should be dropped

2014-12-05 Thread Ihar Hrachyshka
Public bug reported:

Now that we're able to migrate to oslo.concurrency, the only remaining
usage of any fixture from the incubator is the config fixture, which is also
available from oslo.config itself. So we are free to drop yet another
unused module from the neutron tree.
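
For reference, a hedged sketch of a test using the fixture shipped with
oslo.config itself instead of the incubator copy (assuming a release that
exposes the oslo_config package name; older releases used the oslo.config
namespace):

    import testtools

    from oslo_config import cfg
    from oslo_config import fixture as config_fixture

    CONF = cfg.CONF
    CONF.register_opt(cfg.BoolOpt('my_flag', default=False))

    class ConfigFixtureTestCase(testtools.TestCase):
        def setUp(self):
            super(ConfigFixtureTestCase, self).setUp()
            # The fixture restores any overridden options on cleanup.
            self.cfg = self.useFixture(config_fixture.Config(CONF))

        def test_override(self):
            self.cfg.config(my_flag=True)
            self.assertTrue(CONF.my_flag)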

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399804

Title:
  fixture module from oslo-incubator should be dropped

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now that we're able to migrate to oslo.concurrency, the only remaining
  usage of any fixture from the incubator is the config fixture, which is also
  available from oslo.config itself. So we are free to drop yet another
  unused module from the neutron tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399804/+subscriptions



[Yahoo-eng-team] [Bug 1399768] Re: migration for endpoint_filter fails due to foreign key constraint

2014-12-05 Thread Adam Young
Looks like I had an old .pyc file causing this

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1399768

Title:
  migration for endpoint_filter fails due to foreign key constraint

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
   keystone-manage db_sync --extension endpoint_filter 2

  fails with

  
  2014-12-05 13:54:39.295 11241 TRACE keystone OperationalError: 
(OperationalError) (1005, "Can't create table 'keystone.project_endpoint_group' 
(errno: 150)") '\nCREATE TABLE project_endpoint_group (\n\tendpoint_group_id 
VARCHAR(64) NOT NULL, \n\tproject_id VARCHAR(64) NOT NULL, \n\tPRIMARY KEY 
(endpoint_group_id, project_id), \n\tFOREIGN KEY(endpoint_group_id) REFERENCES 
endpoint_group (id)\n)\n\n' ()


  Migration 1 fails executing the below sql.

  
  CREATE TABLE project_endpoint_group (endpoint_group_id VARCHAR(64) NOT NULL, 
project_id VARCHAR(64) NOT NULL, PRIMARY KEY (endpoint_group_id, project_id), 
FOREIGN KEY(endpoint_group_id) REFERENCES endpoint_group (id));
  ERROR 1005 (HY000): Can't create table 'keystone.project_endpoint_group' 
(errno: 150)

  Removing the clause FOREIGN KEY(endpoint_group_id) REFERENCES
  endpoint_group (id) makes it work.

  This is on Fedora 20 with the MariaDB flavor of MySQL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1399768/+subscriptions



[Yahoo-eng-team] [Bug 1399788] [NEW] neutron doesn't log tenant_id and user_id along side req-id in logs

2014-12-05 Thread Joe Gordon
Public bug reported:

neutron logs:  [req-94a39f87-e470-4032-82af-9a6b429b60fa None]
while nova logs: [req-c0b4dfb9-8af3-40eb-b0dd-7b576cfd1d55 
AggregatesAdminTestJSON-917687995 AggregatesAdminTestJSON-394398414]


Nova uses the format: #logging_context_format_string=%(asctime)s.%(msecs)03d 
%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] 
%(instance)s%(message)s

Without knowing the user and tenant, it's hard to understand what the logs
mean when multiple tenants are using the cloud.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399788

Title:
  neutron doesn't log tenant_id and user_id along side req-id in logs

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  neutron logs:  [req-94a39f87-e470-4032-82af-9a6b429b60fa None]
  while nova logs: [req-c0b4dfb9-8af3-40eb-b0dd-7b576cfd1d55 
AggregatesAdminTestJSON-917687995 AggregatesAdminTestJSON-394398414]

  
  Nova uses the format: #logging_context_format_string=%(asctime)s.%(msecs)03d 
%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] 
%(instance)s%(message)s

  Without knowing the user and tenant, it's hard to understand what the
  logs mean when multiple tenants are using the cloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399788/+subscriptions



[Yahoo-eng-team] [Bug 1399782] [NEW] Python glance-client image-create validation error

2014-12-05 Thread Diogo Monteiro
Public bug reported:

When using the python-glanceclient to create an image, the schema
validator fails when validating the locations object.

Based on the format provided in the image schema:
{
    "properties": {
        "locations": {
            "items": {
                "required": ["url", "metadata"],
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "maxLength": 255
                    },
                    "metadata": {
                        "type": "object"
                    }
                }
            },
            "type": "array",
            "description": "A set of URLs to access the image file kept in external store"
        }
    }
}

The locations attribute is an array of objects containing two attributes, url
and metadata, e.g.:

locations: [
    {
        url: 'image.url',
        metadata: {}
    }
]
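
For comparison, a hedged sketch of supplying a location in that object form
through the python-glanceclient v2 library (endpoint, token and image values
are placeholders, and the location calls also need the operator to allow
location access in glance-api.conf, e.g. show_multiple_locations = True):

    from glanceclient import Client

    glance = Client('2', endpoint='http://controller:9292', token='AUTH_TOKEN')

    image = glance.images.create(name='lucid-i386',
                                 disk_format='qcow2',
                                 container_format='bare')

    # add_location() sends the {"url": ..., "metadata": {}} object the schema
    # expects, rather than a raw string.
    glance.images.add_location(
        image['id'],
        'https://cloud-images.ubuntu.com/lucid/current/'
        'lucid-server-cloudimg-i386-disk1.img',
        {})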

However, when trying to set an image location via the CLI, the following
validation error is raised:

glance --debug --os-image-api-version 2 image-create --locations
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img"

Failed validating 'type' in schema['properties']['locations']['items']:
{'properties': {'metadata': {'type': 'object'},
'url': {'maxLength': 255, 'type': 'string'}},
 'required': ['url', 'metadata'],
 'type': 'object'}

On instance['locations'][0]:

'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py",
 line 154, in create
setattr(image, key, value)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 75, in __setattr__
self.__setitem__(key, value)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 50, in __setitem__
raise exceptions.InvalidOperation(msg)
warlock.exceptions.InvalidOperation: Unable to set 'locations' to 
'['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'.
 Reason: 
'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'
 is not of type 'object'

Failed validating 'type' in schema['properties']['locations']['items']:
{'properties': {'metadata': {'type': 'object'},
'url': {'maxLength': 255, 'type': 'string'}},
 'required': ['url', 'metadata'],
 'type': 'object'}

On instance['locations'][0]:

'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/shell.py",
 line 620, in main
args.func(client, args)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/shell.py",
 line 68, in do_image_create
image = gc.images.create(**fields)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py",
 line 156, in create
raise TypeError(utils.exception_to_str(e))
TypeError: Unable to set 'locations' to 
'['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'.
 Reason: 
'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'
 is not of type 'object'

Failed validating 'type' in schema['properties']['locations']['items']:
{'properties': {'metadata': {'type': 'object'},
'url': {'maxLength': 255, 'type': 'string'}},
 'required': ['url', 'metadata'],
 'type': 'object'}

On instance['locations'][0]:

'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'
Unable to set 'locations' to 
'['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'.
 Reason: 
'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'
 is not of type 'object'

Failed validating 'type' in schema['properties']['locations']['items']:
{'properties': {'metadata': {'type': 'object'},
'url': {'maxLength': 255, 'type': 'string'}},
 'required': ['url', 'metadata'],
 'type': 'object'}

On instance['locations'][0]:

'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1399782

Title:
  Python glance-client image-create validation error

Status in OpenStack Ima

[Yahoo-eng-team] [Bug 1399779] [NEW] Update glance REST api docs

2014-12-05 Thread Diogo Monteiro
Public bug reported:

The glance version 2 API docs are not up to date:
http://developer.openstack.org/api-ref-image-v2.html

The image version 2 schema shows the following json object:
{
"additionalProperties": {
"type": "string"
},
"name": "image",
"links": [{
"href": "{self}",
"rel": "self"
}, {
"href": "{file}",
"rel": "enclosure"
}, {
"href": "{schema}",
"rel": "describedby"
}],
"properties": {
"status": {
"enum": ["queued", "saving", "active", "killed", "deleted", 
"pending_delete"],
"type": "string",
"description": "Status of the image (READ-ONLY)"
},
"tags": {
"items": {
"type": "string",
"maxLength": 255
},
"type": "array",
"description": "List of strings related to the image"
},
"kernel_id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string",
"description": "ID of image stored in Glance that should be used as 
the kernel when booting an AMI-style image."
},
"container_format": {
"enum": ["ami", "ari", "aki", "bare", "ovf", "ova"],
"type": "string",
"description": "Format of the container"
},
"min_ram": {
"type": "integer",
"description": "Amount of ram (in MB) required to boot image."
},
"ramdisk_id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string",
"description": "ID of image stored in Glance that should be used as 
the ramdisk when booting an AMI-style image."
},
"locations": {
"items": {
"required": ["url", "metadata"],
"type": "object",
"properties": {
"url": {
"type": "string",
"maxLength": 255
},
"metadata": {
"type": "object"
}
}
},
"type": "array",
"description": "A set of URLs to access the image file kept in 
external store"
},
"visibility": {
"enum": ["public", "private"],
"type": "string",
"description": "Scope of image accessibility"
},
"updated_at": {
"type": "string",
"description": "Date and time of the last image modification 
(READ-ONLY)"
},
"owner": {
"type": "string",
"description": "Owner of the image",
"maxLength": 255
},
"file": {
"type": "string",
"description": "(READ-ONLY)"
},
"min_disk": {
"type": "integer",
"description": "Amount of disk space (in GB) required to boot 
image."
},
"virtual_size": {
"type": "integer",
"description": "Virtual size of image in bytes (READ-ONLY)"
},
"id": {
"pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
"type": "string",
"description": "An identifier for the image"
},
"size": {
"type": "integer",
"description": "Size of image file in bytes (READ-ONLY)"
},
"instance_uuid": {
"type": "string",
"description": "ID of instance used to create this image."
},
"os_distro": {
"type": "string",
"description": "Common name of operating system distribution as 
specified in 
http://docs.openstack.org/trunk/openstack-compute/admin/content/adding-images.html";
},
"name": {
"type": "string",
"description": "Descriptive name for the image",
"maxLength": 255
},
"checksum": {
"type": "string",
"description": "md5 hash of image contents. (READ-ONLY)",
"maxLength": 32
},
"created_at": {
"type": "string",
"description": "Date and time of image registration (READ-ONLY)"
},
"disk_format": {
"enum": ["ami", "ari", "aki", "vhd", "vmdk", "raw", "qcow2", "vdi", 
"iso"],
"type": "string",
"description": "Format of the disk"
},
"os_version": {
"type": "string",
"description": "Operating system version as specified by the 
distributor"
},
"protected": {
"type": "boolean",
"description": "If true, image will not be deletable."
},
"architecture": {
"type": "st

[Yahoo-eng-team] [Bug 1399778] [NEW] Failing to create image using glance endpoint version 2

2014-12-05 Thread Diogo Monteiro
Public bug reported:

Not able to create an image using the glance REST API version 2.
I've tried a few different approaches.

#1 Using the python-glance client

Running:
glance --debug --os-image-api-version 2 image-create --locations
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img"

Exception:
Traceback (most recent call last):
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py",
 line 154, in create
setattr(image, key, value)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 75, in __setattr__
self.__setitem__(key, value)
  File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 50, in __setitem__
raise exceptions.InvalidOperation(msg)
warlock.exceptions.InvalidOperation: Unable to set 'locations' to '['{', 
'url:', 
'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img,',
 'metadata:', '{}', '}']'. Reason: '{' is not of type 'object'

Failed validating 'type' in schema['properties']['locations']['items']:
{'properties': {'metadata': {'type': 'object'},
'url': {'maxLength': 255, 'type': 'string'}},
 'required': ['url', 'metadata'],
 'type': 'object'}


#2 Adding a location to an existing image using python-glance client

Running:
glance --debug --os-image-api-version 2 location-add
bf9e453a-8aef-4806-9afe-9546694f814b --url
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img"

Exception:
The administrator has disabled API access to image locations


#3 Sending curl requests directly to glance's version 2 endpoints

Running:
Node.js app using request lib

"requestOptions": {
"url": "http://controller01.qa.cloud:9292/v2/images";,
"json": true,
"headers": {
  "X-Auth-Project-Id": "2136a657f30c4f65895dd95164f4dda6",
  "X-Auth-Token": "ba36f4becfa04740bd524a37404ae29f"
},
"timeout": 1,
"strictSSL": false,
"body": {
  "name": "cdos-template6001",
  "container_format": "ovf",
  "disk_format": "raw",
  "min_ram": 1,
  "min_disk": 10,
  "locations": [
{
  "url": 
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img";,
  "metadata": {}
}
  ]
}

request.post(requestOptions, callback);

Exception: 
403 Forbidden\n\nAttribute 'locations' is reserved.\n\n 


Can anybody confirm whether the glance version 2 image-create REST endpoint
even works? If it does, how do I consume the API? Any docs?

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1399778

Title:
  Failing to create image using glance endpoint version 2

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Not able to create an image using the glance REST API version 2.
  I've tried a few different approaches.

  #1 Using the python-glance client

  Running:
  glance --debug --os-image-api-version 2  image-create --locations 
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img";

  Exception:
  Traceback (most recent call last):
File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py",
 line 154, in create
  setattr(image, key, value)
File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 75, in __setattr__
  self.__setitem__(key, value)
File 
"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py",
 line 50, in __setitem__
  raise exceptions.InvalidOperation(msg)
  warlock.exceptions.InvalidOperation: Unable to set 'locations' to '['{', 
'url:', 
'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img,',
 'metadata:', '{}', '}']'. Reason: '{' is not of type 'object'

  Failed validating 'type' in schema['properties']['locations']['items']:
  {'properties': {'metadata': {'type': 'object'},
  'url': {'maxLength': 255, 'type': 'string'}},
   'required': ['url', 'metadata'],
   'type': 'object'}

  
  #2 Adding a location to an existing image using python-glance client

  Running:
  glance --debug --os-image-api-version 2  location-add 
bf9e453a-8aef-4806-9afe-9546694f814b --url 
"https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img";

  Exception:
  The administrator has disabled API access to image locations

  
  #3 Sending curl requests directly to glance's version 2 endpoints

  Running:
  Node.js app using request lib

  "requestOptions": {
  "url": "http://controller01.qa.cloud:9292/

[Yahoo-eng-team] [Bug 1399768] [NEW] migration for endpoint_filter fails due to foreign key constraint

2014-12-05 Thread Adam Young
Public bug reported:

 keystone-manage db_sync --extension endpoint_filter 2

fails with


2014-12-05 13:54:39.295 11241 TRACE keystone OperationalError: 
(OperationalError) (1005, "Can't create table 'keystone.project_endpoint_group' 
(errno: 150)") '\nCREATE TABLE project_endpoint_group (\n\tendpoint_group_id 
VARCHAR(64) NOT NULL, \n\tproject_id VARCHAR(64) NOT NULL, \n\tPRIMARY KEY 
(endpoint_group_id, project_id), \n\tFOREIGN KEY(endpoint_group_id) REFERENCES 
endpoint_group (id)\n)\n\n' ()


Migration 1 fails executing the below sql.


CREATE TABLE project_endpoint_group (endpoint_group_id VARCHAR(64) NOT NULL, 
project_id VARCHAR(64) NOT NULL, PRIMARY KEY (endpoint_group_id, project_id), 
FOREIGN KEY(endpoint_group_id) REFERENCES endpoint_group (id));
ERROR 1005 (HY000): Can't create table 'keystone.project_endpoint_group' 
(errno: 150)

Removing the clause FOREIGN KEY(endpoint_group_id) REFERENCES
endpoint_group (id) makes it work.

This is on Fedora 20 with the MariaDB flavor of MySQL.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1399768

Title:
  migration for endpoint_filter fails due to foreign key constraint

Status in OpenStack Identity (Keystone):
  New

Bug description:
   keystone-manage db_sync --extension endpoint_filter 2

  fails with

  
  2014-12-05 13:54:39.295 11241 TRACE keystone OperationalError: 
(OperationalError) (1005, "Can't create table 'keystone.project_endpoint_group' 
(errno: 150)") '\nCREATE TABLE project_endpoint_group (\n\tendpoint_group_id 
VARCHAR(64) NOT NULL, \n\tproject_id VARCHAR(64) NOT NULL, \n\tPRIMARY KEY 
(endpoint_group_id, project_id), \n\tFOREIGN KEY(endpoint_group_id) REFERENCES 
endpoint_group (id)\n)\n\n' ()


  Migration 1 fails executing the below sql.

  
  CREATE TABLE project_endpoint_group (endpoint_group_id VARCHAR(64) NOT NULL, 
project_id VARCHAR(64) NOT NULL, PRIMARY KEY (endpoint_group_id, project_id), 
FOREIGN KEY(endpoint_group_id) REFERENCES endpoint_group (id));
  ERROR 1005 (HY000): Can't create table 'keystone.project_endpoint_group' 
(errno: 150)

  Removing the clause FOREIGN KEY(endpoint_group_id) REFERENCES
  endpoint_group (id) makes it work.

  This is on Fedora 20 with the MariaDB flavor of MySQL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1399768/+subscriptions



[Yahoo-eng-team] [Bug 1399760] [NEW] neutron-service is throwing an excessive amount of warnings

2014-12-05 Thread Joe Gordon
Public bug reported:

http://logs.openstack.org/58/136158/9/check/check-tempest-dsvm-neutron-
full/abea49f/logs/screen-q-svc.txt.gz?level=TRACE

Is full of a lot of repetitive warnings, and it is unclear if something
may be going wrong or not.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399760

Title:
  neutron-service is throwing an excessive amount of warnings

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/58/136158/9/check/check-tempest-dsvm-
  neutron-full/abea49f/logs/screen-q-svc.txt.gz?level=TRACE

  Is full of a lot of repetitive warnings, and it is unclear if
  something may be going wrong or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399760/+subscriptions



[Yahoo-eng-team] [Bug 1399761] [NEW] neutron-service is throwing an excessive amount of warnings

2014-12-05 Thread Joe Gordon
Public bug reported:

http://logs.openstack.org/58/136158/9/check/check-tempest-dsvm-neutron-
full/abea49f/logs/screen-q-svc.txt.gz?level=TRACE

Is full of a lot of repetitive warnings, and it is unclear if something
may be going wrong or not.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399761

Title:
  neutron-service is throwing an excessive amount of warnings

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/58/136158/9/check/check-tempest-dsvm-
  neutron-full/abea49f/logs/screen-q-svc.txt.gz?level=TRACE

  Is full of a lot of repetitive warnings, and it is unclear if
  something may be going wrong or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399761/+subscriptions



[Yahoo-eng-team] [Bug 1399749] [NEW] [LBaaS V2] A delete operation on healthmonitor throws HTTP 500 Internal Server Error

2014-12-05 Thread German Eichberger
Public bug reported:


I attempted to delete a healthmonitor object using the CLI and the call
failed with an HTTP 500. Subsequently I deleted the pool and the
loadbalancer objects before successfully being able to delete the
healthmonitor.

Not sure if there is a dependency, but the way to handle this gracefully
is to report the error and not throw the HTTP 500.

neutron lbaas-healthmonitor-delete 0c4aa5b9-d343-4ce3-b191-611e85c1f216

DEBUG: neutronclient.client RESP:500 CaseInsensitiveDict(
{'date': 'Fri, 05 Dec 2014 17:35:43 GMT', 'content-length': '161', 'conte 
nt-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-525545b7-8943-43bd-a3c4-288971e7f12e'}

) {"NeutronError
": {"message": "Invalid state PENDING_DELETE of loadbalancer resource 
0c4aa5b9-d343-4ce3-b191-611e85c1f216", "type": "StateInvalid
", "detail": ""}}

DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_DELETE of loadbalancer resourc
e 0c4aa5b9-d343-4ce3-b191-611e85c1f216", "type": "StateInvalid", "detail": ""}}
ERROR: neutronclient.shell Invalid state PENDING_DELETE of loadbalancer 
resource 0c4aa5b9-d343-4ce3-b191-611e85c1f216
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 593, 
in run_subcommand
return run_command(cmd, cmd_parser, sub_argv)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 88, 
in run_command
return cmd.run(known_args)
File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/_init_.py", 
line 568, in run
obj_deleter(_id)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 101, in with_params
ret = self.function(instance, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 755, in delete_lbaas_healthmonitor
(lbaas_healthmonitor))
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 1471, in delete
headers=headers, params=params)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 1460, in retry_request
headers=headers, params=params)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 1403, in do_request
self._handle_fault_response(status_code, replybody)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 1371, in _handle_fault_response
exception_handler_v20(status_code, des_error_body)
File "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", 
line 68, in exception_handler_v20
status_code=status_code)
StateInvalidClient: Invalid state PENDING_DELETE of loadbalancer resource 
0c4aa5b9-d343-4ce3-b191-611e85c1f216
DEBUG: neutronclient.shell clean_up DeleteHealthMonitor
DEBUG: neutronclient.shell Got an error: Invalid state PENDING_DELETE of 
loadbalancer resource 0c4aa5b9-d343-4ce3-b191-611e85c1f21
6
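
As a client-side workaround sketch (hedged: the exact neutronclient method for
showing an LBaaS v2 load balancer is assumed here, and the credentials and
load balancer UUID are placeholders), wait for the parent load balancer to
leave its PENDING_* state before issuing the delete:

    import time

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0')

    LOADBALANCER_ID = 'LB-UUID'
    HEALTHMONITOR_ID = '0c4aa5b9-d343-4ce3-b191-611e85c1f216'

    def wait_for_lb_settled(lb_id, timeout=120, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            # show_loadbalancer() is assumed to exist for LBaaS v2 here.
            lb = neutron.show_loadbalancer(lb_id)['loadbalancer']
            if not lb['provisioning_status'].startswith('PENDING'):
                return lb
            time.sleep(interval)
        raise RuntimeError('load balancer %s stuck in a PENDING state' % lb_id)

    wait_for_lb_settled(LOADBALANCER_ID)
    neutron.delete_lbaas_healthmonitor(HEALTHMONITOR_ID)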

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399749

Title:
  [LBaaS V2] A delete operation on healthmonitor throws HTTP 500
  Internal Server Error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:

  I attempted to delete a healthmonitor object using the CLI and the
  call failed with an HTTP 500. Subsequently I deleted the pool and the
  loadbalancer objects before successfully being able to delete the
  healthmonitor.

  Not sure if there is a dependency, but the way to handle this
  gracefully is to report the error and not throw the HTTP 500.

  neutron lbaas-healthmonitor-delete
  0c4aa5b9-d343-4ce3-b191-611e85c1f216

  DEBUG: neutronclient.client RESP:500 CaseInsensitiveDict(
  {'date': 'Fri, 05 Dec 2014 17:35:43 GMT', 'content-length': '161', 'conte 
nt-type': 'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-525545b7-8943-43bd-a3c4-288971e7f12e'}

  ) {"NeutronError
  ": {"message": "Invalid state PENDING_DELETE of loadbalancer resource 
0c4aa5b9-d343-4ce3-b191-611e85c1f216", "type": "StateInvalid
  ", "detail": ""}}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Invalid state PENDING_DELETE of loadbalancer resourc
  e 0c4aa5b9-d343-4ce3-b191-611e85c1f216", "type": "StateInvalid", "detail": 
""}}
  ERROR: neutronclient.shell Invalid state PENDING_DELETE of loadbalancer 
resource 0c4aa5b9-d343-4ce3-b191-611e85c1f216
  Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
593, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py", line 
88, in run_command
  return cmd.run(known_args)
  File 
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/_init_.py", 
line 568, in run
  obj_deleter(_id)
  File "/us

[Yahoo-eng-team] [Bug 1399706] [NEW] QoS on Juno with RBD backend dont work for VM

2014-12-05 Thread Ivan Arsenault
Public bug reported:

Hi,
QoS for volumes works with RBD but not for VMs. So to solve this problem, you
need to pass the quota:disk_write_bytes_sec etc. extra parameters to the VM.

At line 645 of /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py,
add this extra code at the end of the «libvirt_info» method:

Ex:
    def libvirt_info(self, disk_bus, disk_dev, device_type, cache_mode,
                     extra_specs, hypervisor_version):
        ...
        if auth_enabled:
            info.auth_secret_type = 'ceph'
            info.auth_secret_uuid = CONF.libvirt.rbd_secret_uuid
+       tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
+                     'disk_write_bytes_sec', 'disk_write_iops_sec',
+                     'disk_total_bytes_sec', 'disk_total_iops_sec']
+       for key, value in extra_specs.iteritems():
+           scope = key.split(':')
+           if len(scope) > 1 and scope[0] == 'quota':
+               if scope[1] in tune_items:
+                   setattr(info, scope[1], value)
        return info
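
For reference, the extra_specs read above normally come from flavor extra
specs; a minimal, illustrative way to set them (the flavor name and limit
values below are only placeholders) would be:

    nova flavor-key m1.small set quota:disk_read_bytes_sec=83886080
    nova flavor-key m1.small set quota:disk_write_iops_sec=15000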

after this patch, if you «dumpxml VM ID» with virsh you get the previously
missing I/O tuning settings.

Ex:

virsh # dumpxml 2

...
[disk XML element stripped in archiving; the added I/O tuning section carries
the configured limits, e.g. the values 83886080 and 15000]
...

Voilà...

Ivan

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399706

Title:
  QoS on Juno with RBD backend dont work for VM

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi,
  QoS for volumes works with RBD but not for VMs. To solve this problem, you
  need to pass the quota:disk_write_bytes_sec etc. extra parameters through to
  the VM.

  At line 645 of /usr/lib/python2.7/site-
  packages/nova/virt/libvirt/imagebackend.py, add this extra code at the end of
  the «libvirt_info» method:

  Ex:
      def libvirt_info(self, disk_bus, disk_dev, device_type, cache_mode,
                       extra_specs, hypervisor_version):
          ...
          if auth_enabled:
              info.auth_secret_type = 'ceph'
              info.auth_secret_uuid = CONF.libvirt.rbd_secret_uuid
  +       tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
  +                     'disk_write_bytes_sec', 'disk_write_iops_sec',
  +                     'disk_total_bytes_sec', 'disk_total_iops_sec']
  +       for key, value in extra_specs.iteritems():
  +           scope = key.split(':')
  +           if len(scope) > 1 and scope[0] == 'quota':
  +               if scope[1] in tune_items:
  +                   setattr(info, scope[1], value)
          return info

  after this patch, if you «dumpxml VM ID» with virsh you get the
  previously missing I/O tuning settings.

  Ex:

  virsh # dumpxml 2

  ...
  [disk XML element stripped in archiving; the added I/O tuning section
  carries the configured limits, e.g. the values 83886080 and 15000]
  ...

  Voilà...

  Ivan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399585] Re: Add ability to set settings from environment variables

2014-12-05 Thread OpenStack Infra
** Changed in: horizon
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399585

Title:
  Add ability to set settings from environment variables

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  To ease deployment and configuration, we want to be able to use
  environment variables as settings for the horizon django app.

  A use case can be, for example, to set `SECRET_KEY` in a uwsgi
  configuration file provisioned by chef/puppet...

  With env variables, configuration-management software doesn't have to know
  very much about horizon (and its updates) and can still add specific
  settings.

  Another use case is for packaging to set STATIC_ROOT (and push files
  in their own package) without having to change a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399683] [NEW] 0.7.5, centos7, openstack havana, mounts created for nonexistent devices

2014-12-05 Thread Chris Buben
Public bug reported:


cloud-init 0.7.5 on centos 7, running in OpenStack Havana.

/etc/fstab entries are created for devices that do not exist on the
system.

[cbuben@cbc7gr4x1 ~]$ rpm -q cloud-init
cloud-init-0.7.5-10.el7.centos.1.x86_64

[cbuben@cbc7gr4x1 ~]$ blkid
/dev/vda1: UUID="abc9bf42-9fd1-41de-82df-48812c34a876" TYPE="ext4" 
/dev/vdb: LABEL="ephemeral0" UUID="91394da3-4d82-4f80-b466-24efd9f6e146" 
SEC_TYPE="ext2" TYPE="ext3" 
/dev/vdc: UUID="bc02e445-949e-4703-8c8c-ebfcfaea7d4c" TYPE="swap" 
/dev/vdd: SEC_TYPE="msdos" LABEL="config-2" UUID="F632-BDC3" TYPE="vfat" 

[cbuben@cbc7gr4x1 ~]$ grep -A5 mounts: /etc/cloud/cloud.cfg
mounts:
 - [ ephemeral0, /mnt/ephemeral0 ]
 - [ ephemeral1, /mnt/ephemeral1 ]
 - [ ephemeral2, /mnt/ephemeral2 ]
 - [ ephemeral3, /mnt/ephemeral3 ]

> ACTUAL RESULTS

[cbuben@cbc7gr4x1 ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Dec  5 01:37:44 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=abc9bf42-9fd1-41de-82df-48812c34a876 /   ext4
defaults1 1
/dev/vdb/mnt/ephemeral0 autodefaults,nofail,comment=cloudconfig 
0   2
ephemeral1  /mnt/ephemeral1 autodefaults,nofail,comment=cloudconfig 
0   2
ephemeral2  /mnt/ephemeral2 autodefaults,nofail,comment=cloudconfig 
0   2
ephemeral3  /mnt/ephemeral3 autodefaults,nofail,comment=cloudconfig 
0   2
/dev/vdcnoneswapsw,comment=cloudconfig  0   0


> EXPECTED RESULTS

[cbuben@cbc7gr4x1 ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Dec  5 01:37:44 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=abc9bf42-9fd1-41de-82df-48812c34a876 /   ext4
defaults1 1
/dev/vdb/mnt/ephemeral0 autodefaults,nofail,comment=cloudconfig 
0   2
/dev/vdcnoneswapsw,comment=cloudconfig  0   0


I believe this may be a regression in 0.7.3.

In 0.7.2, it seems like the following logic: https://github.com/cbuben
/cloud-init/blob/0.7.2/cloudinit/config/cc_mounts.py#L79-L81 sets the
mount point to None if the device with the given name does not exist.
Later on, in https://github.com/cbuben/cloud-
init/blob/0.7.2/cloudinit/config/cc_mounts.py#L149-L150 , the None value
in the field is used to skip the device.  On our EL6 systems, we run
this identical cloud.cfg and do NOT have this problem.

In 0.7.3 there was some refactoring to break out the cleanup of the
device name lookup, but otherwise the flow of the logic is the same.
However, if the device does not exist (i.e. sanitize_devname returns
None), the mount point is NOT set to the sentinel value None:
https://github.com/cbuben/cloud-
init/blob/0.7.3/cloudinit/config/cc_mounts.py#L100-L102 .  Then later,
in https://github.com/cbuben/cloud-
init/blob/0.7.3/cloudinit/config/cc_mounts.py#L160-L161 the same check
for "mount point is None" is applied.

If I re-add setting the mountpoint to None if the device with a given
name doesn't exist, correct behavior appears to be restored:
https://github.com/cbuben/cloud-
init/compare/0.7.5...0_7_5_bad_devs_in_fstab
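
As a rough, self-contained sketch of the behavior being restored (plain Python,
not cloud-init's actual code; the function and entry names are only
illustrative):

    import os

    def filter_mounts(entries):
        # keep only entries whose device node actually exists; in 0.7.2 the
        # mount point was set to None for missing devices, which made the
        # later fstab-writing loop skip them
        kept = []
        for device, mountpoint in entries:
            if not os.path.exists(device):
                continue
            kept.append((device, mountpoint))
        return kept

    print(filter_mounts([('/dev/vdb', '/mnt/ephemeral0'),
                         ('/dev/ephemeral1', '/mnt/ephemeral1')]))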

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1399683

Title:
  0.7.5, centos7, openstack havana, mounts created for nonexistent
  devices

Status in Init scripts for use on cloud images:
  New

Bug description:
  
  cloud-init 0.7.5 on centos 7, running in OpenStack Havana.

  /etc/fstab entries are created for devices that do not exist on the
  system.

  [cbuben@cbc7gr4x1 ~]$ rpm -q cloud-init
  cloud-init-0.7.5-10.el7.centos.1.x86_64

  [cbuben@cbc7gr4x1 ~]$ blkid
  /dev/vda1: UUID="abc9bf42-9fd1-41de-82df-48812c34a876" TYPE="ext4" 
  /dev/vdb: LABEL="ephemeral0" UUID="91394da3-4d82-4f80-b466-24efd9f6e146" 
SEC_TYPE="ext2" TYPE="ext3" 
  /dev/vdc: UUID="bc02e445-949e-4703-8c8c-ebfcfaea7d4c" TYPE="swap" 
  /dev/vdd: SEC_TYPE="msdos" LABEL="config-2" UUID="F632-BDC3" TYPE="vfat" 

  [cbuben@cbc7gr4x1 ~]$ grep -A5 mounts: /etc/cloud/cloud.cfg
  mounts:
   - [ ephemeral0, /mnt/ephemeral0 ]
   - [ ephemeral1, /mnt/ephemeral1 ]
   - [ ephemeral2, /mnt/ephemeral2 ]
   - [ ephemeral3, /mnt/ephemeral3 ]

  > ACTUAL RESULTS

  [cbuben@cbc7gr4x1 ~]$ cat /etc/fstab
  #
  # /etc/fstab
  # Created by anaconda on Fri Dec  5 01:37:44 2014
  #
  # Accessible filesystems, by reference, are maintained under '/dev/disk'
  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
  #
  UUID=abc9bf42-9fd1-41de-82df-48812c34a876 /   ext4
defaults1 1
  /dev/vdb  /mnt/ephemeral0 autodefaults,n

[Yahoo-eng-team] [Bug 1396529] Re: Nova deletes instance when compute/rabbit is dead at the end of live migration

2014-12-05 Thread Pawel Koniszewski
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396529

Title:
  Nova deletes instance when compute/rabbit is dead at the end of live
  migration

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When e.g. nova-compute or rabbit-server dies during live migration and
  somehow nova-compute is not able to report the new host for the migrated VM,
  then after successful system recovery nova deletes the VM instead of
  sending a host update.  This is from the nova log:

  09:00:25.704 INFO nova.compute.manager [-] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] Deleting instance as its host (node-16) 
is not equal to our host (node-15).
  09:00:27.972 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to 
AMQP server on 10.4.8.2:5672
  09:00:27.972 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect 
for 1.0 seconds...
  09:00:28.981 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP 
server on 10.4.8.2:5672
  09:00:36.464 INFO nova.compute.manager [-] Lifecycle event 1 on VM 
b8a3bdd6-809f-44b4-875d-df3feafab41a
  09:00:36.468 INFO nova.virt.libvirt.driver [-] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] Instance destroyed successfully.
  09:00:36.471 INFO nova.virt.libvirt.firewall [-] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] Attempted to unfilter instance which is 
not filtered
  09:00:36.521 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP 
server on 10.4.8.2:5672
  09:00:36.565 INFO nova.compute.manager 
[req-93e15eda-8d65-49f5-a195-52b91da7aa68 None] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] During the sync_power process the 
instance has moved from host node-15 to host node-16
  09:00:36.566 INFO nova.virt.libvirt.driver [-] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] Deleting instance files 
/var/lib/nova/instances/b8a3bdd6-809f-44b4-875d-df3feafab41a
  09:00:36.566 INFO nova.virt.libvirt.driver [-] [instance: 
b8a3bdd6-809f-44b4-875d-df3feafab41a] Deletion of 
/var/lib/nova/instances/b8a3bdd6-809f-44b4-875d-df3feafab41a complete

  However the VM record in the database is still present (with state MIGRATING)
  and the volume is still attached to a VM that does not exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1396529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399585] Re: Add ability to set settings from environment variables

2014-12-05 Thread Matthias Runge
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399585

Title:
  Add ability to set settings from environment variables

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  To ease deployment and configuration, we want to be able to use
  environment variables as settings for the horizon django app.

  A use case can be, for example, to set `SECRET_KEY` in a uwsgi
  configuration file provisioned by chef/puppet...

  With env variables, configuration-management software doesn't have to know
  very much about horizon (and its updates) and can still add specific
  settings.

  Another use case is for packaging to set STATIC_ROOT (and push files
  in their own package) without having to change a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392564] Re: metadata proxy is not started when network has ipv6 and ipv4 subnets

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1392564

Title:
  metadata proxy is not started when network has ipv6 and ipv4 subnets

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Commit 1b66e11b5d8c0b3de0610ca02c3e10b6f64ae375 introduces a new
  problem: the metadata proxy will not be started when an isolated
  network contains an ipv6 subnet with dhcp enabled and an ipv4 subnet.

  See the discussion in the code review for details:

  https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1392564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399585] [NEW] Add ability to set settings from environment variables

2014-12-05 Thread Guilhem Lettron
Public bug reported:

To ease deployment and configuration, we want to be able to use
environment variables as settings for the horizon django app.

A use case can be, for example, to set `SECRET_KEY` in a uwsgi
configuration file provisioned by chef/puppet...

With env variables, configuration-management software doesn't have to know very
much about horizon (and its updates) and can still add specific settings.

Another use case is for packaging to set STATIC_ROOT (and push files in
their own package) without having to change a file.
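
A minimal sketch of what this could look like in a settings file (the default
values and environment variable names here are only examples, not an agreed
interface):

    import os

    SECRET_KEY = os.environ.get('HORIZON_SECRET_KEY', 'insecure-dev-key')
    STATIC_ROOT = os.environ.get('HORIZON_STATIC_ROOT',
                                 '/var/lib/openstack-dashboard/static')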

** Affects: horizon
 Importance: Undecided
 Assignee: Guilhem Lettron (guilhem-fr)
 Status: In Progress

** Description changed:

  To ease deployment and configuration, we want to being able to use
  environment variables as settings from horizon django app.
  
  A use case can be, for example, to set `SECRET_KEY` in a uwsgi
  configuration file provisioned by chef/puppet...
  
  With env variables, cfg software doesn't have to know very much about
  horizon (and its update) and can add specific settings.
  
- Another use case is for packaging to set STATIC_ROOT without having to
- change a file.
+ Another use case is for packaging to set STATIC_ROOT (and push files in
+ there own package) without having to change a file.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399585

Title:
  Add ability to set settings from environment variables

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  To ease deployment and configuration, we want to be able to use
  environment variables as settings for the horizon django app.

  A use case can be, for example, to set `SECRET_KEY` in a uwsgi
  configuration file provisioned by chef/puppet...

  With env variables, configuration-management software doesn't have to know
  very much about horizon (and its updates) and can still add specific
  settings.

  Another use case is for packaging to set STATIC_ROOT (and push files
  in their own package) without having to change a file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391326] Re: Remove openvswitch core plugin entry point

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391326

Title:
  Remove openvswitch core plugin entry point

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The openvswitch core plugin has been removed[1] from neutron tree but
  not its entry point.

  setup.cfg:

  neutron.core_plugins =
openvswitch = 
neutron.plugins.openvswitch.ovs_neutron_plugin:OVSNeutronPluginV2

  
  [1] https://bugs.launchpad.net/neutron/+bug/1323729

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378215] Re: If db deadlock occurs for some reason while deleting an image, no one can delete the image any more

2014-12-05 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378215

Title:
  If db deadlock occurs for some reason while deleting an image, no one
  can delete the image any more

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Released

Bug description:
  Glance api returns a 500 Internal Server Error if a db deadlock occurs in
  glance-registry for some reason while deleting an image.
  The image 'status' is set to deleted but 'deleted' is set to False. As
  deleted is still False, the image is visible in the image list but it cannot
  be deleted any more.

  If you try to delete this image it will raise a 404 (Not Found) error
  for the V1 api and a 500 (HTTPInternalServerError) for the V2 api.

  Note:
  To reproduce this issue I've explicitly raised "db_exception.DBDeadlock" 
exception from "_image_child_entry_delete_all" method under 
"\glance\db\sqlalchemy\api.py".

  glance-api.log
  --
  2014-10-06 00:53:10.037 6827 INFO glance.registry.client.v1.client 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c423
  1bcc17d8c55c83997 - - -] Registry client request DELETE 
/images/f9f8a40d-530b-498c-9fbc-86f29da555f4 raised ServerError
  2014-10-06 00:53:10.045 6827 INFO glance.wsgi.server 
[2b47d213-6f80-410f-9766-dc80607f0224 7e7c3a413f184dbcb9a65404dbfcc0f0 
309c5ff4082c4231bcc17d8c55c83997 - - -] Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 433, 
in handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/osprofiler/web.py", line 106, 
in __call__
  return request.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
748, in __call__
  return self._call_app(env, start_response)
File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
684, in _call_app
  return self._app(env, _fake_start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/glance/glance/common/wsgi.py", line 394, in __call__
  response = req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *a

[Yahoo-eng-team] [Bug 1323599] Re: Network topology: some terms still not translatable

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1323599

Title:
  Network topology: some terms still not translatable

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  Bug 1226159 did a great job to improve the translatability of the
  network topology's popup windows. However there are still words that
  don't show as translated (see screenshot):

   - The resource type, like "instance" or "router" 
   - Buttons, like Terminate instance
   - Perhaps it would also be good to make use of the status translated list 
(see 
https://github.com/openstack/horizon/blob/8314fb1367/openstack_dashboard/dashboards/project/instances/tables.py#L687)
 for the Instance Status

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1323599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399580] [NEW] Add ability to set settings from a directory

2014-12-05 Thread Guilhem Lettron
Public bug reported:

As a company, we want to set some common settings for our horizon without
having to diverge from upstream in the default "settings.py".
We also want to let developers be able to use local_settings.py for local
dev.
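
A minimal sketch of the idea, assuming a conf.d-style directory whose path and
layout are purely illustrative:

    import glob
    import os

    LOCAL_SETTINGS_DIR = '/etc/openstack-dashboard/local_settings.d'

    # apply *.py snippets in sorted order so later files can override earlier ones
    for path in sorted(glob.glob(os.path.join(LOCAL_SETTINGS_DIR, '*.py'))):
        with open(path) as handle:
            exec(handle.read())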

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399580

Title:
  Add ability to set settings from a directory

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As a company, we want to set some common settings for our horizon without
  having to diverge from upstream in the default "settings.py".
  We also want to let developers be able to use local_settings.py for local
  dev.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399573] [NEW] allows to configure disk driver IO policy

2014-12-05 Thread sahid
Public bug reported:

libvirt allows configuring the disk IO policy with io=native or
io=threads, which according to this email clearly improves performance:

https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html

We should make this configurable, as we do for
disk_cachemode
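
For illustration, the libvirt disk driver element this refers to would end up
looking roughly like the following (the disk type and cache value are assumed):

    <driver name='qemu' type='qcow2' cache='none' io='native'/>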

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399573

Title:
  allows to configure disk driver IO policy

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  libvirt allows configuring the disk IO policy with io=native or
  io=threads, which according to this email clearly improves performance:

  https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html

  We should make this configurable, as we do for
  disk_cachemode

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396544] Re: Default `target={}` value leaks into subsequent `policy.check()` calls

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396544

Title:
  Default `target={}` value leaks into subsequent `policy.check()` calls

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Due to a mutable dictionary being used as the default `target` argument
  value, the first target calculated from scratch in the POLICY_CHECK
  function will be used for all subsequent calls to POLICY_CHECK with 2
  arguments. The wrong `target` can either lead to a reduced set of
  operations on an entity for a given user, or to an enlarged one. The
  latter case poses a security breach from a cloud operator's point of
  view.
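
  As a generic illustration of the Python pitfall described above (not
  horizon's actual POLICY_CHECK code), a mutable default argument is shared
  between calls:

      def check(action, request, target={}):
          target.setdefault('project_id', request)  # mutates the shared default
          return target

      print(check('get', 'tenant-1'))  # {'project_id': 'tenant-1'}
      print(check('get', 'tenant-2'))  # still {'project_id': 'tenant-1'}, leaked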

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388985] Re: Launch Stack doesn't apply parameters from environment file

2014-12-05 Thread Alan Pevec
** Changed in: heat/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1388985

Title:
  Launch Stack doesn't apply parameters from environment file

Status in Orchestration API (Heat):
  Fix Committed
Status in heat icehouse series:
  Fix Committed
Status in heat juno series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I'm using an environment file to pass a parameter into a simple heat
  template and the parameter is not reflected in the Horizon UI nor in the
  created resource.

  To recreate:
  Project->Orchestration->Stacks->Launch Stack
  Provide the template.txt and env.txt files I've attached.  I used the "File" 
source option.
  Click next.
  I'd expect to see the net_name as "betternetname" from the env file, but 
instead "defaultnet" is displayed.

  Continuing to launch, the network is named "defaultnet".
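
  For reference, an environment file of the kind meant here might look like the
  following (assumed contents, since env.txt itself is only attached to the
  bug):

      parameters:
        net_name: betternetname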

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1388985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226944] Re: Cinder API v2 support

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1226944

Title:
  Cinder API v2 support

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  When clicking on the volumes tab in both the Admin and the project
  panels I get the following stack trace

  Traceback:
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in 
get_response
115. response = callback(request, *callback_args, 
**callback_kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
86. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
54. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
38. return view_func(request, *args, **kwargs)
  File "/opt/horizon/horizon/decorators.py" in dec
86. return view_func(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
view
68. return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/views/generic/base.py" in 
dispatch
86. return handler(request, *args, **kwargs)
  File "/opt/horizon/horizon/tables/views.py" in get
155. handled = self.construct_tables()
  File "/opt/horizon/horizon/tables/views.py" in construct_tables
146. handled = self.handle_table(table)
  File "/opt/horizon/horizon/tables/views.py" in handle_table
118. data = self._get_data_dict()
  File "/opt/horizon/horizon/tables/views.py" in _get_data_dict
44. data.extend(func())
  File "/opt/horizon/openstack_dashboard/dashboards/admin/volumes/views.py" in 
get_volumes_data
48. self._set_id_if_nameless(volumes, instances)
  File "/opt/horizon/openstack_dashboard/dashboards/project/volumes/views.py" 
in _set_id_if_nameless
72. if not volume.display_name:
  File "/usr/local/lib/python2.7/dist-packages/cinderclient/base.py" in 
__getattr__
268. raise AttributeError(k)

  Exception Type: AttributeError at /admin/volumes/
  Exception Value: display_name
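
  For context, a small illustrative sketch of the attribute difference behind
  this (cinder v1 volumes expose display_name while v2 volumes expose name;
  DummyVolume only stands in for the cinderclient object):

      class DummyVolume(object):
          name = 'vol-1'  # v2-style attribute

      volume = DummyVolume()
      label = getattr(volume, 'display_name', None) or getattr(volume, 'name', '')
      print(label)  # vol-1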

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1226944/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340411] Re: Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral RBD

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340411

Title:
  Evacuate Fails 'Invalid state of instance files' using Ceph Ephemeral
  RBD

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Greetings,

  
  We can't seem to be able to evacuate instances from a failed compute node 
using shared storage. We are using Ceph Ephemeral RBD as the storage medium.

  
  Steps to reproduce:

  nova evacuate --on-shared-storage 6e2081ec-2723-43c7-a730-488bb863674c node-24
  or
  POST  to http://ip-address:port/v2/tenant_id/servers/server_id/action with 
  {"evacuate":{"host":"node-24","onSharedStorage":1}}

  
  Here is what shows up in the logs:

  
  <180>Jul 10 20:36:48 node-24 nova-nova.compute.manager AUDIT: Rebuilding 
instance
  <179>Jul 10 20:36:48 node-24 nova-nova.compute.manager ERROR: Setting 
instance vm_state to ERROR
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5554, 
in _error_out_instance_on_exception
  yield
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage
  <179>Jul 10 20:36:49 node-24 nova-oslo.messaging.rpc.dispatcher ERROR: 
Exception during message handling: Invalid state of instance files on shared 
storage
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply
  incoming.message))
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in 
wrapped
  payload)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in 
wrapped
  return f(self, context, *args, **kw)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 274, 
in decorated_function
  pass
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, 
in decorated_function
  function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, 
in decorated_function
  e, sys.exc_info())
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 290, 
in decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2434, 
in rebuild_instance
  _("Invalid state of instance files on shared"
  InvalidSharedStorage: Invalid state of instance files on shared storage

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357368] Re: Source side post Live Migration Logic cannot disconnect multipath iSCSI devices cleanly

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357368

Title:
  Source side post Live Migration Logic cannot disconnect multipath
  iSCSI devices cleanly

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When a volume is attached to a VM in the source compute node through
  multipath, the related files in /dev/disk/by-path/ are like this

  stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

  The information on its corresponding multipath device is like this
  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:24 sdl 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:24 sdj 8:144 active undef running

  But when the VM is migrated to the destination, the related
  information is like the following example since we CANNOT guarantee
  that all nodes are able to access the same iSCSI portals and the same
  target LUN number. And the information is used to overwrite
  connection_info in the DB before the post live migration logic is
  executed.

  stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

  stack@ubuntu-server13:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:100 sdf 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
    `- 18:0:0:100 sdg 8:144 active undef running

  As a result, if post live migration on the source side uses the portal IP,
  target IQN and LUN number to find the devices to clean up, it may use
  192.168.3.51, iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 100.
  However, the correct ones should be 192.168.3.50,
  iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

  Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497)
  can be used to fix it: Leverage the unchanged multipath_id to find
  correct devices to delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically, under the <disk> section I see:

  [XML snippet stripped in archiving]

  where in the original libvirt.xml the file was:

  [XML snippet stripped in archiving]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358583] Re: [OSSA 2014-038] List instances by IP results in DoS of nova-network (CVE-2014-3708)

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358583

Title:
  [OSSA 2014-038] List instances by IP results in DoS of nova-network
  (CVE-2014-3708)

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Hi,

  On a customer install which has approximately 500 VMs in the system,
  running the following will hang:

  nova list --ip 199

  What will happen afterwards is that the nova-network process will stop
  responding for a while; a trace shows that it's receiving a huge
  amount of data.  Upon further investigation, it looks like the issue
  may be right here:

  
https://github.com/openstack/nova/blob/stable/icehouse/nova/network/manager.py#L420

  On this installation:

  nova=> select count(*) from virtual_interfaces;
   count 
  ---
   11985
  (1 row)

  So with 1 run, we're sending almost 12K records to a single nova-
  network process which takes up a huge CPU load (and blocks it from
  doing anything else).

  What ends up happening is other things start timing out in the system,
  such as resizes and new deployments:

  2014-08-19 03:44:49.511 31562 ERROR nova.compute.manager 
[req-e7b6d34f-81b5-46f9-a5e9-25ccfb863cfe bac292822cdf451f81201b3c1957914f 
78578deaaf3542c087101746d1ad3f50] [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] Setting instance vm_state to ERROR
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] Traceback (most recent call last):
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3547, in 
finish_resize
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] disk_info, image)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3490, in 
_finish_resize
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] migration['dest_compute'])
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/nova/network/api.py", line 95, in wrapped
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] return func(self, context, *args, 
**kwargs)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/nova/network/api.py", line 509, in 
setup_networks_on_host
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] 
self.network_rpcapi.setup_networks_on_host(context, **args)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 270, in 
setup_networks_on_host
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] teardown=teardown)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 361, in 
call
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] return self.prepare().call(ctxt, 
method, **kwargs)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] wait_for_reply=True, timeout=timeout)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] timeout=timeout)
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064]   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
412, in send
  2014-08-19 03:44:49.511 31562 TRACE nova.compute.manager [instance: 
28bf47af-1063-473c-9c7c-bb6351e97064] return se

[Yahoo-eng-team] [Bug 1279172] Re: Unicode encoding error exists in extended Nova API, when the data contain unicode

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279172

Title:
  Unicode encoding error exists in extended Nova API, when the data
  contain unicode

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  We have developed an extended Nova API; the API queries disks first, then
  adds a disk to an instance.
  After querying, if a disk has a non-English name, the unicode value is
  converted to str in nova/api/openstack/wsgi.py line 451
  ("node = doc.createTextNode(str(data))"), and a unicode encoding error occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370613] Re: InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is not recognised

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370613

Title:
  InvalidHypervisorVirtType: Hypervisor virtualization type 'powervm' is
  not recognised

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in IBM PowerVC Driver for OpenStack:
  In Progress

Bug description:
  With these changes we have a list of known hypervisor types for
  scheduling:

  https://review.openstack.org/#/c/109591/
  https://review.openstack.org/#/c/109592/

  There is a powervc driver in stackforge (basically the replacement for
  the old powervm driver) which has a hypervisor type of 'powervm' and
  trying to boot anything against that fails in scheduling since the
  type is unknown.

  http://git.openstack.org/cgit/stackforge/powervc-driver/

  Seems like adding powervm to the list shouldn't be an issue given
  other things in that list like bhyve and phyp.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376933] Re: _poll_unconfirmed_resize timing window causes instance to stay in verify_resize state forever

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376933

Title:
  _poll_unconfirmed_resize timing window causes instance to stay in
  verify_resize state forever

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  If the _poll_unconfirmed_resizes periodic task runs while
  nova/compute/manager.py:ComputeManager._finish_resize() is in progress,
  after the migration record has been updated in the database but before the
  instance has been updated, the following happens:

  2014-09-30 16:15:00.897 112868 INFO nova.compute.manager [-] Automatically 
confirming migration 207 for instance 799f9246-bc05-4ae8-8737-4f358240f586
  2014-09-30 16:15:01.109 112868 WARNING nova.compute.manager [-] [instance: 
799f9246-bc05-4ae8-8737-4f358240f586] Setting migration 207 to error: In states 
stopped/resize_finish, not RESIZED/None

  This causes _poll_unconfirmed_resizes to see that the VM task_state is
  still 'resize_finish' instead of None, and set the migration record to
  error state. Which in turn causes the VM to be stuck in resizing
  forever.

  Two fixes have been proposed for this issue so far but were reverted
  because they caused other race conditions. See the following two bugs
  for more details.

  https://bugs.launchpad.net/nova/+bug/1321298
  https://bugs.launchpad.net/nova/+bug/1326778

  This timing issue still exists in Juno today in an environment with
  periodic tasks set to run once every 60 seconds and with a
  resize_confirm_window of 1 second.

  Would a possible solution for this be to change the code in
  _poll_unconfirmed_resizes() to ignore any VMs with a task state of
  'resize_finish' instead of setting the corresponding migration record
  to error? This is the task_state it should have right before it is changed
  to None in finish_resize(). Then the next time _poll_unconfirmed_resizes()
  is called, the migration record will still be fetched and the VM will
  be checked again with the updated vm_state/task_state.

  add the following in _poll_unconfirmed_resizes():

      # This removes a race condition
      if task_state == 'resize_finish':
          continue

  prior to:

      elif vm_state != vm_states.RESIZED or task_state is not None:
          reason = (_("In states %(vm_state)s/%(task_state)s, not "
                      "RESIZED/None") %
                    {'vm_state': vm_state,
                     'task_state': task_state})
          _set_migration_to_error(migration, reason,
                                  instance=instance)
          continue

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372049] Re: Launching multiple VMs fails over 63 instances

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372049

Title:
  Launching multiple VMs fails over 63 instances

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in Messaging API for OpenStack:
  Won't Fix

Bug description:
  RHEL-7.0
  Icehouse
  All-In-One

  Booting 63 VMs at once (with "num-instances" attribute) works fine.
  Setup is able to support up to 100 VMs booted in ~50 bulks.

  Booting 100 VMs at once, without Neutron network, so no network for
  the VMs, works fine.

  Booting 64 (and more) VMs boots only 63 VMs. Any of the VMs over 63 are
  booted in ERROR state with details: VirtualInterfaceCreateException: Virtual
  Interface creation failed.
  The failed VM's port is in DOWN state.

  Details:
  After the initial boot command goes through, all CPU usage goes down (no
  neutron/nova CPU consumption) until nova's vif_plugging_timeout is reached,
  at which point 1 (= #num_instances - 63) VM is set to ERROR, and the rest of
  the VMs reach active state.

  Guess: seems like neutron is going into some deadlock until some of
  the load is reduced by vif_plugging_timeout


  Disabling neutron-nova port notifications allows all VMs to be
  created.

  Notes: this is recreated also with multiple Compute nodes, and also
  multiple neutron RPC/API workers

  
  Recreate:
  set nova/neutron quotas to "-1"
  make sure neutron-nova port notifications are ON in both neutron and nova
  conf files
  create a network in your tenant

  boot more than 64 VMs

  nova boot --flavor 42 test_VM --image cirros --num-instances 64


  [yfried@yfried-mobl-rh ~(keystone_demo)]$ nova list
  
+--+--+++-+-+
  | ID   | Name 
| Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 02d7b680-efd8-4291-8d56-78b43c9451cb | 
test_VM-02d7b680-efd8-4291-8d56-78b43c9451cb | ACTIVE | -  | Running
 | demo_private=10.0.0.156 |
  | 05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | 
test_VM-05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | ACTIVE | -  | Running
 | demo_private=10.0.0.150 |
  | 09131f19-5e83-4a40-a900-ffca24a8c775 | 
test_VM-09131f19-5e83-4a40-a900-ffca24a8c775 | ACTIVE | -  | Running
 | demo_private=10.0.0.160 |
  | 0d3be93b-73d3-4995-913c-03a4b80ad37e | 
test_VM-0d3be93b-73d3-4995-913c-03a4b80ad37e | ACTIVE | -  | Running
 | demo_private=10.0.0.164 |
  | 0fcadae4-768c-44a1-9e1c-ac371d1803f9 | 
test_VM-0fcadae4-768c-44a1-9e1c-ac371d1803f9 | ACTIVE | -  | Running
 | demo_private=10.0.0.202 |
  | 11a87db1-5b15-4cad-a749-5d53e2fd8194 | 
test_VM-11a87db1-5b15-4cad-a749-5d53e2fd8194 | ACTIVE | -  | Running
 | demo_private=10.0.0.201 |
  | 147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | 
test_VM-147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | ACTIVE | -  | Running
 | demo_private=10.0.0.147 |
  | 1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | 
test_VM-1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | ACTIVE | -  | Running
 | demo_private=10.0.0.187 |
  | 1d0b7210-f5a0-4827-b338-2014e8f21341 | 
test_VM-1d0b7210-f5a0-4827-b338-2014e8f21341 | ACTIVE | -  | Running
 | demo_private=10.0.0.165 |
  | 1df564f6-5aac-4ac8-8361-bd44c305332b | 
test_VM-1df564f6-5aac-4ac8-8361-bd44c305332b | ACTIVE | -  | Running
 | demo_private=10.0.0.145 |
  | 2031945f-6305-4cdc-939f-5f02171f82b2 | 
test_VM-2031945f-6305-4cdc-939f-5f02171f82b2 | ACTIVE | -  | Running
 | demo_private=10.0.0.149 |
  | 256ff0ed-0e56-47e3-8b69-68006d658ad6 | 
test_VM-256ff0ed-0e56-47e3-8b69-68006d658ad6 | ACTIVE | -  | Running
 | demo_private=10.0.0.177 |
  | 2b7256a8-c04a-42cf-9c19-5836b585c0f5 | 
test_VM-2b7256a8-c04a-42cf-9c19-5836b585c0f5 | ACTIVE | -  | Running
 | demo_private=10.0.0.180 |
  | 2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | 
test_VM-2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | ACTIVE | -  | Running
 | demo_private=10.0.0.191 |
  | 425c170f-a450-440d-b9ba-0408d7c69b25 | 
test_VM-425c170f-a450-440d-b9ba-0408d7c69b25 | ACTIVE | -  | Running
 | demo_private=10.0.0.169 |
  | 461fcce3-96ae-4462-ab65-fb63f3552703 | 
test_VM-461fcce3-96ae-4462-ab65-fb63f3552703 | ACTIVE | -  | Running
 | demo_private=10.0.0.179 |
  | 46a9965d-6511-44a3-ab71-a87767cda759 | 
test_VM-46a9965d-6511-44a3-ab71-a87767cda759 | ACTIVE | -  | Running
 | demo_private=10.0.0.199 |
  | 4c4ce671

[Yahoo-eng-team] [Bug 1375467] Re: db deadlock on _instance_update()

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.

  Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 887, 
in instance_update\
  service)\
  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 
139, in inner\
  return func(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
447, in __exit__\
  self.rollback()\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", 
line 58, in __exit__\
  compat.reraise(exc_type, exc_value, exc_tb)\
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 
444, in __exit__\
  self.commit()\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 443, in _wrap\
  _raise_if_deadlock_error(e, self.bind.dialect.name)\
  File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py", line 427, in _raise_if_deadlock_error\
  raise exception.DBDeadlock(operational_error)\
  DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get 
lock; try restarting transaction\') None None\
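
  A minimal sketch of the suggested change (the _retry_on_deadlock decorator
  already exists in nova/db/sqlalchemy/api.py; the signature below is
  abbreviated and the body is untouched):

  @_retry_on_deadlock
  def _instance_update(context, instance_uuid, values, columns_to_join=None):
      # existing body unchanged -- the decorator re-runs the whole
      # transaction when the database reports a deadlock
      pass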

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380624] Re: VMware: booting from a volume does not configure config driver if necessary

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380624

Title:
  VMware: booting from a volume does not configure config driver if
  necessary

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When booting from a volume, the config drive will not be mounted (if
  configured).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380792] Re: requests to EC2 metadata's '/2009-04-04/meta-data/security-groups' failing

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380792

Title:
  requests to EC2 metadata's '/2009-04-04/meta-data/security-groups'
  failing

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  Just did a distro upgrade to juno rc2. Running an old nova-network
  cloud with multi-host, nova-api running on the compute host. Noticed
  ubuntu instances' cloud-init is failing:

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 177, in 
retry_url
  resp = urllib2.urlopen(req)
File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
  return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 406, in open
  response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
  'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 444, in error
  return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
  result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
  raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)

  Looking at nova-api.log on compute, webob is throwing an exception:

  2014-10-13 13:47:37.468 9183 INFO nova.metadata.wsgi.server 
[req-e133f95b-5f99-41e5-89dc-8e35b41f7cd6 None] 10.0.0.6 "GET 
/2009-04-04/meta-data/security-groups HTTP/1.1" status: 400 len: 265 time: 
0.2675409
  2014-10-13 13:48:41.947 9182 ERROR nova.api.ec2 
[req-47b84883-a48c-4004-914b-c983895a33be None] FaultWrapper: You cannot set 
Response.body to a text object (use Response.text)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 Traceback (most recent call 
last):
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 87, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return 
req.get_response(self.application)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 resp = 
self.call_func(req, *args, **self.kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return self.func(req, 
*args, **kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py", line 99, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 rv = 
req.get_response(self.application)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 resp = 
self.call_func(req, *args, **self.kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 return self.func(req, 
*args, **kwargs)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/nova/api/metadata/handler.py", line 136, in 
__call__
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 req.response.body = resp
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2   File 
"/usr/lib/python2.7/dist-packages/webob/response.py", line 373, in _body__set
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 raise TypeError(msg)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec2 TypeError: You cannot set 
Response.body to a text object (use Response.text)
  2014-10-13 13:48:41.947 9182 TRACE nova.api.ec
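
  The trace ends where the metadata handler assigns a text object to
  Response.body. A hedged sketch of the kind of guard that avoids the webob
  error (illustrative only, not necessarily the committed patch; assumes six
  is already imported):

  if isinstance(resp, six.text_type):
      req.response.text = resp    # webob encodes text using the response charset
  else:
      req.response.body = resp    # bytes can be assigned to body directly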

[Yahoo-eng-team] [Bug 1382318] Re: NoValidHost failure when trying to spawn instance with unicode name

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382318

Title:
  NoValidHost failure when trying to spawn instance with unicode name

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Using the libvirt driver on Juno RC2 code, trying to create an
  instance with unicode name:

  
"\uff21\uff22\uff23\u4e00\u4e01\u4e03\u00c7\u00e0\u00e2\uff71\uff72\uff73\u0414\u0444\u044d\u0628\u062a\u062b\u0905\u0907\u0909\u20ac\u00a5\u5642\u30bd\u5341\u8c79\u7af9\u6577"

  Blows up:

  http://paste.openstack.org/show/121560/

  The libvirt config code shouldn't be casting values to str(); it
  should be using six.text_type.
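
  As an illustration of that suggestion (variable names are made up for the
  example; this is not the actual patch):

  import six

  # str(value) raises UnicodeEncodeError for non-ASCII names on Python 2;
  # six.text_type(value) keeps the unicode intact
  guest_name = six.text_type(display_name)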

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1382318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386687] Re: Overview page: OverflowError when cinder limits are negative

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386687

Title:
  Overview page: OverflowError when cinder limits are negative

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  This is the Cinder twin to bug 1370869 which was resolved for Nova.
  For some yet-to-be-fully-debugged reasons, after deleting multiple
  instances the quota_usages table for Cinder ended up with negative
  values for several of the "in use" limits, causing the Overview Page
  to fail with an error 500:

  OverflowError at /project/
  cannot convert float infinity to integer

  Even if this is (probably?) a rare occurrence, it would make sense to
  also add guards for the cinder limits and make the overview page more
  resilient.
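
  A hedged sketch of the kind of guard meant here, mirroring what was done
  for the Nova limits (the dict name is illustrative):

  # clamp negative "in use" values before computing usage percentages, so the
  # later int() conversion never sees float('inf')
  for key, value in cinder_limits.items():
      if value < 0:
          cinder_limits[key] = 0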

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1386687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386376] Re: Validation parameter_type.url regex doesn't pass validation for IPv6 addresses

2014-12-05 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1386376

Title:
  Validation parameter_type.url regex doesn't pass validation for IPv6
  addresses

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone juno series:
  Fix Released

Bug description:
  Can't create an endpoint with an IPv6 address in the URL. E.g.:

  [root@ ~]# curl -k -i -X POST https://localhost:35357/v3/endpoints -H 
"Accept: application/json" -H "X-Auth-Token: 96d82b1a36a94b439fd91d2a875380be" 
-H "Content-Type: application/json" -d '{"endpoint": {"interface": "admin", 
"name": "metering", "region": "RegionOne", "url": 
"https://[fd55:faaf:e1ab:3ea:9:114:251:134]:8777/v2";, "service_id": 
"57118ebd91094d7d8d609136d185f0dd"}}'; echo
  HTTP/1.1 400 Bad Request
  Date: Mon, 27 Oct 2014 18:42:32 GMT
  Server: Apache/2.2.15 (Red Hat)
  Vary: X-Auth-Token
  Content-Length: 182
  Connection: close
  Content-Type: application/json

  {"error": {"message": "Invalid input for field 'url'. The value is
  'https://[fd55:faaf:e1ab:3ea:9:114:251:134]:8777/v2'.", "code": 400,
  "title": "Bad Request"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1386376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385484] Re: Failed to start nova-compute after evacuate

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385484

Title:
  Failed to start nova-compute after evacuate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  After an instance has been evacuated successfully, restarting the failed
  host to bring it back runs into the error below.


  <179>Sep 23 01:48:35 node-1 nova-compute 2014-09-23 01:48:35.346 13206 ERROR 
nova.openstack.common.threadgroup [-] error removing image
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 483, 
in run_service
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 163, in start
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1018, in 
init_host
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._destroy_evacuated_instances(context)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 712, in 
_destroy_evacuated_instances
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
bdi, destroy_disks)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 962, in 
destroy
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
destroy_disks, migrate_data)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1080, in 
cleanup
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self._cleanup_rbd(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1090, in 
_cleanup_rbd
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
LibvirtDriver._get_rbd_driver().cleanup_volumes(instance)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py", line 238, in 
cleanup_volumes
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
self.rbd.RBD().remove(client.ioctx, volume)
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/rbd.py", line 300, in remove
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
raise make_ex(ret, 'error removing image')
  2014-09-23 01:48:35.346 13206 TRACE nova.openstack.common.threadgroup 
ImageBusy: error removing image

T

[Yahoo-eng-team] [Bug 1393362] Re: linuxbridge agent is using too much memory

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393362

Title:
  linuxbridge agent is using too much memory

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  When vxlan is configured:

  $ ps aux | grep linuxbridge
  vagrant  21051  3.2 28.9 504764 433644 pts/3   S+   09:08   0:02 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  
  A list with over 16 million numbers is created here:

   for segmentation_id in range(1, constants.MAX_VXLAN_VNI + 1):

  
https://github.com/openstack/neutron/blob/b5859998bc662569fee4b34fa079b4c37744de2c/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L526

  and does not seem to be garbage collected for some reason.

  Using xrange instead:

  $ ps -aux | grep linuxb
  vagrant   7397  0.1  0.9 106412 33236 pts/10   S+   09:19   0:05 python 
/usr/local/bin/neutron-linuxbridge-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
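
  For reference, the one-line change implied above (Python 2; on Python 3
  range is already lazy):

  # xrange yields segmentation IDs one at a time instead of materialising a
  # list of MAX_VXLAN_VNI (~16 million) integers up front
  for segmentation_id in xrange(1, constants.MAX_VXLAN_VNI + 1):
      pass  # existing loop body unchanged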

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386236] Re: NUMA scheduling will not attempt to pack an instance onto a host

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1386236

Title:
  NUMA scheduling will not attempt to pack an instance onto a host

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  When creating a flavor which includes "hw:numa_nodes": "1", all
  instances booted with this flavor are always pinned to NUMA node0.
  Multiple instances end up on node0 and no instances are on node1.  Our
  expectation was that instances would be balanced across NUMA nodes.

  To recreate:

  1) Ensure you have a compute node with at least 2 sockets
  2) Create a flavor with vcpus and memory which fits within one socket
  3) Add the flavor key: nova flavor-key  set hw:numa_nodes=1
  4) Boot more than 1 instances
  5) Verify where the vcpus are pinned

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1386236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393435] Re: Subnet delete for IPv6 SLAAC should not require prior port disassoc

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393435

Title:
  Subnet delete for IPv6 SLAAC should not require prior port disassoc

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  With the current Neutron implementation, a subnet cannot be deleted
  until all associated IP addresses have been removed from ports (via
  port update) or the associated ports/VMs have been deleted.   

  In the case of SLAAC-enabled subnets, however, it's not feasible to
  require removal of SLAAC-generated addresses individually from each
  associated port before deleting a subnet because of the multicast
  nature of RA messages. For SLAAC-enabled subnets, the processing of
  subnet delete requests needs to change so that such a subnet can be
  deleted even while ports still exist on it, with each port being
  disassociated from its corresponding SLAAC IP address as part of the
  delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1393435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394030] Re: big switch: optimized floating IP calls missing data

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394030

Title:
  big switch: optimized floating IP calls missing data

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The newest version of the backend controller supports a floating IP
  API instead of propagating floating IP operations through full network
  updates. When testing with this new API, we found that the data is
  missing from the body on the Big Switch neutron plugin side so the
  optimized path doesn't work correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394052] Re: Fix exception handling in _get_host_metrics()

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394052

Title:
  Fix exception handling in _get_host_metrics()

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  In resource_tracker.py, the exception path of _get_host_metrics()
  contains a wrong variable name.

  for monitor in self.monitors:
      try:
          metrics += monitor.get_metrics(nodename=nodename)
      except Exception:
          LOG.warn(_("Cannot get the metrics from %s."), monitors)
          # <-- need to change 'monitors' to 'monitor'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394569] Re: KeyError "extra_specs" in _cold_migrate

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394569

Title:
  KeyError "extra_specs" in _cold_migrate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Trying cold migrate

  stack@o11n200:/root$ nova list
  +--------------------------------------+-------------------+--------+------------+-------------+--------------------+
  | ID                                   | Name              | Status | Task State | Power State | Networks           |
  +--------------------------------------+-------------------+--------+------------+-------------+--------------------+
  | 476324ba-644c-472b-9de7-e4434d8211bc | fedora_instance_1 | ACTIVE | -          | Running     | private=10.140.0.2 |
  | 3dae14aa-9168-4b5c-bb7f-3f38315dc791 | fedora_instance_2 | ACTIVE | -          | Running     | private=10.140.0.6 |
  | e80b5863-f228-49ac-b35f-8d10a0d6e4eb | fedora_instance_3 | ACTIVE | -          | Running     | private=10.140.0.8 |
  | 05b2ea29-ebcd-4a25-9582-6fbd223fba73 | fedora_instance_4 | ACTIVE | -          | Running     | private=10.140.0.7 |
  +--------------------------------------+-------------------+--------+------------+-------------+--------------------+
  stack@o11n200:/root$ nova migrate fedora_instance_3
  ERROR (BadRequest): The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-86801856-94cf-43c1-b21e-cb084abf8aac)

  Screen (n-api)

  2014-11-20 14:01:48.426 ERROR 
nova.api.openstack.compute.contrib.admin_actions 
[req-86801856-94cf-43c1-b21e-cb084abf8aac admin admin] Error in migrate 
u'\'extra_specs\'\nTraceb
  ack (most recent call last):\n\n  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply\nincoming.message))\n\n  
  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n\n  File "/u
  sr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n\n  File "/usr/loca
  l/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, in 
inner\nreturn func(*args, **kwargs)\n\n  File 
"/opt/stack/nova/nova/conductor/manager.py", line 49
  0, in migrate_server\nreservations)\n\n  File 
"/opt/stack/nova/nova/conductor/manager.py", line 550, in _cold_migrate\n
quotas.rollback()\n\n  File "/usr/local/lib/pytho
  n2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n\n  File 
"/opt/stack/nova/nova/conductor/manager.py", line 5
  36, in _cold_migrate\n
request_spec[\'instance_type\'].pop(\'extra_specs\')\n\nKeyError: 
\'extra_specs\'\n'
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions Traceback (most recent call 
last):
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/admin_actions.py", line 
151, in _migrate
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions 
self.compute_api.resize(req.environ['nova.context'], instance)
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/compute/api.py", line 221, in wrapped
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions return func(self, context, 
target, *args, **kwargs)
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/compute/api.py", line 211, in inner
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions return function(self, 
context, instance, *args, **kwargs)
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/compute/api.py", line 238, in _wrapped
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions return fn(self, context, 
instance, *args, **kwargs)
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/compute/api.py", line 192, in inner
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions return f(self, context, 
instance, *args, **kw)
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions   File 
"/opt/stack/nova/nova/compute/api.py", line 2598, in resize
  2014-11-20 14:01:48.426 TRACE 
nova.api.openstack.compute.contrib.admin_actions 
reservations=quotas.reservations or [])
  2014-11-2
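
  The KeyError comes from an unconditional pop on the request spec; a hedged
  one-line guard of the kind that would avoid it (not necessarily the fix
  that was merged):

  # tolerate instance_type dicts that carry no extra_specs key
  request_spec['instance_type'].pop('extra_specs', None)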

[Yahoo-eng-team] [Bug 1394551] Re: Legacy GroupAffinity and GroupAntiAffinity filters are broken

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1394551

Title:
  Legacy GroupAffinity and GroupAntiAffinity filters are broken

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Both GroupAffinity and GroupAntiAffinity filters are broken. The
  scheduler does not respect the filters and schedules the servers
  against the policy.

  Reproduction steps:
  0) Spin up a single node devstack 
  1) Add GroupAntiAffinityFilter to  scheduler_default_filters in nova.conf and 
restart the nova services
  2) Boot multiple server with the following command 
  nova boot --image cirros-0.3.2-x86_64-uec --flavor 42 --hint group=foo 
server-1

  Expected behaviour:
  The second and any further boot should fail with a NoValidHost exception, as
the anti-affinity policy cannot be fulfilled.

  Actual behaviour:
  Any number of servers are booted to the same compute node

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1394551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397796] Re: alembic v. 0.7.1 will support "remove_fk" and others not expected by heal_script

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397796

Title:
  alembic v. 0.7.1 will support "remove_fk" and others not expected by
  heal_script

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  neutron/db/migration/alembic_migrations/heal_script.py seems to have a
  hardcoded notion of what commands Alembic is prepared to pass within
  the execute_alembic_command() call.   When Alembic 0.7.1 is released,
  the tests in neutron.tests.unit.db.test_migration will fail as
  follows:

  Traceback (most recent call last):
File "neutron/tests/unit/db/test_migration.py", line 194, in 
test_models_sync
  self.db_sync(self.get_engine())
File "neutron/tests/unit/db/test_migration.py", line 136, in db_sync
  migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
File "neutron/db/migration/cli.py", line 61, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py",
 line 165, in upgrade
  script.run_env()
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py",
 line 382, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py",
 line 241, in load_python_file
  module = load_module_py(module_id, path)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py",
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File "neutron/db/migration/alembic_migrations/env.py", line 109, in 

  run_migrations_online()
File "neutron/db/migration/alembic_migrations/env.py", line 100, in 
run_migrations_online
  context.run_migrations()
File "", line 7, in run_migrations
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/environment.py",
 line 742, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/migration.py",
 line 305, in run_migrations
  step.migration_fn(**kw)
File 
"/var/jenkins/workspace/openstack_sqla_master/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py",
 line 32, in upgrade
  heal_script.heal()
File "neutron/db/migration/alembic_migrations/heal_script.py", line 81, 
in heal
  execute_alembic_command(el)
File "neutron/db/migration/alembic_migrations/heal_script.py", line 92, 
in execute_alembic_command
  METHODS[command[0]](*command[1:])
  KeyError: 'remove_fk'
  

  I'll send a review for the obvious fix though I have a suspicion
  there's something more deliberate going on here, so consider this just
  a heads up!
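
  The obvious fix referred to above would be along these lines (an
  illustrative sketch; the real patch may differ):

  def execute_alembic_command(command):
      # look the command name up instead of assuming every name is known, so
      # commands introduced by newer alembic releases such as 'remove_fk' do
      # not blow up with a KeyError
      handler = METHODS.get(command[0])
      if handler is not None:
          handler(*command[1:])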

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397549] Re: Attaching volumes with CHAP authentication fails on Hyper-V

2014-12-05 Thread Alan Pevec
** Changed in: nova/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397549

Title:
  Attaching volumes with CHAP authentication fails on Hyper-V

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  Fix Released

Bug description:
  Attaching volumes fails on Hyper-V when the iSCSI target requires CHAP
  authentication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397549/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396932] Re: The hostname regex pattern doesn't match valid hostnames

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396932

Title:
  The hostname regex pattern doesn't match valid hostnames

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The regex used to match hostnames is opinionated, and its opinions
  differ from RFC 1123 and RFC 952.

  The following valid hostnames fail to match:

  6952x 
  openstack-1
  a1a
  x.x1x
  example.org.
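
  For comparison, an RFC 1123-style check that accepts all of the hostnames
  listed above (illustrative only, not the exact pattern Neutron adopted):

  import re

  LABEL = re.compile(r'^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$')

  def is_valid_hostname(name):
      if name.endswith('.'):      # a trailing dot marks a fully qualified name
          name = name[:-1]
      return 0 < len(name) <= 253 and all(
          LABEL.match(label) for label in name.split('.'))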

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386727] Re: Cinder API v2 support instance view

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386727

Title:
  Cinder API v2 support instance view

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  There was a bug report: https://bugs.launchpad.net/bugs/1226944 to fix
  Horizon to communicate with Cinder v2 API.

  But a problem still exists for the instance detail view.

  
  If you're using cinder v2 you will get an error for instance view:

  
https://github.com/openstack/horizon/blob/stable/juno/openstack_dashboard/api/nova.py#L720
  
https://github.com/openstack/horizon/blob/stable/icehouse/openstack_dashboard/api/nova.py#L668

  => should be
  volume.name = volume_data.name
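
  A version-tolerant way to read the attribute would be (illustrative; the
  straight rename above is what the v2 fix needs):

  # cinder v2 exposes 'name' while v1 exposes 'display_name'
  volume.name = (getattr(volume_data, 'name', None) or
                 getattr(volume_data, 'display_name', None))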


  Reproduce:

  * Add new cinder endpoint (API v2) 
  * Login to Horizon 
  * Create instance
  * Show instance details => 500


  [Tue Oct 28 12:26:29 2014] [error] Internal Server Error: 
/project/instances/cd38d21d-0281-40cf-b31b-c39f27f62ea8/
  [Tue Oct 28 12:26:29 2014] [error] Traceback (most recent call last):
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 112, in 
get_response
  [Tue Oct 28 12:26:29 2014] [error] response = wrapped_callback(request, 
*callback_args, **callback_kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 54, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/decorators.py", line 38, in dec
  [Tue Oct 28 12:26:29 2014] [error] return view_func(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 69, in 
view
  [Tue Oct 28 12:26:29 2014] [error] return self.dispatch(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 87, in 
dispatch
  [Tue Oct 28 12:26:29 2014] [error] return handler(request, *args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 71, in get
  [Tue Oct 28 12:26:29 2014] [error] context = 
self.get_context_data(**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 251, in get_context_data
  [Tue Oct 28 12:26:29 2014] [error] context = super(DetailView, 
self).get_context_data(**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 56, in 
get_context_data
  [Tue Oct 28 12:26:29 2014] [error] exceptions.handle(self.request)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/tabs/views.py", line 51, in 
get_context_data
  [Tue Oct 28 12:26:29 2014] [error] tab_group = 
self.get_tabs(self.request, **kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 287, in get_tabs
  [Tue Oct 28 12:26:29 2014] [error] instance = self.get_data()
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/horizon/utils/memoized.py", line 90, in 
wrapped
  [Tue Oct 28 12:26:29 2014] [error] value = cache[key] = func(*args, 
**kwargs)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 273, in get_data
  [Tue Oct 28 12:26:29 2014] [error] redirect=redirect)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/instances/views.py",
 line 261, in get_data
  [Tue Oct 28 12:26:29 2014] [error] instance_id)
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py",
 line 668, in instance_volumes_list
  [Tue Oct 28 12:26:29 2014] [error] volume.name = volume_data.display_name
  [Tue Oct 28 12:26:29 2014] [error]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 271, in 
__getattr__
  [Tue Oct 28 12:26:29 2014] [error] raise AttributeError(k)
  [Tue Oct 28 12:26:29 2014] [error] AttributeError: display_name

To manage notific

[Yahoo-eng-team] [Bug 1391524] Re: alternate navigation broken

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1391524

Title:
  alternate navigation broken

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  horizon_dashboard_nav relies on menus organized in PanelGroups. When
  not using a PanelGroup, e.g. in the Identity dashboard, the last level
  of navigation is broken.

  This is apparently not an issue with Accordion Navigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1391524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2014-12-05 Thread Alan Pevec
** Changed in: cinder/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Committed
Status in Cinder icehouse series:
  In Progress
Status in Cinder juno series:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  New
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron icehouse series:
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service for example nova api service, eventlet library creates a green
  thread from the pool and starts processing the request. Even after the
  response is sent to the caller, the green thread is not returned back
  to the pool until the client socket connection is closed. This way,
  any malicious user can send many API requests to the API controller
  node and determine the wsgi pool size configured for the given service
  and then send those many requests to the service and after receiving
  the response, wait there infinitely doing nothing leading to
  disrupting services for other tenants. Even when service providers
  have enabled rate limiting feature, it is possible to choke the API
  services with a group (many tenants) attack.

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10 in
  order to reproduce this problem in nova/wsgi.py.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check if the client socket connection
          # is released or not on the API controller node
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create a wsgi server.
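
  In eventlet's wsgi server that is a single keyword argument; a minimal
  sketch (the socket and application names are placeholders):

  import eventlet.wsgi

  # refuse persistent connections so the green thread returns to the pool as
  # soon as the response has been sent
  eventlet.wsgi.server(listen_socket, application, keepalive=False)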

  Additional information: By default eventlet passes "Connection: keepalive" if
  keepalive is set to True when a response is sent to the client. But it does
  not have the capability to set the timeout and max parameters, for example:
  Keep-Alive: timeout=10, max=5

  Note: After we have disabled keepalive in all the OpenStack API
  services using the wsgi library, it might impact all existing
  applications built with the assumption that OpenStack API services
  use persistent connections. They might need to modify their
  applications if reconnection logic is not in place, and they might
  also find that performance slows down, since the http connection has
  to be re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1198566] Re: expired image location url cause glance client errors

2014-12-05 Thread Alan Pevec
** Changed in: glance/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1198566

Title:
  expired image location url cause glance client errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance juno series:
  Fix Released

Bug description:
  We have a multi-node Openstack cluster running Folsom 2012.2.3 on
  Ubuntu 12.04. A few days ago we added a new compute node, and found
  that we were unable to launch new instances from a pre-existing Ubuntu
  Server 12.04 LTS image stored in glance. Each spawning attempt would
  deposit a glance client exception (shown below) in the compute node's
  nova-compute.log.

  After quite a lot of investigation, I found that the original
  --location URL (used during "glance image-create") of the Ubuntu
  Server image had gone out of date. This was evidently causing a
  BadStoreUri exception on the glance server during instance spawning,
  resulting in a 500 error being returned to our new compute node's
  glance client. I was able to resolve the problem by re-importing an
  Ubuntu Server 12.04 LTS image from a working mirror.

  Improved error logging would have saved us hours of troubleshooting.

  2013-06-27 21:19:24 ERROR nova.compute.manager 
[req-f8d7c23a-e8ad-4059-bea4-4fc588a6afe0 9d8968d3f17f4697aaf923c14651ce7b 
e5fb3c6db0db4e9c86d0301005e2e5bb] [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] Instance failed to spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] Traceback (most recent call last):
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 756, in _spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] block_device_info)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] temp_level, payload)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] self.gen.next()
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] return f(*args, **kw)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1099, in 
spawn
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] admin_pass=admin_password)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1365, in 
_create_image
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] project_id=instance['project_id'])
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 131, 
in cache
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] *args, **kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 178, 
in create_image
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] prepare_template(target=base, *args, 
**kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 795, in inner
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] retval = f(*args, **kwargs)
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 122, 
in call_if_not_exists
  2013-06-27 21:19:24 5290 TRACE nova.compute.manager [instance: 
1cdd84ad-ba1b-4e5c-8711-8e2c91b48c3d] fetch_func(tar
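
  The kind of logging that would have helped is sketched below (purely
  illustrative, loosely following the prepare_template/fetch_func call path
  visible in the trace):

  try:
      fetch_func(target=base, *args, **kwargs)
  except Exception as exc:
      # say which image failed and why, instead of letting the bare client
      # exception bubble up with no context
      LOG.error(_("Download of image %(image)s failed: %(err)s"),
                {'image': image_id, 'err': exc})
      raise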

[Yahoo-eng-team] [Bug 1334647] Re: Nova api service doesn't handle SIGHUP signal properly

2014-12-05 Thread Alan Pevec
** Changed in: cinder/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334647

Title:
  Nova api service doesn't handle SIGHUP signal properly

Status in Cinder:
  Fix Committed
Status in Cinder juno series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Invalid

Bug description:
  When a SIGHUP signal is sent to the nova-api service, it doesn't complete
  processing of all pending requests before terminating all the
  processes.

  Steps to reproduce:

  1. Run nova-api service as a daemon.
  2. Send SIGHUP signal to nova-api service.
 kill -1 

  After getting the SIGHUP signal, all nova-api processes stop instantly without
completing existing requests, which might cause failures.
  Ideally, after getting the SIGHUP signal, the nova-api process should stop
accepting new requests and wait for existing requests to complete before
terminating and restarting all nova-api processes.
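
  A rough sketch of the desired behaviour, with deliberately simplified names
  (the real handling lives in the oslo service and wsgi server layers):

  import signal

  def _graceful_sighup(signum, frame):
      server.stop()    # stop accepting new connections
      server.wait()    # let in-flight requests run to completion
      server.start()   # then reload configuration and start serving again

  signal.signal(signal.SIGHUP, _graceful_sighup)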

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1334647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388716] Re: User cannot create HA L3 Router

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388716

Title:
  User cannot create HA L3 Router

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Currently, even after modifying policy.json, a standard user cannot
  create an HA L3 router.

  This is caused by neutron attempting to create a new network without a tenant
under the user's context.
  All other tenant-less operations performed during the creation of the router
complete successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387401] Re: token_flush can hang if lots of tokens

2014-12-05 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1387401

Title:
  token_flush can hang if lots of tokens

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone juno series:
  Fix Released

Bug description:
  
  If you've got a system that can generate lots of tokens, token_flush can
hang. For DB2 this happens if you create more than 100 tokens in a second (for
mysql it's 1000 tokens in a second). The query to get the deletion boundary
returns the 100th timestamp, which is the same as the minimum timestamp; it
then deletes rows expiring < that minimum, so nothing matches, nothing is
deleted, and the loop never makes progress because the function keeps
returning the same minimum timestamp.

  This could be fixed easily by using <= rather than < for the deletion
  comparison.
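
  A rough, hypothetical sketch of such a batched purge loop (table and column
  names are illustrative, not keystone's actual schema) shows why the "<=" is
  needed when many rows share the boundary timestamp:

  ```
  BATCH = 100

  def purge_expired_tokens(conn, now):
      """Delete expired tokens in batches of roughly BATCH rows."""
      while True:
          # Timestamp of the BATCH-th oldest expired row (may equal the
          # minimum when >= BATCH tokens were created in the same second).
          row = conn.execute(
              "SELECT expires FROM token WHERE expires < ? "
              "ORDER BY expires LIMIT 1 OFFSET ?", (now, BATCH - 1)).fetchone()
          batch_end = row[0] if row else now
          # Using "<=" removes the rows sharing that timestamp; with a strict
          # "<" nothing matches and the loop never makes progress.
          conn.execute("DELETE FROM token WHERE expires <= ?", (batch_end,))
          conn.commit()
          if batch_end == now:
              break
  ```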

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1387401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396954] Re: Selenium gate job failing with timeout (horizon.test.jasmine.jasmine_tests.ServicesTests)

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396954

Title:
  Selenium gate job failing with timeout
  (horizon.test.jasmine.jasmine_tests.ServicesTests)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  A great number of the selenium unit tests jobs seem to currently be
  failing due to a timeout on
  horizon.test.jasmine.jasmine_tests.ServicesTests:

  014-11-27 10:04:38.098 | 
==
  2014-11-27 10:04:38.098 | ERROR: test 
(horizon.test.jasmine.jasmine_tests.ServicesTests)
  2014-11-27 10:04:38.099 | 
--
  2014-11-27 10:04:38.099 | Traceback (most recent call last):
  2014-11-27 10:04:38.099 |   File 
"/home/jenkins/workspace/gate-horizon-selenium/horizon/test/helpers.py", line 
287, in test
  2014-11-27 10:04:38.099 | self.run_jasmine()
  2014-11-27 10:04:38.099 |   File 
"/home/jenkins/workspace/gate-horizon-selenium/horizon/test/helpers.py", line 
271, in run_jasmine
  2014-11-27 10:04:38.099 | wait.until(jasmine_done)
  2014-11-27 10:04:38.099 |   File 
"/home/jenkins/workspace/gate-horizon-selenium/.tox/venv/local/lib/python2.7/site-packages/selenium/webdriver/support/wait.py",
 line 71, in until
  2014-11-27 10:04:38.100 | raise TimeoutException(message)
  2014-11-27 10:04:38.100 | TimeoutException: Message: 

  This is blocking the Horizon gate.

  See for example:
  
http://logs.openstack.org/20/137420/1/check/gate-horizon-selenium/0707449/console.html
  
http://logs.openstack.org/37/122737/15/check/gate-horizon-selenium/9f83318/console.html
  
http://logs.openstack.org/30/135730/2/check/gate-horizon-selenium/35564c7/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386932] Re: context.elevated: copy.copy causes admin role leak

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386932

Title:
  context.elevated: copy.copy causes admin role leak

Status in Cinder:
  Fix Committed
Status in Manila:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  In neutron/context.py,

  ```
  context = copy.copy(self)
  context.is_admin = True

  if 'admin' not in [x.lower() for x in context.roles]:
  context.roles.append('admin')
  ```

  copy.copy should be replaced by copy.deepcopy such that the list
  reference is not shared between objects. From my cursory search on
  github this also affects cinder, gantt, nova, neutron, and manila.
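
  A minimal standalone illustration of the leak and the suggested fix: with
  copy.copy the roles list is shared, so appending 'admin' to the elevated
  copy also mutates the original request context.

  ```
  import copy

  class Context(object):
      def __init__(self, roles):
          self.roles = roles
          self.is_admin = False

      def elevated(self):
          # copy.copy would share self.roles between both contexts;
          # copy.deepcopy gives the elevated copy its own list.
          elevated = copy.deepcopy(self)
          elevated.is_admin = True
          if 'admin' not in [r.lower() for r in elevated.roles]:
              elevated.roles.append('admin')
          return elevated

  ctx = Context(['member'])
  ctx.elevated()
  # Passes with deepcopy; fails (admin role leaked) if copy.copy is used.
  assert 'admin' not in ctx.roles
  ```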

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1386932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365806] Re: Noopfirewall driver or security group disabled should avoid impose security group related calls to Neutron server

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365806

Title:
  Noopfirewall driver or security group disabled should avoid impose
  security group related calls to Neutron server

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  With the openvswitch neutron agent, during the daemon loop, the
  setup_port_filters phase calls the 'security_group_rules_for_devices'
  RPC method on the Neutron server.

  This operation is very time consuming and a big performance
  bottleneck, as it includes port queries, rule queries and network
  queries, as well as building the huge security groups dict message.
  The message is very large and processing it occupies a lot of CPU on
  the Neutron server. In cases where the number of VMs per host reaches
  700, the Neutron server is kept busy building the message and cannot
  do anything else, which can lead to message queue connection timeouts
  and cause the queue to disconnect its consumers. As a result the
  Neutron server crashes and no longer functions, either for deployments
  or for API calls.

  When the Noopfirewall driver is used or security groups are disabled,
  this operation should be avoided, because the reply message is never
  used by the Noopfirewall driver (its methods are no-ops).

   with self.firewall.defer_apply():
  for device in devices.values():
  LOG.debug(_("Update port filter for %s"), device['device'])
  self.firewall.update_port_filter(device)
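
  A hedged sketch (names are illustrative, not the exact agent code) of the
  short-circuit being requested: when security groups are disabled or the
  noop driver is loaded, skip the expensive RPC entirely, since its reply
  would be thrown away anyway.

  ```
  class NoopFirewallDriver(object):
      """Firewall driver whose filtering methods are all no-ops."""
      def update_port_filter(self, port_rules):
          pass

  def setup_port_filters(firewall, sg_enabled, devices, rpc_get_rules):
      # Hypothetical guard: don't ask the server to build the huge
      # security-group message if nothing will ever use it.
      if not sg_enabled or isinstance(firewall, NoopFirewallDriver):
          return
      rules = rpc_get_rules(list(devices))  # security_group_rules_for_devices
      for device in devices:
          firewall.update_port_filter(rules.get(device, {}))
  ```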

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323729] Re: Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323729

Title:
  Remove Open vSwitch and Linuxbridge plugins from the Neutron tree

Status in devstack - openstack dev environments:
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released

Bug description:
  This bug will track the removal of the Open vSwitch and Linuxbridge
  plugins from the Neutron source tree. These were deprecated in
  Icehouse and will be removed before Juno releases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1323729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367034] Re: NSX: prevents creating multiple networks same vlan but different physical network

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367034

Title:
  NSX: prevents creating multiple networks same vlan but different
  physical network

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  NSX: prevents creating multiple networks same vlan but different
  physical network

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330826] Re: Neutron network:dhcp port is not assigned EUI64 IPv6 address for SLAAC subnet

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330826

Title:
  Neutron network:dhcp port is not assigned EUI64 IPv6 address for SLAAC
  subnet

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  In an IPv6 subnet which has ipv6_address_mode set to slaac or 
dhcpv6-stateless, Neutron should use EUI-64 address assignment for all the 
addresses. Also if a fixed IP address is specified for such a subnet, we should 
report an appropriate error message during port creation or port update 
operation. 
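
  For reference, a small self-contained sketch of how an EUI-64 interface
  identifier is derived from a port's MAC address (flip the universal/local
  bit, insert ff:fe in the middle); the prefix and MAC below are taken from
  the example output further down.

  ```
  def eui64_address(prefix, mac):
      """Build an IPv6 SLAAC address from a /64 prefix and a MAC address."""
      octets = [int(x, 16) for x in mac.split(':')]
      octets[0] ^= 0x02                              # flip universal/local bit
      iid = octets[:3] + [0xff, 0xfe] + octets[3:]   # insert ff:fe
      groups = ['%x' % (iid[i] << 8 | iid[i + 1]) for i in range(0, 8, 2)]
      return '%s:%s' % (prefix.split('/')[0].rstrip(':'), ':'.join(groups))

  # The dhcp port should get an address like this, not the sequentially
  # allocated 2001:1:2:3::1 shown below.
  print(eui64_address('2001:1:2:3::/64', 'fa:16:3e:54:56:13'))
  # -> 2001:1:2:3:f816:3eff:fe54:5613
  ```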
   
  A simple scenario to reproduce this issue...

  #As an admin user, create a provider network and associate an IPv4 and IPv6 
subnet.
  cd ~/devstack
  source openrc admin admin
  neutron net-create N-ProviderNet --provider:physical_network=ipv6-net 
--provider:network_type=flat --shared
  neutron subnet-create --name N-ProviderSubnet N-ProviderNet 20.1.1.0/24  
--gateway 20.1.1.1 --allocation-pool start=20.1.1.100,end=20.1.1.150
  neutron subnet-create --name N-ProviderSubnetIPv6 --ip_version 6 
--ipv6-address-mode slaac --gateway fe80::689d:41ff:fe20:44ca N-ProviderNet 
2001:1:2:3::/64

  As a normal tenant, launch a VM with the provider net-id. You can
  see that the IP address assigned to the dhcp port is "2001:1:2:3::1",
  which is not an EUI-64 based address.

  sridhar@ControllerNode:~/devstack$ neutron port-list -F mac_address -F fixed_ips
  +-------------------+--------------------------------------------------------------------------------------------------------+
  | mac_address       | fixed_ips                                                                                              |
  +-------------------+--------------------------------------------------------------------------------------------------------+
  | fa:16:3e:6a:db:6f | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", "ip_address": "172.24.4.2"}                     |
  | fa:16:3e:54:56:13 | {"subnet_id": "3e3487de-036c-4ab7-ba3f-c6b5db041fb2", "ip_address": "20.1.1.101"}                     |
  |                   | {"subnet_id": "716234df-1f46-434c-be48-d976a86438d6", "ip_address": "2001:1:2:3::1"}                  |
  | fa:16:3e:dd:e9:82 | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", "ip_address": "172.24.4.4"}                     |
  | fa:16:3e:52:1f:43 | {"subnet_id": "fbad7350-83c4-4cad-aa95-fecac232cea1", "ip_address": "10.0.0.101"}                     |
  | fa:16:3e:8a:f0:b6 | {"subnet_id": "61d2661d-22a0-449c-8823-b4d781515f66", "ip_address": "172.24.4.3"}                     |
  | fa:16:3e:02:d2:50 | {"subnet_id": "fbad7350-83c4-4cad-aa95-fecac232cea1", "ip_address": "10.0.0.1"}                       |
  | fa:16:3e:45:5c:00 | {"subnet_id": "3e3487de-036c-4ab7-ba3f-c6b5db041fb2", "ip_address": "20.1.1.102"}                     |
  |                   | {"subnet_id": "716234df-1f46-434c-be48-d976a86438d6", "ip_address": "2001:1:2:3:f816:3eff:fe45:5c00"} |
  +-------------------+--------------------------------------------------------------------------------------------------------+

  sridhar@ControllerNode:~/devstack$ sudo ip netns exec 
qdhcp-93093763-bc7d-4be4-91ad-0ef9ba69273c ifconfig
  tap4828cfbd-fe Link encap:Ethernet  HWaddr fa:16:3e:54:56:13
    inet addr:20.1.1.101  Bcast:20.1.1.255  Mask:255.255.255.0
    inet6 addr: 2001:1:2:3:f816:3eff:fe54:5613/64 Scope:Global
    inet6 addr: 2001:1:2:3::1/64 Scope:Global
    inet6 addr: fe80::f816:3eff:fe54:5613/64 Scope:Link
    UP BROADCAST RUNNING  MTU:1500  Metric:1
    RX packets:337 errors:0 dropped:0 overruns:0 frame:0
    TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:37048 (37.0 KB)  TX bytes:3936 (3.9 KB)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330826/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391256] Re: rootwrap config files contain reference to deleted quantum binaries

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391256

Title:
  rootwrap config files contain reference to deleted quantum binaries

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  dhcp and l3 rootwrap filters contain reference to quantum-ns-metadata-
  proxy binary which has been deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336624] Re: Libvirt driver cannot avoid ovs_hybrid

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336624

Title:
  Libvirt driver cannot avoid ovs_hybrid

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  This bug is related to Nova and Neutron.

  The libvirt driver cannot avoid ovs_hybrid plugging even if
  NoopFirewallDriver is selected, when using LibvirtGenericVIFDriver in
  Nova and ML2+OVS in Neutron.

  Since Nova follows "binding:vif_detail" from Neutron [1], that is
  intended behavior. OVS mech driver in Neutron always return the
  following vif_detail:

vif_details: {
  "port_filter": true,
  "ovs_hybrid_plug": true,
}

  So, Neutron is the right place to configure the avoidance of ovs_hybrid
  plugging. I think we can set ovs_hybrid_plug=False in the OVS mech
  driver if security_group is disabled.

  [1] https://review.openstack.org/#/c/83190/
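
  A hedged sketch of that suggestion (configuration handling and names are
  illustrative): the OVS mechanism driver could key the vif_details it
  returns off whether the iptables-based security group firewall is actually
  in use, instead of always advertising hybrid plugging.

  ```
  def build_vif_details(sg_enabled):
      """vif_details a mech driver could return for bound ports (sketch)."""
      hybrid = bool(sg_enabled)   # no hybrid bridge needed without SG filtering
      return {
          'port_filter': hybrid,
          'ovs_hybrid_plug': hybrid,
      }

  # With NoopFirewallDriver / security groups disabled, Nova's
  # LibvirtGenericVIFDriver would then plug the VIF directly into OVS.
  print(build_vif_details(sg_enabled=False))
  # -> ovs_hybrid_plug and port_filter are both False
  ```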

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373851] Re: security groups db queries load excessive data

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373851

Title:
  security groups db queries load excessive data

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The security groups db queries are loading extra data from the ports
  table that is unnecessarily hindering performance.
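
  As an illustration of the kind of narrowing involved (the model and filter
  are hypothetical, not neutron's actual schema), SQLAlchemy can be told to
  fetch only the columns the RPC reply needs instead of whole Port rows:

  ```
  def sg_member_port_details(session, Port, port_ids):
      # Select just the needed columns; no full ORM objects and no
      # eager-loaded relationships dragged along from the ports table.
      query = (session.query(Port.id, Port.mac_address, Port.network_id)
               .filter(Port.id.in_(port_ids)))
      return [{'id': pid, 'mac_address': mac, 'network_id': net}
              for pid, mac, net in query]
  ```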

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373547] Re: Cisco N1kv: Remove unnecessary REST call to delete VM network on controller

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373547

Title:
  Cisco N1kv: Remove unnecessary REST call to delete VM network on
  controller

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Remove the rest call to delete vm network on the controller and ensure
  that database remains consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374556] Re: SQL lookups to get port details should be batched

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374556

Title:
  SQL lookups to get port details should be batched

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The RPC handler for looking up security group details for ports does
  it one port at a time, which means an individual SQL query with a join
  for every port on a compute node, which could be 100+ in a heavily
  subscribed environment.
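
  A hedged sketch of the batching being asked for (model and session names
  are illustrative): one query with an IN clause covering every requested
  port, instead of one query with joins per port.

  ```
  def get_ports_security_details(session, Port, port_ids):
      """Fetch details for many ports in a single round trip."""
      if not port_ids:
          return {}
      rows = (session.query(Port)
              .filter(Port.id.in_(port_ids))   # one query for all ports
              .all())
      return dict((row.id, row) for row in rows)
  ```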

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369239] Re: OpenDaylight MD should not ignore 400 errors

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369239

Title:
  OpenDaylight MD should not ignore 400 errors

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  400 (Bad Request) errors are ignored in every create or update
  operation to OpenDaylight. Referring to the comment, it protects
  against conflicts with already existing resources.

  In case of update operations, it seems irrelevant and masks "real" bad
  requests. It could also be removed in create operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367500] Re: IPv6 network doesn't create namespace, dhcp port

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367500

Title:
  IPv6 network doesn't create namespace, dhcp port

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  IPv6 networking has been changed by recent commits. 
  Create a network and an IPv6 subnet with default settings, then create a port 
in the network:
  no namespace is created and no DHCP port is created in the subnet, 
although the port does get an IP address from the DHCP server.
  IPv4 networking, however, continues to work as required.

  $ neutron net-create netto
  Created a new network:
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | id  | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab |
  | name| netto|
  | router:external | False|
  | shared  | False|
  | status  | ACTIVE   |
  | subnets |  |
  | tenant_id   | 5664b23312504826818c9cb130a9a02f |
  +-+--+

  $ neutron subnet-create --ip-version 6 netto 2011::/64
  Created a new subnet:
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | {"start": "2011::2", "end": "2011:::::fffe"} |
  | cidr              | 2011::/64                                    |
  | dns_nameservers   |                                              |
  | enable_dhcp       | True                                         |
  | gateway_ip        | 2011::1                                      |
  | host_routes       |                                              |
  | id                | e10300d1-194f-4712-b2fc-2107ac3fe909         |
  | ip_version        | 6                                            |
  | ipv6_address_mode |                                              |
  | ipv6_ra_mode      |                                              |
  | name              |                                              |
  | network_id        | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab         |
  | tenant_id         | 5664b23312504826818c9cb130a9a02f             |
  +-------------------+----------------------------------------------+

  $ neutron port-create netto
  Created a new port:
  +-----------------------+---------------------------------------------------------------------------------+
  | Field                 | Value                                                                           |
  +-----------------------+---------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                            |
  | allowed_address_pairs |                                                                                 |
  | binding:vnic_type     | normal                                                                          |
  | device_id             |                                                                                 |
  | device_owner          |                                                                                 |
  | fixed_ips             | {"subnet_id": "e10300d1-194f-4712-b2fc-2107ac3fe909", "ip_address": "2011::2"} |
  | id                    | 175eaa91-441e-48df-9267-bc7fc808dce8                                            |
  | mac_address           | fa:16:3e:26:51:79                                                               |
  | name                  |                                                                                 |
  | network_id            | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab                                            |
  | security_groups       | c7756502-5eda-4f43-9977-21cfb73b4d4e                                            |
  | status                | DOWN                                                                            |
  | tenant_id             | 5664b23312504826818c9cb130a9a02f                                                |
  +-----------------------+---------------------------------------------------------------------------------+

  $ neutron port-list | gre

[Yahoo-eng-team] [Bug 1375698] Re: mlnx agent throws exception - "unbound method sleep()"

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1375698

Title:
  mlnx agent throws exception - "unbound method sleep()"

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Traceback:
  2014-09-30 13:28:53.603 529 DEBUG neutron.plugins.mlnx.agent.utils 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] get_attached_vnics 
get_attached_vnics /opt/stack/neutron/neutron/plugins/mlnx/agent/utils.py:81
  2014-09-30 13:28:56.608 529 DEBUG neutron.plugins.mlnx.common.comm_utils 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] Request timeout - call again 
after 3 seconds decorated 
/opt/stack/neutron/neutron/plugins/mlnx/common/comm_utils.py:58
  2014-09-30 13:28:56.608 529 CRITICAL neutron 
[req-a040c7ec-9a06-4f85-9420-60d6c4ca376e None] TypeError: unbound method 
sleep() must be called with RetryDecorator instance as first argument (got int 
instance instead)
  2014-09-30 13:28:56.608 529 TRACE neutron Traceback (most recent call last):
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 426, in 
  2014-09-30 13:28:56.608 529 TRACE neutron main()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 421, in main
  2014-09-30 13:28:56.608 529 TRACE neutron agent.daemon_loop()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 373, in daemon_loop
  2014-09-30 13:28:56.608 529 TRACE neutron port_info = 
self.scan_ports(previous=port_info, sync=sync)
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 247, in scan_ports
  2014-09-30 13:28:56.608 529 TRACE neutron cur_ports = 
self.eswitch.get_vnics_mac()
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"agent/eswitch_neutron_agent.py", line 63, in get_vnics_mac
  2014-09-30 13:28:56.608 529 TRACE neutron return 
set(self.utils.get_attached_vnics().keys())
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/mlnx/agent/utils.py", line 83, in 
get_attached_vnics
  2014-09-30 13:28:56.608 529 TRACE neutron vnics = self.send_msg(msg)
  2014-09-30 13:28:56.608 529 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/mlnx/common/comm_utils.py", line 59, in 
decorated
  2014-09-30 13:28:56.608 529 TRACE neutron 
RetryDecorator.sleep_fn(sleep_interval)
  2014-09-30 13:28:56.608 529 TRACE neutron TypeError: unbound method sleep() 
must be called with RetryDecorator instance as first argument (got int instance 
instead)
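
  The traceback suggests sleep_fn is stored as a plain function attribute on
  the class, which Python 2 turns into an unbound method. A minimal
  reproduction and the usual fix (wrapping it in staticmethod) is sketched
  below; this is an illustration, not the exact mlnx code.

  ```
  import time

  class RetryDecorator(object):
      # Buggy in Python 2: "sleep_fn = time.sleep" becomes an unbound method,
      # so RetryDecorator.sleep_fn(3) raises the TypeError shown above.
      # Fixed: staticmethod keeps it a plain callable on the class.
      sleep_fn = staticmethod(time.sleep)

  RetryDecorator.sleep_fn(0.01)   # works with the staticmethod wrapper
  ```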

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1375698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372438] Re: Race condition in l2pop drops tunnels

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372438

Title:
  Race condition in l2pop drops tunnels

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  The issue was originally raised by a Red Hat performance engineer (Joe
  Talerico)  here: https://bugzilla.redhat.com/show_bug.cgi?id=1136969
  (see starting from comment 4).

  Joe created a Fedora instance in his OS cloud based on RHEL7-OSP5
  (Icehouse), where he installed Rally client to run benchmarks against
  that cloud itself. He assigned a floating IP to that instance to be
  able to access API endpoints from inside the Rally machine. Then he
  ran a scenario which basically started up 100+ new instances in
  parallel, tried to access each of them via ssh, and once it succeeded,
  clean up each created instance (with its ports). Once in a while, his
  Rally instance lost connection to outside world. This was because
  VXLAN tunnel to the compute node hosting the Rally machine was dropped
  on networker node where DHCP, L3, Metadata agents were running. Once
  we restarted OVS agent, the tunnel was recreated properly.

  The scenario failed only if L2POP mechanism was enabled.

  I've looked thru the OVS agent logs and found out that the tunnel was
  dropped due to a legitimate fdb entry removal request coming from
  neutron-server side. So the fault is probably on neutron-server side,
  in l2pop mechanism driver.

  I've then looked thru the patches in Juno to see whether there is
  something related to the issue already merged, and found the patch
  that gets rid of _precommit step when cleaning up fdb entries. Once
  we've applied the patch on the neutron-server node, we stopped to
  experience those connectivity failures.

  After discussion with Vivekanandan Narasimhan, we came up with the
  following race condition that may result in tunnels being dropped
  while legitimate resources are still using them:

  (quoting Vivek below)

  '''
  - - port1 delete request comes in;
  - - port1 delete request acquires lock
  - - port2 create/update request comes in;
  - - port2 create/update waits on due to unavailability of lock
  - - precommit phase for port1 determines that the port is the last one, so we 
should drop the FLOODING_ENTRY;
  - - port1 delete applied to db;
  - - port1 transaction releases the lock
  - - port2 create/update acquires the lock
  - - precommit phase for port2 determines that the port is the first one, so 
request FLOODING_ENTRY + MAC-specific flow creation;
  - - port2 create/update request applied to db;
  - - port2 transaction releases the lock

  Now at this point postcommit of either of them could happen, because 
code-pieces operate outside the
  locked zone.  

  If it happens, this way, tunnel would retain:

  - - postcommit phase for port1 requests FLOODING_ENTRY deletion due to port1 
deletion
  - - postcommit phase requests FLOODING_ENTRY + MAC-specific flow creation for 
port2;

  If it happens the below way, tunnel would break:
  - - postcommit phase for create por2 requests FLOODING_ENTRY + MAC-specific 
flow 
  - - postcommit phase for delete port1 requests FLOODING_ENTRY deletion
  '''

  We considered the patch to get rid of precommit for backport to
  Icehouse [1] that seems to eliminate the race, but we're concerned
  that we reverted that to previous behaviour in Juno as part of DVR
  work [2], though we haven't done any testing to check whether the
  issue is present in Juno (though brief analysis of the code shows that
  it should fail there too).

  Ideally, the fix for Juno should be easily backportable because the
  issue is currently present in Icehouse, and we would like to have the
  same fix for both branches (Icehouse and Juno) instead of backporting
  patch [1] to Icehouse and implementing another patch for Juno.

  [1]: https://review.openstack.org/#/c/95165/
  [2]: https://review.openstack.org/#/c/102398/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377241] Re: Lock wait timeout on delete port for DVR

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377241

Title:
  Lock wait timeout on delete port for DVR

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  We run a script to configure networks, VMs and routers, and to assign a 
floating IP to the VM.
  After everything is created, we run a script to clean up all ports, networks, 
routers, gateways and floating IPs.

  The issue is seen when there is a back to back call to router-
  interface-delete and router-gateway-clear.

  There are three calls to router-interface-delete and the fourth call
  to router-gateway-clear.

  At this time a DB lock is held for the port delete, and when the
  other delete comes in, it times out.

  
  2014-10-03 09:28:39.587 DEBUG neutron.openstack.common.lockutils 
[req-a89ee05c-d8b2-438a-a707-699f450d3c41 admin 
d3bb4e1791814b809672385bc8252688] Got semaphore "db-access" from (pid=25888) 
lock /opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-03 09:29:30.777 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54899)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54900)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54901)
  2014-10-03 09:29:30.778 INFO neutron.wsgi [-] (25888) accepted 
('192.168.15.144', 54902)
  2014-10-03 09:29:30.780 ERROR neutron.api.v2.resource 
[req-a89ee05c-d8b2-438a-a707-699f450d3c41 admin 
d3bb4e1791814b809672385bc8252688] remove_router_interface failed
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 200, in _handle_action
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 247, in 
remove_router_interface
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource context.elevated(), 
router, subnet_id=subnet_id)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 557, in 
delete_csnat_router_interface_ports
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource l3_port_check=False)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 983, in delete_port
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource port_db, binding = 
db.get_locked_port_and_binding(session, id)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/db.py", line 135, in 
get_locked_port_and_binding
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource 
with_lockmode('update').
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2310, in one
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource ret = list(self)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2368, in 
_execute_and_instances
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource params)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource context)
  2014-10-03 09:29:30.780 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 125, in _handle_dbapi_exception
  2014-10-03 09:29:30.780 TRACE n

[Yahoo-eng-team] [Bug 1377350] Re: BSN: inconsistency when backend missing during delete

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377350

Title:
  BSN: inconsistency when backend missing during delete

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  When objects are deleted in ML2 and a driver fails in post-commit,
  there is no retry mechanism to delete that object via the driver at a
  later time.[1] This means that objects deleted while there is no
  connectivity to the backend controller will never be deleted on the
  backend until another event causes a synchronization.


  1.
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1039

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1377350/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379609] Re: Cisco N1kv: Fix add-tenant in update network profile

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379609

Title:
  Cisco N1kv: Fix add-tenant in update network profile

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  During cisco-network-profile-update, if a tenant id is being added to the 
network profile, the current behavior is to remove all the tenant-network 
profile bindings and add the new list of tenants. This works well with horizon 
since all the existing tenant UUIDs, along with the new tenant id, are passed 
during update network profile.
  If you try to update a network profile and add a new tenant to the network 
profile via the CLI, this will replace the existing tenant-network profile 
bindings and add only the new one.

  Expected behavior is to not delete the existing tenant bindings and
  instead only add new tenants to the list.
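
  A small sketch of the expected merge semantics (function and variable names
  are illustrative): work out which tenants are actually new and only create
  bindings for those, leaving the existing bindings untouched.

  ```
  def tenants_to_add(existing_bindings, requested_tenants):
      """Return only the tenant ids that don't already have a binding."""
      existing = set(existing_bindings)
      return [t for t in requested_tenants if t not in existing]

  # Profile already bound to tenant-a and tenant-b; the CLI passes only
  # tenant-c, yet a and b keep their bindings:
  print(tenants_to_add(['tenant-a', 'tenant-b'], ['tenant-c']))
  # -> ['tenant-c']
  ```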

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381768] Re: AttributeError: 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID' with python-ldap 2.4

2014-12-05 Thread Alan Pevec
** Changed in: keystone/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1381768

Title:
  AttributeError: 'module' object has no attribute
  'LDAP_CONTROL_PAGE_OID' with python-ldap 2.4

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone juno series:
  Fix Released
Status in keystone package in Ubuntu:
  Confirmed

Bug description:
  When using LDAP backend with keystone Juno RC2, the following error
  occurs:

  AttributeError: 'module' object has no attribute
  'LDAP_CONTROL_PAGE_OID'

  It looks like that attribute was removed in python-ldap 2.4 which
  breaks Ubuntu Trusty and Utopic and probably RHEL7.

  
  More details on this change here in the library are here:

  https://mail.python.org/pipermail//python-ldap/2012q1/003105.html
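
  For illustration, the usual compatibility shim branches on the installed
  python-ldap version: 2.3 exposed ldap.LDAP_CONTROL_PAGE_OID and a
  positional SimplePagedResultsControl constructor, while 2.4 dropped the
  constant and switched to keyword arguments. This is a hedged sketch, not
  the exact keystone patch.

  ```
  import ldap
  from ldap import controls

  def paged_results_control(page_size):
      """Build a paged-results control for python-ldap 2.3 or 2.4."""
      if hasattr(ldap, 'LDAP_CONTROL_PAGE_OID'):
          # python-ldap < 2.4
          return controls.SimplePagedResultsControl(
              ldap.LDAP_CONTROL_PAGE_OID, True, (page_size, ''))
      # python-ldap >= 2.4: the OID constant is gone and the control is
      # constructed with keyword arguments instead.
      return controls.SimplePagedResultsControl(
          True, size=page_size, cookie='')
  ```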

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379510] Re: Big Switch: sync is not retried after failure

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379510

Title:
  Big Switch: sync is not retried after failure

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  If the topology sync fails, no other sync attempts will be made
  because the server manager clears the hash from the DB before the sync
  operation. It shouldn't do this because the backend ignores the hash
  on a sync anyway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378450] Re: [OSSA 2014-039] Maliciously crafted dns_nameservers will crash neutron (CVE-2014-7821)

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378450

Title:
  [OSSA 2014-039] Maliciously crafted dns_nameservers will crash neutron
  (CVE-2014-7821)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  The following request body will crash neutron nodes.

  {"subnet": {"network_id": "2aeb163a-a415-4568-bb9e-9c0ac93d54e4", 
"ip_version": 4, 
  "cidr": "192.168.1.3/16", 
  "dns_nameservers": 
[""]}}

  Even strace stops logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381379] Re: Using postgresql and creating a security group rule with protocol value as integer getting DBAPIError exception

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381379

Title:
  Using postgresql  and creating a security group rule with protocol
  value as integer getting  DBAPIError exception

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Using PostgreSQL, creating a security group rule with the protocol value
  given as an integer fails with a DBAPIError exception wrapped from
  "operator does not exist".

  Running the Jenkins job check-tempest-dsvm-ironic-pxe_ssh-postgres-nv
  fails.

  Code :
  curl -i -X POST http://$Server_ip:9696/v2.0/security-group-rules.json -H 
"User-Agent: python-neutronclient" -H "X-Auth-Token: $TOKENID" -d 
'{"security_group_rule": {"ethertype": "IPv4", "direction": "ingress", 
"protocol": "17", "security_group_id": "$Security_goup_id"}}'

  
  Error in the log:
  2014-10-15 06:24:22.756 23647 DEBUG neutron.policy 
[req-4e3855ad-ef66-4a63-b69d-7351d4a1a4b3 None] Enforcing rules: 
['create_security_group_rule'] _build_match_rule 
/opt/stack/new/neutron/neutron/policy.py:221
  2014-10-15 06:24:22.774 23647 ERROR oslo.db.sqlalchemy.exc_filters 
[req-4e3855ad-ef66-4a63-b69d-7351d4a1a4b3 ] DBAPIError exception wrapped from 
(ProgrammingError) operator does not exist: character varying = integer
  LINE 3: ...on IN ('ingress') AND securitygrouprules.protocol IN (17) AN...
   ^
  HINT:  No operator matches the given name and argument type(s). You might 
need to add explicit type casts.
   'SELECT securitygrouprules.tenant_id AS securitygrouprules_tenant_id, 
securitygrouprules.id AS securitygrouprules_id, 
securitygrouprules.security_group_id AS securitygrouprules_security_group_id, 
securitygrouprules.remote_group_id AS securitygrouprules_remote_group_id, 
securitygrouprules.direction AS securitygrouprules_direction, 
securitygrouprules.ethertype AS securitygrouprules_ethertype, 
securitygrouprules.protocol AS securitygrouprules_protocol, 
securitygrouprules.port_range_min AS securitygrouprules_port_range_min, 
securitygrouprules.port_range_max AS securitygrouprules_port_range_max, 
securitygrouprules.remote_ip_prefix AS securitygrouprules_remote_ip_prefix 
\nFROM securitygrouprules \nWHERE securitygrouprules.tenant_id = 
%(tenant_id_1)s AND securitygrouprules.tenant_id IN (%(tenant_id_2)s) AND 
securitygrouprules.direction IN (%(direction_1)s) AND 
securitygrouprules.protocol IN (%(protocol_1)s) AND 
securitygrouprules.ethertype IN (%(ethertype_1)s) AND securitygrouprules.secu
 rity_group_id IN (%(security_group_id_1)s)' {'direction_1': u'ingress', 
'tenant_id_2': u'a0ec4b20678a472ebbab28526cb53fef', 'ethertype_1': 'IPv4', 
'protocol_1': 17, 'tenant_id_1': u'a0ec4b20678a472ebbab28526cb53fef', 
'security_group_id_1': u'e9936f7a-00dd-4afe-9871-f1ab21fe7ea4'}
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 59, in _handle_dbapi_exception
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters e, 
statement, parameters, cursor, context)
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in 
_handle_dbapi_exception
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
exc_info
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in 
raise_from_cause
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
reraise(type(exception), exception, tb=exc_tb)
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in 
_execute_context
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
context)
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 324, in 
do_execute
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters 
ProgrammingError: (ProgrammingError) operator does not exist: character varying 
= integer
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters LINE 3: 
...on IN ('ingress') AND securitygrouprules.protocol IN (17) AN...
  2014-10-15 06:24:22.774 23647 TRACE oslo.db.sqlalchemy.exc_filters
  ^
  2014-10-15 06:24:22.7

[Yahoo-eng-team] [Bug 1378508] Re: KeyError in DHPC RPC when port_update happens.- this is seen when a delete_port event occurs

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378508

Title:
  KeyError in DHPC RPC when port_update happens.- this is seen when a
  delete_port event occurs

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  When a delete_port event occurs, we occasionally see a TRACE in the
  dhcp_rpc.py file.

  2014-10-07 12:31:39.803 DEBUG neutron.api.rpc.handlers.dhcp_rpc 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Update dhcp port {u'port': 
{u'network_id': u'12548499-8387-480e-b29c-625dbf320ecf', u'fixed_ips': 
[{u'subnet_id': u'88031ffe-9149-4e96-a022-65468f6bcc0e'}]}} from ubuntu. from 
(pid=4414) update_dhcp_port 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py:290
  2014-10-07 12:31:39.803 DEBUG neutron.openstack.common.lockutils 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Got semaphore "db-access" 
from (pid=4414) lock 
/opt/stack/neutron/neutron/openstack/common/lockutils.py:168
  2014-10-07 12:31:39.832 ERROR oslo.messaging.rpc.dispatcher 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Exception during message 
handling: 'network_id'
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 294, in 
update_dhcp_port
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 'update_port')
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 81, in 
_port_action
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher net_id = 
port['port']['network_id']
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher KeyError: 
'network_id'
  2014-10-07 12:31:39.832 TRACE oslo.messaging.rpc.dispatcher 
  2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] Returning exception 
'network_id' to caller
  2014-10-07 12:31:39.833 ERROR oslo.messaging._drivers.common 
[req-803de1d2-a128-41f1-8686-2bec72c61f5a None None] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 294, in 
update_dhcp_port\n\'update_port\')\n', '  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 81, in 
_port_action\nnet_id = port[\'port\'][\'network_id\']\n', "KeyError: 
'network_id'\n"]
  2014-10-07 12:31:39.839 DEBUG neutron.context 
[req-7d40234b-6e11-4645-9bab-8f9958df5064 None None] Arguments dropped when 
creating context: {u'project_name': None, u'tenant': None} from (pid=4414) 
__init__ /opt/stack/neutron/neutron/context.py:83

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381886] Re: nova list show incorrect when neutron re-assign floatingip

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381886

Title:
  nova list show incorrect when neutron re-assign floatingip

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Boot several instances and create a floating IP. When the floating IP is 
re-assigned across multiple instances, "nova list" shows an incorrect result.
  >>>neutron floatingip-associate floatingip-id instance0-pord-id
  >>>neutron floatingip-associate floatingip-id instance1-port-id
  >>>neutron floatingip-associate floatingip-id instance2-port-id
  >>>nova list
  (nova list result will be like:)
  --
  instance0  fixedip0,  floatingip
  instance1  fixedip1,  floatingip
  instance2  fixedip2,  floatingip

  instance0, instance1 and instance2 all appear to have the floating IP, but 
running "neutron floatingip-list" shows it is only bound to instance2.
  Also, after some time (half a minute or longer), "nova list" does show the 
correct result.
  ---
  instance0  fixedip0
  instance1  fixedip1
  instance2  fixedip2,  floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382023] Re: Horizon fails with Django-1.7

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382023

Title:
  Horizon fails with Django-1.7

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  as reported here: http://lists.alioth.debian.org/pipermail/openstack-
  devel/2014-October/007488.html

  or with this backtrace, horizon Juno and Icehouse in Debian fail with
  this backtrace:

  http://paste.fedoraproject.org/142396/13459234/

  
  [Thu Oct 16 11:33:45.901644 2014] [:error] [pid 1581] [remote ::1:27029] mod_wsgi (pid=1581): Exception occurred processing WSGI script '/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
  [Thu Oct 16 11:33:45.901690 2014] [:error] [pid 1581] [remote ::1:27029] Traceback (most recent call last):
  [Thu Oct 16 11:33:45.901707 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 168, in __call__
  [Thu Oct 16 11:33:45.901793 2014] [:error] [pid 1581] [remote ::1:27029] self.load_middleware()
  [Thu Oct 16 11:33:45.901807 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 46, in load_middleware
  [Thu Oct 16 11:33:45.901879 2014] [:error] [pid 1581] [remote ::1:27029] mw_instance = mw_class()
  [Thu Oct 16 11:33:45.901889 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/middleware/locale.py", line 23, in __init__
  [Thu Oct 16 11:33:45.901929 2014] [:error] [pid 1581] [remote ::1:27029] for url_pattern in get_resolver(None).url_patterns:
  [Thu Oct 16 11:33:45.901939 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 367, in url_patterns
  [Thu Oct 16 11:33:45.902065 2014] [:error] [pid 1581] [remote ::1:27029] patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  [Thu Oct 16 11:33:45.902076 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/core/urlresolvers.py", line 361, in urlconf_module
  [Thu Oct 16 11:33:45.902091 2014] [:error] [pid 1581] [remote ::1:27029] self._urlconf_module = import_module(self.urlconf_name)
  [Thu Oct 16 11:33:45.902099 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
  [Thu Oct 16 11:33:45.902134 2014] [:error] [pid 1581] [remote ::1:27029] __import__(name)
  [Thu Oct 16 11:33:45.902146 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py", line 36, in <module>
  [Thu Oct 16 11:33:45.902182 2014] [:error] [pid 1581] [remote ::1:27029] url(r'', include(horizon.urls))
  [Thu Oct 16 11:33:45.902191 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 29, in include
  [Thu Oct 16 11:33:45.902231 2014] [:error] [pid 1581] [remote ::1:27029] patterns = getattr(urlconf_module, 'urlpatterns', urlconf_module)
  [Thu Oct 16 11:33:45.902242 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 224, in inner
  [Thu Oct 16 11:33:45.902336 2014] [:error] [pid 1581] [remote ::1:27029] self._setup()
  [Thu Oct 16 11:33:45.902346 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/django/utils/functional.py", line 357, in _setup
  [Thu Oct 16 11:33:45.902359 2014] [:error] [pid 1581] [remote ::1:27029] self._wrapped = self._setupfunc()
  [Thu Oct 16 11:33:45.902367 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/horizon/base.py", line 778, in url_patterns
  [Thu Oct 16 11:33:45.902525 2014] [:error] [pid 1581] [remote ::1:27029] return self._urls()[0]
  [Thu Oct 16 11:33:45.902537 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/horizon/base.py", line 812, in _urls
  [Thu Oct 16 11:33:45.902552 2014] [:error] [pid 1581] [remote ::1:27029] url(r'^%s/' % dash.slug, include(dash._decorated_urls)))
  [Thu Oct 16 11:33:45.902561 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/horizon/base.py", line 487, in _decorated_urls
  [Thu Oct 16 11:33:45.902573 2014] [:error] [pid 1581] [remote ::1:27029] url(r'^%s/' % url_slug, include(panel._decorated_urls)))
  [Thu Oct 16 11:33:45.902581 2014] [:error] [pid 1581] [remote ::1:27029]   File "/usr/lib/python2.7/site-packages/horizon/base.py", line 2

[Yahoo-eng-team] [Bug 1382562] Re: security groups remote_group fails with CIDR in address pairs

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382562

Title:
  security groups remote_group fails with CIDR in address pairs

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Add a CIDR to the allowed address pairs of a host. RPC calls from the
  agents will now run into this issue when retrieving the security group
  members' IPs. I haven't confirmed it, because I came across this while
  working on other code, but I think it may stop every member of the
  security groups referencing that group from getting their rules over the
  RPC channel.

  
    File "neutron/api/rpc/handlers/securitygroups_rpc.py", line 75, in security_group_info_for_devices
      return self.plugin.security_group_info_for_ports(context, ports)
    File "neutron/db/securitygroups_rpc_base.py", line 202, in security_group_info_for_ports
      return self._get_security_group_member_ips(context, sg_info)
    File "neutron/db/securitygroups_rpc_base.py", line 209, in _get_security_group_member_ips
      ethertype = 'IPv%d' % netaddr.IPAddress(ip).version
    File "/home/administrator/code/neutron/.tox/py27/local/lib/python2.7/site-packages/netaddr/ip/__init__.py", line 281, in __init__
      % self.__class__.__name__)
  ValueError: IPAddress() does not support netmasks or subnet prefixes! See documentation for details.
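
  As a minimal sketch of the failure and of one possible direction for a fix
  (illustrative only, not the committed Neutron patch), assuming the member
  "IP" taken from an allowed address pair can be a CIDR string:

    import netaddr

    def ethertype_for(ip):
        # netaddr.IPAddress() rejects prefixes, which is exactly the
        # ValueError shown above; IPNetwork() accepts both a plain address
        # and a CIDR, so the version lookup also works for
        # allowed-address-pair entries.
        return 'IPv%d' % netaddr.IPNetwork(ip).version

    print(ethertype_for('192.0.2.5'))      # IPv4
    print(ethertype_for('10.0.0.0/24'))    # IPv4 (IPAddress() would raise here)
    print(ethertype_for('2001:db8::/64'))  # IPv6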

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382825] Re: jshint networktopolgy missing semicolon

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1382825

Title:
  jshint networktopolgy missing semicolon

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  Fix the following jshint warning:

  Running jshint ...
  2014-10-18 15:45:15.907 | horizon/static/horizon/js/horizon.networktopology.js: line 552, col 70, Missing semicolon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1382825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384487] Re: big switch server manager uses SSLv3

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384487

Title:
  big switch server manager uses SSLv3

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  Communication with the backend is done using the default protocol of
  ssl.wrap_socket, which is SSLv3. That protocol is vulnerable to the
  POODLE attack.
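
  As an illustration only (this is not the Big Switch plugin's actual code),
  the usual mitigation on the Python 2.x releases of that era is to pass an
  explicit ssl_version to ssl.wrap_socket instead of relying on its default,
  so that an SSLv3 handshake is never negotiated; the controller host name
  below is made up:

    import socket
    import ssl

    sock = socket.create_connection(('bsn-controller.example.net', 443))
    # Forcing TLSv1 avoids the SSLv3 handshake that POODLE exploits.
    wrapped = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)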

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385485] Re: Image metadata dialog has hardcoded segments

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385485

Title:
  Image metadata dialog has hardcoded segments

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  Admin->Images->Update Metadata.

  Note "Other" and "Filter" are not translatable.
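
  As a hedged illustration of what "not translatable" means here (the actual
  fix touches the metadata dialog itself, not this snippet): a string only
  reaches the translation catalogs when it is wrapped in a gettext helper,
  for example:

    from django.utils.translation import ugettext_lazy as _

    FILTER_LABEL = _("Filter")   # extracted by makemessages, shown translated
    OTHER_LABEL = "Other"        # a bare literal like this stays in English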

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382076] Re: Can not add router interface to SLAAC network

2014-12-05 Thread Alan Pevec
** Changed in: neutron/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382076

Title:
  Can not add router interface to SLAAC network

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron juno series:
  Fix Released

Bug description:
  It looks like, after the fix for
  https://bugs.launchpad.net/neutron/+bug/1330826, it is no longer possible
  to connect a router to a subnet with SLAAC addressing.

  Steps to reproduce:

  $ neutron net-create netto
  $ neutron subnet-create --ip_version 6 --ipv6-address-mode slaac --ipv6-ra-mode slaac netto 2021::/64
  $ neutron router-create netrouter
  $ neutron router-interface-add {router_id} {subnet_id}

  The error is:
  Invalid input for operation: IPv6 address 2021::1 can not be directly assigned to a port on subnet 8cc737a7-bac1-4fbc-b03d-dfdac7194c08 with slaac address mode.

  ** The same behaviour occurs if you set the gateway explicitly to a fixed IP address such as 2022::7:
  $ neutron subnet-create --ip_version 6 --gateway 2022::7 --ipv6-address-mode slaac --ipv6-ra-mode slaac netto 2022::/64
  $ neutron router-interface-add {router_id} {subnet_id}
  Invalid input for operation: IPv6 address 2022::7 can not be directly assigned to a port on subnet f4ebf914-9749-49e4-9498-5c10c7bf9a5d with slaac address mode.

  *** The same behaviour occurs if we use dhcpv6-stateless instead of SLAAC.

  1. It should be possible to add a port with a fixed IP to SLAAC/stateless networks.
  2. When a router adds its interface to a SLAAC subnet, it should receive its own SLAAC address by default if a fixed IP address is not specified explicitly (see the sketch below).
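
  As a rough sketch of point 2 (illustrative only, using a made-up MAC and
  the prefix from the reproduction steps), the SLAAC address a router port
  would receive is the EUI-64 address derived from its MAC address inside
  the subnet prefix:

    import netaddr

    def slaac_address(prefix_cidr, mac):
        """Derive the EUI-64 / SLAAC address for a MAC within an IPv6 prefix."""
        eui64 = netaddr.EUI(mac).eui64().value   # 48-bit MAC -> 64-bit EUI-64
        iface_id = eui64 ^ (1 << 57)             # flip the universal/local bit
        network = netaddr.IPNetwork(prefix_cidr)
        return netaddr.IPAddress(network.value | iface_id, version=6)

    print(slaac_address('2021::/64', 'fa:16:3e:12:34:56'))
    # -> 2021::f816:3eff:fe12:3456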

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383916] Re: instance status in instance details screen is not translated

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1383916

Title:
  instance status in instance details screen is not translated

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  In Project/Admin->Compute->Instances->Instance Detail the status is
  reported in English.  This should match the translated status shown in
  the instance table.
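
  A hedged sketch of the idea (not the actual Horizon patch): the detail page
  could run the raw Nova state through a small translated mapping, so it ends
  up showing the same localized label as the table view. The states listed
  below are an assumed subset, for illustration only:

    from django.utils.translation import ugettext_lazy as _

    STATUS_DISPLAY_CHOICES = (
        ("active", _("Active")),
        ("shutoff", _("Shut Off")),
        ("error", _("Error")),
    )

    def display_status(raw_status):
        # Fall back to the raw value for states the mapping does not cover.
        return dict(STATUS_DISPLAY_CHOICES).get(raw_status.lower(), raw_status)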

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1383916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384116] Re: Missing borders for "Actions" column in Firefox

2014-12-05 Thread Alan Pevec
** Changed in: horizon/juno
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384116

Title:
  Missing borders for "Actions" column in Firefox

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) juno series:
  Fix Released

Bug description:
  In Firefox only, some rows are still missing borders in the "Actions"
  column. Moreover, the title row itself still needs to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

